Rational Agency and Uncertain Futures: AI’s Role in the Gulf Region
Abstract
This workshop explores how Artificial Intelligence is transforming rational agency—the ability of individuals, institutions, and societies to make coherent, goal-directed decisions—within environments shaped by complex and intersecting uncertainties. These uncertainties extend beyond incomplete information to include unpredictable technological advances, volatile global markets, environmental stresses, shifting socio-economic conditions, and geopolitical change.

The Gulf Cooperation Council (GCC) offers a distinctive and timely context for this inquiry. Rapid AI adoption is embedded in ambitious national development strategies, large-scale digital infrastructure investments, and accelerated economic diversification agendas. In this dynamic setting, AI is not only a tool for decision support but a force that reframes decision-making processes, reshapes strategic priorities, and influences governance structures.

Drawing on decision theory, probabilistic reasoning, reinforcement learning, and governance studies, the workshop will investigate how AI affects human and institutional rationality under uncertainty. Sessions will address uncertainty modelling, ethical and policy implications, and sector-specific case studies in energy, finance, healthcare, climate adaptation, and smart city planning. The aim is to bridge global theoretical debates with the Gulf's unique realities, producing insights and recommendations to ensure that AI's growing influence enhances resilience, transparency, and cultural alignment in an unpredictable future.
Description and Rationale
Context
Artificial Intelligence (AI) has moved from experimental innovation to a decisive force reshaping economies, governance, and social systems. In the Gulf Cooperation Council (GCC) states, AI is embedded in national development strategies such as Saudi Arabia’s Vision 2030, the UAE’s National AI Strategy 2031, and Qatar’s Artificial Intelligence Strategy. These initiatives position AI not only as a technical advancement but as a transformative driver of economic diversification, public sector modernization, and strategic resilience (Albous, Al-Jayyousi, & Stephens, 2025).
The GCC's AI trajectory, however, unfolds amid severe, multi-dimensional uncertainty. As Andreoletti, Chiffi, and Taebi (2022) show, uncertainty may be factual, methodological, or normative, often resisting probabilistic modelling. These uncertainties include unpredictable technological advances (Nordström, 2022), volatile global markets, environmental pressures, and shifting socio-political expectations in youthful, highly connected societies. Chiffi and Pietarinen (2019) emphasise that decision-making in such contexts is shaped by both epistemic and non-epistemic values—critical where AI influences high-stakes governance.
Philosophical debates on decision-making under uncertainty offer direct policy insights. Roser (2017) questions the practical relevance of the risk–uncertainty distinction, advocating the use of subjective and epistemic probabilities even when they are imperfect. Kozyreva and Hertwig (2021) advance an ecological rationality view, identifying conditions under which heuristics, rather than explicit probabilities, are the better decision tool. Ongaro (2025) highlights persistent disagreement-based uncertainties that complicate governance. Ethical analyses by Weatherson (2014), Zhao (2023), and Lee-Stronach (2021) examine how moral reasoning under uncertainty balances risk, value, and constraints, while Frank (2019) warns that ignoring "fat tail" catastrophic risks can violate scientists' duties—a parallel that applies directly to AI safety.
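Frank's point can be made concrete with a toy simulation (our illustration of the underlying statistical phenomenon, not an example drawn from the cited paper): when losses are heavy-tailed, a small fraction of extreme outcomes carries a large share of the expected loss, so an analysis that truncates the tail systematically understates risk. All distributional parameters below are illustrative.

```python
# Toy illustration: why ignoring "fat tail" outcomes understates expected loss.
# Parameters are illustrative, not calibrated to any real risk.

import random

random.seed(0)
N = 1_000_000

# Thin-tailed losses: exponential with mean 1.
thin = [random.expovariate(1.0) for _ in range(N)]

# Heavy-tailed losses: Pareto with shape alpha = 1.1 (the mean is finite
# but dominated by rare, extreme draws; the sample mean is itself noisy).
alpha = 1.1
heavy = [random.paretovariate(alpha) for _ in range(N)]

mean_thin = sum(thin) / N
mean_heavy = sum(heavy) / N

# Mean of the heavy-tailed sample after discarding the top 0.1% of draws:
# what an analysis that "ignores the tail" would report.
trimmed = sorted(heavy)[: int(N * 0.999)]
mean_trimmed = sum(trimmed) / len(trimmed)

print(f"thin-tailed mean loss:            {mean_thin:.2f}")
print(f"heavy-tailed mean loss:           {mean_heavy:.2f}")
print(f"heavy-tailed mean, tail ignored:  {mean_trimmed:.2f}")
```

With these illustrative parameters, discarding only one draw in a thousand markedly lowers the reported mean loss; the gap between the full and trimmed figures is precisely the risk that a tail-blind analysis never sees.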
In AI research, uncertainty is addressed via probabilistic inference, Dempster–Shafer theory, fuzzy logic, and other models (Saffiotti, 1987; Kanal & Lemmer, 2014). Zanotti, Chiffi, and Schiaffonati (2023, 2024) adapt disaster risk analysis to AI, developing hazard–exposure–vulnerability frameworks for contexts where ex-ante and ex-post assessments are difficult. Langer and Valera (2023) add a human-centred perspective, showing that actionable explanations can improve user responses, though perceived actionability can have mixed effects.
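As a concrete instance of one such formalism, the sketch below implements Dempster's rule of combination from Dempster–Shafer theory in plain Python. The grid-fault frame of discernment and the two "sensor" mass functions are hypothetical values chosen for illustration, not drawn from the cited works.

```python
# A minimal sketch of Dempster's rule of combination. Mass functions
# assign belief mass to subsets of a frame of discernment; mass left on
# the whole frame represents ignorance, which is what distinguishes the
# approach from a single probability distribution.

from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to mass)."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to contradictory evidence
    # Normalise by the non-conflicting mass (Dempster's rule).
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical frame of discernment: is a grid fault present?
FAULT, OK = frozenset({"fault"}), frozenset({"ok"})
EITHER = FAULT | OK  # ignorance: mass committed to the whole frame

sensor_a = {FAULT: 0.6, EITHER: 0.4}           # weak positive evidence
sensor_b = {FAULT: 0.3, OK: 0.5, EITHER: 0.2}  # partly conflicting evidence

for subset, mass in combine(sensor_a, sensor_b).items():
    print(set(subset), round(mass, 3))
```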
Recent work on Artificial General Intelligence (AGI) expands this agenda, exploring its implications for macroeconomics, political economy, and ethics. Studies model AGI's effects on capital–labour substitution, sustainability, social contracts, and governance under volatility (Stiefenhofer, 2025; Stiefenhofer & Chen, 2024; Stiefenhofer, Deniz, Chen, Qian, & Almehthel, 2024; Stiefenhofer & Deniz, 2025; Giesl & Deniz, 2025), alongside technical analyses of stability, probabilistic modelling, and machine learning dynamics.
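One standard building block in such models is the constant elasticity of substitution (CES) production function, Y = (a K^ρ + (1 − a) L^ρ)^(1/ρ) with elasticity of substitution σ = 1/(1 − ρ). The minimal sketch below uses illustrative parameter values, not the calibrations of the cited studies, to show how the labour share of income falls as AI capital becomes more substitutable for labour (ρ → 1).

```python
# A minimal CES sketch of capital-labour substitution. As rho -> 1,
# AI capital and labour become near-perfect substitutes and the labour
# share of income declines. All parameter values are illustrative.

def ces_output(K, L, a=0.4, rho=0.5):
    """CES aggregate output Y = (a*K^rho + (1-a)*L^rho)^(1/rho)."""
    return (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

def labour_share(K, L, a=0.4, rho=0.5):
    """Labour income share under competitive factor pricing:
    MPL*L/Y = (1-a)*L^rho / (a*K^rho + (1-a)*L^rho)."""
    return (1 - a) * L**rho / (a * K**rho + (1 - a) * L**rho)

K, L = 4.0, 1.0  # AI capital stock large relative to labour
for rho in (0.1, 0.5, 0.9):
    sigma = 1 / (1 - rho)
    print(f"rho={rho:.1f} (sigma={sigma:.1f}): "
          f"Y={ces_output(K, L, rho=rho):.2f}, "
          f"labour share={labour_share(K, L, rho=rho):.2f}")
```

Holding the capital-labour ratio fixed, raising ρ (greater substitutability) lowers the labour share in this sketch, which is the qualitative mechanism behind concerns about AGI-driven shifts in income distribution.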