
Ethics, Philosophy and Politics of AI for Sustainability ‘SustAIn’
30 June – 03 July 2026
A five-day interdisciplinary summer school on ethical, philosophical, and political challenges of AI for sustainability, combining lectures and workshops on trust, bias, risk, and environmental impact.
Keywords: Artificial Intelligence, Sustainability, Ethics of AI, Explainability, Bias, Risk, Trust, Science and Technology Studies, Policy-Making, Environmental Governance, Responsible Innovation
Location
ETH Zurich
Participants
Application is open to Master's and PhD students of the member universities of the IDEA League Alliance.
Expenses
Travel expenses for students coming from abroad will be covered by each IDEA League partner university.
Lunches, one social dinner and accommodation for all students will be covered by the hosting university (ETH Zurich).
Requirements
- Short CV (résumé)
- Motivation letter
The call for sustainable development is the outstanding challenge of the 21st century. The demand to organize present human society so that it does not undermine the well-being of future generations or damage the environment has become ever more pressing in light of the negative effects of climate change and the pressures of an ever-growing, fast-moving world population. Facing impending and long-term risks at both the socio-economic and environmental levels requires timely action from the scientific community as well as from every member of society. Owing to the multi-faceted relationship between human activities and natural resources, and to the diversity of socio-economic conditions across the planet, policy-making for sustainability must deal with a complex system of data collection and analysis.
Artificial Intelligence (AI) can handle very large and highly complex datasets and thereby inform decision-making processes. As such, it can serve as a fruitful tool for pursuing the declared Sustainable Development Goals, and its practical applications have in fact multiplied. For instance, AI can contribute to environmental sustainability by improving mitigation measures to reduce carbon emissions and by supporting effective strategies for adaptation to climate change; moreover, powerful computational methods help identify optimization strategies for the sustainable use of natural resources and for fair distributions of clean water and electricity. At the socio-economic level, AI has also become a valuable source of clinical information, as attested by its appeal in precision medicine and in the design of healthcare plans. Notwithstanding such spreading enthusiasm for the potential of AI, its actual ability to enhance sustainability suffers from various limitations, which are subject to open discussion in the scientific, philosophical, and sociological literature.
1) For one, the opacity of AI algorithms generates deep uncertainty about the validity of their results, thereby prompting the question of how to enhance explainability.
2) Second, AI techniques are known to be affected by different types of biases. Hence, they can lead to unjust and unwarranted inequalities in the collection, analysis, and usage of data, and thus reinforce social inequality. This raises the issue of how, and when, one can trust AI-based approaches for making fair and sustainable decisions.
3) What is more, lack of trust appears particularly alarming when it comes to making decisions under the risk of adverse, possibly catastrophic, events. As the use of AI-related technologies becomes increasingly widespread across society, awareness of risk and trust in AI have an impact at the broader public level too, over and above pending disagreements within the community of experts (i.e., sustainability scientists and AI engineers).
4) Finally, and perhaps ironically, there is a paradox concerning sustainable AI: while promising to aid sustainable decisions, computationally powerful algorithms carry heavy environmental and financial costs. In fact, to borrow terminology introduced by van Wynsberghe (2021), besides the prospect of developing “AI for sustainability”, one must also face the problem of the “sustainability of AI”. Given the global goal of reducing energy consumption, this alleged paradox poses ethical and societal questions about how to trade off the costs of training and tuning AI-based algorithms against the environmental costs of achieving other sustainability goals, such as ensuring a wider and more equal distribution of electricity and gas, or even against the financial costs of allocating resources for healthcare.
These limitations of sustainable AI call for further work toward a better understanding of the social and environmental sustainability of AI, and for an extensive discussion of its consequences for the relevant processes of policy-making. To this end, contributions from the Social Sciences and Philosophy are needed to develop a scientifically informed ethics of AI for sustainability.
Schedule
JUNE 30, 2026
9:30-10:00 Welcome and Introduction
Session on Philosophy of Artificial Intelligence
10:00-11:00 Lecture 1. Karl de Fine Licht (Chalmers): “Trust, Trustworthiness, and Artificial Intelligence”
11:00-11:30 Coffee break
11:30-12:30 Lecture 2. Andrea Gammon (Delft): “AI and Sustainability”
12:30-14:30 Lunch
Session on Uncertainty and Risk
14:30-15:30 Lecture 3. Giacomo Zanotti (Milan): “Facing uncertainty: AI systems as experimental technologies”
15:30-16:00 Coffee break
16:00-17:00 Lecture 4. Behnam Taebi (Delft): “Risk and normative uncertainties”
19:00 Social dinner
JULY 1, 2026
Session on Politics and AI regulations
10:00-11:00 Lecture 5. Stefan Böschen (Aachen): “Always trouble with risks: ambitions and restrictions of risk-based approaches to AI regulation”
11:00-11:30 Coffee break
11:30-12:30 Lecture 6. Ben Wagner and Marie-Therese Sekwenz (Delft): “Risk management for Democracy: the EU Digital Services Act”
12:30-14:30 Lunch
Session on Ethics of AI and Responsibility
14:30-15:30 Lecture 7. Alessandro Blasimme (Zurich): “Ethics in the age of artificial intelligence: rethinking norms, rights and responsibilities”
15:30-16:00 Coffee break
16:00-17:00 Lecture 8. Saskia Nagel (Aachen): “Reflection on taking responsibility”
17:30 Field trip
JULY 2, 2026
10:00-11:00 Discussion led by Fabio Fossa (Milan) on “Moral dilemmas of Sustainable AI”
11:00-11:30 Coffee break
11:30-13:00 Workshop on final group presentations
Students: Rest of the day off
Faculty: Meeting of the Ethics Working Group + dinner
JULY 3, 2026
10:00-12:30 Group presentations
12:30-13:00 Closing remarks and final greetings
Learning Objectives
- Understand ethical and political dimensions of AI for environmental sustainability
- Analyze risks, biases, and uncertainty in AI systems
- Develop critical thinking on explainability and trust
- Apply philosophical and STS frameworks to real-world cases
- Promote responsible and sustainable use of AI technologies
