Panel
1. Uneven Geographies, Ecologies, Technologies and Human Futures
AI governance efforts have focused on safety, algorithmic discrimination, privacy, and explainability, all of which are extremely important. Yet it is equally essential that the industry remains competitive, upholds the values of liberal democracy, does not compromise national security, and prioritises sustainability. Effective AI governance frameworks should factor in these considerations and strengthen the global harmonisation of governance efforts. The development of AI systems requires inputs such as data, computational resources, models, and applications, whether for purpose-specific machine learning applications or general-purpose AI systems. These inputs can be visualised as stages in a supply chain, with data and computation feeding into the model, which in turn supports the applications. While the ideal situation would be competition at each stage of the supply chain, in practice many of these stages are vertically integrated and controlled by a single company. It is therefore vital that the sector stays competitive and that early entrants are prevented from entrenching their dominance through regulatory influence. Such competition also helps ensure the existence of AI models that prioritise transparency, explainability, and accessibility, which in turn uphold the values of a liberal democracy. This paper suggests that, in a rapidly changing world order, a resilient supply chain for the building blocks of AI systems is essential to maintaining techno-strategic autonomy. The environmental consequences of training and operating these models, including their energy and water use, must also be factored into governance decisions. Based on these considerations, this paper lays down a first-principles approach to reimagining AI governance.
Bharath Reddy
The Takshashila Institution, India