In the debate about AI technologies, the focus usually lies on their development and adoption, and less on the associated risks and challenges. AI is seen as key to solving major global challenges, while the governance questions surrounding its ethical and responsible use are ignored.
AI technologies hold great promise. However, their promised solutions overshadow the prerequisites: legitimate institutions, regulatory frameworks to govern AI technologies, and adequate human capacity and infrastructure to monitor and audit the impact of AI on people. It is therefore essential to develop an AI governance blueprint that meets the key requirements for the responsible deployment of AI and can be integrated into the developmental agenda of developing nation states. AI governance, which defines policies and establishes accountability for the creation and deployment of AI systems, should be differentiated for each sector. The blueprint could act as a starting point.
AI and the cause for optimism in the developing world
Developing economies are excited by the prospects of AI technologies and their ability to provide planning intelligence for key development challenges in spheres such as water, food and health security. Developing nation states view AI as a smart solution that will help them join the fourth industrial revolution or, simply put, take advantage of the global digital economy. AI is seen as a useful tool to circumvent existing economic inefficiencies such as corruption within the system and to provide clean access to public services. One example is the AI chatbot that helps Kenyans interact with local authorities to prepare for, respond to and recover from disasters. The scope of AI solutions for addressing global challenges makes them too enticing to pass up.
But this is just a small-scale AI application, and too often the developing world focuses only on the good solutions which AI systems offer. It is important to note that most of the research and development in AI technologies – whether in algorithm design, data digitisation, digital infrastructure development or ethics and the impact of AI on society – is taking place in the developed world. The developing world is trying to catch up with and understand these AI-based technologies without putting adequate policies and regulations in place. For example, 193 nation states have adopted UNESCO's Recommendation on the Ethics of Artificial Intelligence. However, it is mostly EU countries that are discussing and instituting policies to implement AI ethics. The adoption of AI systems will therefore remain a challenge, and the short cut is to buy off-the-shelf solutions from tech companies without adequate governance structures in place and with minimal understanding of the risks and challenges associated with AI systems. For example, the black-box nature of AI algorithms, i.e. the difficulty of explaining in simple terms how the system got from input A to output B, makes auditing and evaluating AI systems nearly impossible.
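To make the audit problem concrete, here is a minimal sketch (in Python, with synthetic data and an invented hidden rule, none of which come from this article) of what an external auditor can actually do with a black-box model: query it for predictions and probe it indirectly, for instance with permutation importance, because the model itself offers no human-readable path from input to output.

```python
# Minimal sketch: why black-box models are hard to audit (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # five anonymous input features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # hidden rule unknown to the auditor

model = GradientBoostingClassifier().fit(X, y)

# The model yields an answer, but no human-readable path from A to B:
print(model.predict(X[:1]))   # e.g. [1], with no explanation of why

# Permutation importance is one of the few generic probes an external
# auditor can run: shuffle each feature and measure how accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {importance:.3f}")
```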
AI governance and its dependence on data governance
AI is driven by data: the more high-quality, standardised datasets are available, the more accurate the results an AI algorithm can produce. Data presents multiple challenges, and personal data must be treated with the utmost confidentiality and care, because the use of personal data affects citizens' fundamental rights to privacy and self-determination. Other challenges, such as data sovereignty, the security of datasets and questions around the ethical and responsible use of data, must also be given due consideration. The dependence of AI systems' performance on high-quality, machine-interpretable datasets makes the commercial application and implementation of AI systems difficult, as the algorithms require unbiased and comprehensive datasets.
Datasets in the developing world are usually low-quality and fragmented, with little or no standardisation. For example, in India – which has a good digital strategy mandate – the data itself is distributed and not standardised, as each state government uses a different methodology with unique parameters for data collection. Public sector institutions generally lack the knowledge, human capacity or technical expertise to put in place the data governance required for the efficient use of AI systems. This creates a lack of public trust as well as transparency issues around AI's possible threats and risks, especially with regard to the use of datasets by government agencies. It is also not economically viable for a single nation state to invest the human and financial resources and capacities needed to develop good-quality, AI-compatible datasets. Thus, the lack of good-quality, standardised datasets remains a key bottleneck and has limited the commercial use of AI systems in the Global South.
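To illustrate the standardisation bottleneck, the following sketch (with invented district names, column headers and units, and assuming the pandas library) harmonises two provincial datasets that record the same indicator under different schemas, the kind of mapping that must happen before such data can feed any AI system.

```python
# Illustrative sketch: harmonising two non-standardised datasets.
# District names, column headers and units are invented for the example.
import pandas as pd

# Province A reports household water access as a percentage, by district.
province_a = pd.DataFrame({
    "district": ["Alpha", "Beta"],
    "water_access_pct": [62.5, 48.0],
})

# Province B records the same indicator as a fraction, under other headers.
province_b = pd.DataFrame({
    "dist_name": ["Gamma", "Delta"],
    "access_fraction": [0.71, 0.55],
})

# Map both sources onto one agreed national schema before any AI use.
a = province_a.rename(columns={"district": "district_name"})
b = province_b.rename(columns={"dist_name": "district_name"})
b["water_access_pct"] = b.pop("access_fraction") * 100  # convert units

standardised = pd.concat([a, b], ignore_index=True)
print(standardised)
```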
The risks of AI systems demand robust AI governance frameworks
As the use of AI requires highly specialised technical expertise and involves complex processes, deploying AI systems for a large population demands an in-depth understanding of the risks and challenges they pose. For example, the unethical use of AI systems can exacerbate existing inequalities within a society and widen the economic divide if these systems are biased and opaque. Where AI systems are used to run public welfare programmes, benefits can be misallocated if the training datasets of the AI algorithms are biased towards a particular section of the population. Thus, the first step towards adopting any AI-based system in the public sphere in the developing world would be to devise robust and tested AI governance frameworks that incorporate a national data protection regime, address the issues surrounding the fundamental rights of citizens and ensure the judicious use of AI systems with regard to data sovereignty and citizens' privacy concerns. In most of the developing world, such frameworks are either still at the discussion stage or absent altogether.
AI systems could aggravate malpractices such as censorship, the biased distribution of financial services, false arrests and prosecutions based on faulty facial recognition, and the misuse of surveillance. AI bias has become a major global concern, and one of its key causes is the lack of diversity in the datasets used during the training and development phase of AI applications. The unethical and unmonitored use of AI applications can thus lead to misidentification and biased judgements. The ethical and responsible use of AI, guided by principles such as transparency, accountability, fairness, the mitigation of bias, privacy and security within the socio-economic framework, must be made an essential requirement in the adoption of any AI system. In the absence of such regulatory and legal frameworks for AI, the risks and challenges can multiply very fast. All AI governance frameworks must put citizens' fundamental rights to the fore when instituting legal and regulatory processes in the context of AI systems.
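As one concrete example of a check such a framework could mandate, the sketch below (using synthetic decisions and invented group labels, not real data) computes the gap in approval rates between two population groups, a simple demographic-parity test that can serve as a first-pass bias signal for a welfare-allocation system like the one described above.

```python
# Minimal bias-audit sketch: demographic parity on synthetic decisions.
# Group labels and approval rates are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])

# Suppose a welfare-eligibility model trained on skewed data approves
# group A more often than group B for otherwise similar applicants.
approved = np.where(group == "A",
                    rng.random(n) < 0.80,
                    rng.random(n) < 0.55)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval rate, group A: {rate_a:.2%}")
print(f"approval rate, group B: {rate_b:.2%}")
print(f"demographic parity gap: {rate_a - rate_b:.2%}")
# A large gap is a first-pass signal that the system, or its training
# data, needs closer scrutiny before deployment in a welfare programme.
```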
AI infrastructure: implementing AI systems is expensive
Good and reliable AI systems demand good AI infrastructure. Some aspects of essential and critical AI infrastructure, such as data repositories, the setting-up of AI supercomputers or processing power units such as local data centres, and network infrastructure (competitive cloud computing), are either absent or inadequate in developing nations. Most of these big-ticket items are thus provided or supported by private big-tech companies from the US, China or the EU. As AI infrastructure installed by private companies could pose a security risk, governance mechanisms are needed to ensure that big-tech companies do not misuse data or engage in unethical practices and that AI infrastructure is not compromised in any way. Ownership of AI infrastructure must be clearly demarcated and guaranteed by a state's compliance and audit procedures. For example, clear-cut national guidelines must be established for cloud computing, as currently most data are stored in clouds (which are easier to access and use).
The involvement of big-tech companies in providing or building good AI infrastructure must be aligned with the administrative laws, legal processes and fundamental rights of citizens in a democratic nation state. For example, when establishing centres of excellence (CoE) in data science and AI, an adequate knowledge and training component must be included to facilitate knowledge transfer from vendor to stakeholder and cross-learning, along with adequate checks and balances and good documentation so that ownership of the AI infrastructure can be transitioned from contractor to stakeholder. In the absence of good AI governance frameworks, the building of AI centres by the private sector may give rise to new ethical risks to privacy, security and public trust.
AI governance: Framework recommendations for a developing nation state
The governance of AI systems is complex, and developing nations must invest in basic conceptual and normative frameworks as well as create a multilateral and multi-layered approach best suited to their individual requirements. All AI governance frameworks must be rooted in identifying the risks and challenges associated with AI and in serving the needs of a modern, heterogeneous society.
STEP 1: Framing an AI governance approach requires investment – in terms of both funding and time – in research, technical expertise and sector-specific knowledge. It is highly recommended that the state invest, with medium- and long-term goals, in building a minimum level of institutional capacity, domain expertise and technology partnerships in research and development, underpinned by the state's national interest.
STEP 2: A partnership framework in AI governance could help facilitate the developmental agenda of the Global South and foster inclusion by identifying barriers to the adoption of AI technologies and by setting up interoperable processes and procedures that underline the benefits of AI for the social good.
a. To help overcome constraints in resources and capacities in AI governance, regional governance frameworks, for example in South Asia and East Africa, would be economically advantageous for developing economies. A common minimum AI governance architecture that underlines the requirements of a regional bloc could be a good start. A partnership model led by regional associations such as the South Asian Association for Regional Cooperation (SAARC) and the Association of Southeast Asian Nations (ASEAN) would be best placed to address regional aspirations aligned with the Sustainable Development Goals (SDGs) in a national context.
b. Lessons from cross-learning between the Global North and the Global South should be drawn upon in a multilateral developmental partnership to test and implement a common minimum AI governance framework. Potential partners include Singapore, the US, Canada, the EU and Japan. This would help to integrate multi-layered, ethical and responsible AI governance practices and to identify sector-specific requirements for AI systems.
More information about AI can be found in the following articles: Proposal for a Regulation of the European Parliament and of the Council, the OECD AI Policy Observatory, and Training on AI Innovation for Disaster Risk Reduction in Kenya.
About the author:
Gaurav Sharma works as an advisor for AI at GIZ in India and is active in several different forums, networks and platforms dealing with the climate crisis and structural unemployment as well as supporting young leaders in India while fostering the exchange of ideas.
Published on May 4, 2022.