How do policymakers around the world feel about Artificial Intelligence (AI)? Despite the buzz around new technology and applications, there’s still a gap between rhetoric and reality when it comes to public policy and services. In a survey on the state of AI among governments in mostly high-income countries, researchers found that only 28 percent were scaling AI pilot projects into everyday use.
Unsurprisingly, preparedness to use AI in public services appears lower still in the Global South, according to the Government AI Readiness Index. Within Sub-Saharan Africa, which ranked the lowest among regions in the Index, Mauritius is the only country that’s published a national AI strategy, though Benin, Ghana, Nigeria, and Rwanda are currently developing strategies.
Despite public sector sluggishness in adopting AI technologies, this year feels like it could be a tipping point. As AI interfaces such as OpenAI’s ChatGPT, image generators like DALL-E 2 and Midjourney, and others penetrate the mainstream with their ease of use, these tools are becoming impossible to avoid. This breakthrough moment echoes the social media explosion two decades ago. The buzz is attracting investors, as well as increased interest in AI for Good from international organizations, foundations, and private sector partners who see win-win opportunities for efficiencies, innovation, and new markets.
From proving that ChatGPT can pass the Wharton School’s MBA exam to tricking it into saying racist epithets, chatbots are a mirror into our souls and societies. Of course, AI runs on data, and so, predictably, these tools only work for places, people, and things that are represented in data. For example, the Mozilla Internet Health Report 2022 shows that, among the AI training datasets used since 2015, Egypt is the only African country represented.
A lot of the recent fascination with AI chatbots comes from how easily these sophisticated models mimic sentience. The puppet appears to have no strings, yet behind the curtain there are always humans. And the conditions under which these humans work can be appalling, as with the Kenyan workers paid $2 an hour to process vast amounts of traumatizing content from the darkest corners of the internet.
Given the real potential and hyped-up buzz surrounding new technologies, the question for the development and policy communities is twofold: what to make of all these impressive uses of AI, and whether advances in machine learning can be leveraged to improve people’s lives without worsening inequality.
The bigger picture on AI for public services
It’s easy to get hung up on AI chatbots as the most arresting and user-centric recent example of what these tools can do. But there are extensive real-world applications of AI working behind the scenes in agriculture, economic development, healthcare, and many other areas.
The tangible applications and benefits of AI for policymakers and public services are now coming into focus in every sector. The mainstreaming of processes like Optical Character Recognition (OCR), which converts photos of handwritten records into digital text, makes it possible to rapidly digitize decades of paper records and run trend analysis on the converted information. Or take real-time language translation and interpretation: Meta’s latest AI model, NLLB-200, can translate between 200 languages. It’s easy to imagine the applications of this technology, from simplifying international business transactions to easing dialogue among policymakers. Complex AI-driven trend analysis in areas like food security, poverty mapping, and industry has the potential to make scarce resources go further to improve lives and livelihoods.
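To make the first two capabilities concrete, here is a minimal sketch of the pipeline described above: OCR on a scanned record, followed by machine translation with Meta’s NLLB-200 model. The file name, model checkpoint, and language codes are illustrative assumptions, not a production recipe:

```python
# Minimal sketch: digitize a scanned record, then translate it.
# Assumes the Tesseract engine plus the pytesseract, Pillow, and
# transformers packages are installed; "scanned_record.png" is a stand-in.
import pytesseract                 # wrapper around the Tesseract OCR engine
from PIL import Image
from transformers import pipeline  # Hugging Face interface to NLLB-200

# Step 1: OCR — turn a photographed or scanned page into digital text.
# (Tesseract handles printed text well; handwriting needs specialized models.)
text = pytesseract.image_to_string(Image.open("scanned_record.png"))

# Step 2: translate the digitized text, here English -> Swahili,
# using NLLB-200's FLORES-200 language codes.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="swh_Latn",
)
print(translator(text, max_length=400)[0]["translation_text"])
```

In practice, a pipeline like this would add error correction and human review before the converted records feed any downstream trend analysis.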
Yet amid the excitement of this breakthrough moment, we can’t lose sight of the need for a human-centered approach to handling the onslaught of AI tools. Here are three key risks to human rights posed by the next wave of Artificial Intelligence and suggestions on how the data for development community can mitigate them:
Lack of transparency and accountability means automated bias goes unchecked.
In the US, medical algorithms were found to put Black patients lower down on transplant lists when all other factors were the same, a shocking injustice given that Black Americans are four times more likely to suffer kidney failure than their white peers. It’s easy to obscure culpability in decision-making by deferring to a computer, but algorithms are only as good as their training data, which have societal biases built into them, as the sketch below illustrates. Seeking redress for wrongs inflicted by AI models is difficult. The vacuum in regulation and legal frameworks governing AI tools persists, and these problems are likely to be more acute in low- and middle-income countries, where resource constraints limit regulatory capacity, academic centers of excellence, and technical skills.
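The mechanism behind such cases is often a biased proxy label: the model is trained to predict something correlated with, but systematically skewed against, the outcome that matters. Here is a minimal sketch on synthetic data (the groups, coefficients, and thresholds are all illustrative assumptions, not the actual medical algorithm) showing how a skewed proxy produces unequal outcomes at equal need, and the kind of audit that transparency about training data makes possible:

```python
# Synthetic illustration of proxy-label bias and a simple group audit.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # two demographic groups, 0 and 1
need = rng.normal(0, 1, n)             # true (unobserved) clinical need

# Biased proxy: historical data systematically under-records need for
# group 1, so a model trained on it inherits that skew.
proxy = need - 0.5 * group + rng.normal(0, 0.5, n)

flagged = proxy > 1.0                  # "algorithm": flag the highest scores
truly_needy = need > 1.0

# Audit: at equal true need, what share of each group gets flagged?
for g in (0, 1):
    mask = group == g
    share = (flagged & truly_needy & mask).sum() / (truly_needy & mask).sum()
    print(f"group {g}: truly needy patients flagged = {share:.2%}")
```

Running the audit shows group 1 flagged far less often at identical need, which is exactly the pattern that access to training data and model outputs lets outside reviewers surface.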
A pathway forward: There needs to be much more international cooperation and dialogue on adapting international standards to contexts beyond the U.S. and Europe, much as privacy regulations were adapted in Africa and other regions in the wake of GDPR. We also need global advocacy campaigns that pressure politicians and businesses to create cultures of transparency around the data and decision-making that underpin algorithms.
Use of AI will concentrate wealth among early adopters.
As AI revolutionizes markets in healthcare, law, publishing, commerce, and other industries, high-value jobs and profits will likely flow to those who can capitalize on its labor-saving potential, while employment losses will fall on those whose routinized work can be replaced by algorithms. Meanwhile, the most unenviable routinized tasks that still require humans, such as manually filtering hate speech out of AI training data, will be offshored to places where wages are lower.
A pathway forward: Many of the constraints on AI adoption in the Global South are structural, but as the Index report emphasizes, every country can and must improve its AI capabilities to address the scale of the challenges the world is facing. All countries need global investment in data science capacity-building and academic offerings fit for this new reality. Governments and donors must therefore dedicate more funding to data systems that support ethical data use and promote participation and inclusion from start to finish. Countries also need to update labor protections to be fit-for-purpose for tech industries so that, for example, “ghost workers” harmed by unsavory working conditions can gain better protections.
Opaque AI systems are rife with potential for abuse.
It’s taken a quarter century for us even to begin to understand the internet’s full social impact. Mainstream AI will take this to the next level, for better and worse. From digital warfare, terrorism, and ramped-up surveillance to supercharged social, religious, and ethnic tensions, the age of deepfakes, chatbots, and algorithmic bias will present new challenges in every society, but especially in those without robust regulatory frameworks and with limited technical and critical capacities.
A pathway forward: To preserve human rights, dignity, and agency, we need to be ready for nefarious attempts to leverage this technology to sow discord and to facilitate abuse of marginalized or disadvantaged groups. As a community, we must be clear about what we stand for and are striving toward, rather than merely defending the status quo. AI for Good and other like-minded initiatives are working to map this terrain.
Ultimately, widespread data confidence is a building block of a fair data future. Technology should serve human wellbeing and agency rather than consolidate power. If we are pragmatic about the opportunities and clear-eyed about the risks, policymakers and practitioners can use AI’s zeitgeist moment to push for measures that leverage the technology to serve humanity and foster dignity rather than undermine it.