Harnessing AI for good: opportunities and challenges


Jun 6, 2024 - 16:00

The AI for Good Global Summit 2024 took place on May 30-31 in Geneva, bringing together a group of over 2,500 participants representing some 145 countries. 

In her opening remarks, ITU Secretary-General Doreen Bogdan-Martin set the tone for the event by explaining the need for inclusivity in AI development. 

She said, “In 2024, one-third of humanity remains offline, excluded from the AI revolution, and without a voice. This digital and technological divide is no longer acceptable.” 

The summit showcased examples of AI applications, such as Bioniks, a Pakistani-led initiative designing affordable artificial limbs, and Ultrasound AI, a US-based women-led effort improving prenatal care.

These contribute to a vast body of projects that truly showcase how AI can accelerate disease diagnosis, help develop new drugs, restore movement to those who lost it through injury or disease, and much more. 

AI for Good also dived into how AI can help attain the UN’s Sustainable Development Goals (SDGs), which set out broad and far-reaching plans to grow and modernize less-developed nations while alleviating poverty, addressing climate change, and tackling other macro-scale problems. 

Melike Yetken Krilla, head of international organizations at Google, discussed several projects in which Google data and AI are used to track progress toward the SDGs and map it around the globe, including a collaboration with the World Meteorological Organization (WMO) to create a flood hub for early warning systems.

AI is also helping conservationists protect the environment, from the Amazon rainforest to puffins off British coastlines and salmon in Nordic waterways.

AI’s potential for good – as per the Summit’s sentiment – is clearly substantial.

But as ever, there is another half to the story. 

AI’s push and pull

Rather than one-way traffic, AI threatens both to shatter digital divides and to deepen them.

For one, there is strong evidence that AI entrenches existing divisions between more and less technologically advanced countries. Studies from MIT and the Data Provenance Initiative found that most datasets used to train AI models are heavily Western-centric.

Languages and cultures from Asia, Africa, and South America remain severely underrepresented in AI technology, resulting in models that fail to accurately reflect or serve these regions.

Moreover, AI technology is expensive and hard to develop, leaving most of the control in the hands of a select few companies and institutions. 

Open-source AI projects provide a lifeline for organizations worldwide to develop lower-cost, sovereign AI, but they still require computing power and technical talent that remain in high demand. 

AI model bias

Another tension in this push and pull is bias. When AI models are trained on biased data, they inherently adopt and amplify those biases. 

This can lead to severe consequences, particularly in healthcare, education, and law enforcement. 

For instance, healthcare AI systems trained predominantly on Western data may misinterpret symptoms or behaviors in non-Western populations, leading to misdiagnoses and ineffective treatments.
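The mechanism is easy to demonstrate. Below is a deliberately toy sketch (entirely hypothetical data, not any real system discussed above) of how a model that simply learns from a skewed training set reproduces the majority population's pattern, regardless of who it is asked about:

```python
from collections import Counter

# Toy training set: (symptom, region, diagnosis) triples.
# 90% of examples come from "Region A"; "Region B" is barely represented,
# so the model never learns that the same symptom can mean something
# different there.
training_data = [("fever", "A", "flu")] * 9 + [("fever", "B", "malaria")]

def predict(symptom):
    """A trivial 'model': return the most common diagnosis ever seen
    for this symptom -- the majority population dominates the answer."""
    labels = [dx for s, _, dx in training_data if s == symptom]
    return Counter(labels).most_common(1)[0][0]

print(predict("fever"))  # prints "flu" -- wrong for a Region B patient
```

Real diagnostic models are vastly more complex, but the statistical pull is the same: whatever dominates the training data dominates the predictions.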

Researchers from leading tech companies like Anthropic, Google, and DeepMind have acknowledged these limitations and are actively seeking solutions, such as Anthropic’s “Constitutional AI.” 

As Jack Clark, Anthropic’s policy chief, explained: “We’re trying to find a way to develop a constitution that is developed by a whole bunch of third parties, rather than by people who happen to work at a lab in San Francisco.” 

Labor exploitation

Another risk to harnessing AI for good is the labor exploitation of data labelers and annotators, who sift through thousands of pieces of data and tag features for AI models to learn from.

The psychological toll on these workers is vast, especially when tasked with labeling disturbing or explicit content. This “ghost work” is crucial for the functioning of AI systems but is frequently overlooked in discussions about AI ethics and sustainability.

For example, former content moderators in Nairobi, Kenya, lodged petitions against Sama, a US-based data annotation services company contracted by OpenAI, alleging “exploitative conditions” and severe mental health issues resulting from their work.

There have been responses to these challenges, showing how AI’s threats to vulnerable populations can, with collective action, be mitigated. 

For example, projects like Nanjala Nyabola’s Kiswahili Digital Rights Project aim to counteract digital hegemony by translating key digital rights terms into Kiswahili, enhancing understanding among non-English speaking communities in East Africa. 

Similarly, Te Hiku Media, a Māori non-profit, collaborated with researchers to train a speech recognition model tailored for the Māori language, demonstrating the potential of grassroots efforts to ensure AI benefits everyone.

A balancing act

The push and pull of AI’s benefits and drawbacks will be tricky to balance in the years ahead. 

Rather than representing a new paradigm of international development, AI is a continuation of decades of discourse investigating the impacts of technology on global societies. It’s both highly universal and highly localized. 

Large-scale AI tools like ChatGPT can provide a ‘blanket’ of encyclopedic knowledge and skills that billions can access worldwide.

Meanwhile, smaller-scale projects like those described above show that, combined with human ingenuity, we can build AI technology that serves local communities. 

Over time, the key hope is that AI will become simultaneously cheaper and easier to access, empowering communities to use it as they like, on their own terms, and with their rights intact. Of course, that could also include rejecting AI altogether. 

AI – both the generative models created by tech giants and traditional models created by universities and researchers – can certainly offer societal benefits. 

There is much to be skeptical and hopeful about. Such was the promise of other technologies before AI, from the printing press to the combustion engine.

AI might extend more deeply into society than other technologies, but it remains under human control for now.

The post Harnessing AI for good: opportunities and challenges appeared first on DailyAI.