How Unicef balances disruptive AI with safety
For Unicef’s AI lead, pairing disruptive artificial intelligence with safety is not “paradoxical”.
“Safety and responsibility and the good use of something are not contradictory,” says Unicef’s Irina Mirkina.
If they were, Mirkina adds, it would be like believing “that cars should not have seatbelts because seatbelts somehow intervene with how good cars are.”
In other words, she believes that regulation will not stop AI from becoming a technology that benefits the world; but, like a car without a seatbelt, AI without safety precautions can be hazardous, particularly if mishandled.
“If we are building algorithms that supposedly help humans in their work, lives, healthcare, education, but in a way that harms some people, we are not actually helping,” she says.
Speaking at London Tech Week, Mirkina explained that, as a humanitarian aid organisation aimed at helping children globally, Unicef approaches artificial intelligence from a human rights perspective.
“When we are talking about technology of any kind, including AI, it’s about where it makes human lives better,” she says. “Where we are building a world in which humans will have better lives. Not robots, not machines, but humans and children.”
Last year, Unicef found that growing inequities, conflicts and climate change have slowed progress in aiding children’s health around the world.
On access to food specifically, the organisation found that severe food poverty affects 181 million children under five, and that around 200 million children under five suffer from stunting or wasting caused by malnutrition.
Additionally, while the under-five mortality rate has fallen by over half since 2000, almost five million under-fives died in 2022.
Unicef reached six million children with treatment for severe wasting in the 15 acutely affected countries last year, exceeding its target of 4.5 million.
Against statistics like these, technologies such as AI may help Unicef reach more children – but the technology needs to bring value for social good, health and education, as well as saving and protecting lives, says Mirkina.
“It also means using technology responsibly,” she adds. “Making sure that the systems we are building are safe, explainable, unbiased, and trustworthy.”
Where charities and enterprises align
From Mirkina’s perspective, there is not a big difference between the approach to AI that Unicef takes as an international public organisation and what private enterprises do.
“It’s still about building a robust system of processes and governance on top of ethical principles,” she says.
Three years ago, Unicef published policy guidance on AI for children, which Mirkina believes is still valid today.
The guidance, published in November 2021, a year before the breakthrough of generative AI solutions such as ChatGPT, outlines the opportunities and risks around AI, as well as “requirements for child-centred AI.”
The report calls on the firms, governments and policymakers that develop and implement AI to ensure the technology is child-inclusive, fair, secure, and accessible.
But “how do we operationalise the policies in practice?” asks Mirkina.
According to the AI lead, each day the team reviews and mitigates risks as it builds its products.
“It’s all about assessing and mitigating risk and impact for every solution and building proper accountability systems.”
AI use cases
Like many other organisations, Unicef is creating its own tools in partnership with technology firms.
“We are not a technology company, we never will be, but what do we actually do with AI?” asks Mirkina.
She reveals that the charity is building software tools ranging from support for healthcare systems to assistive technologies in education, social work and sustainability.
For instance, it is combining satellite data and science to identify where clean groundwater lies beneath dry regions of the world. Called ‘Unicef’s More Water More Life’, the initiative uses satellite imagery and conventional exploration techniques to map deep aquifers – bodies of permeable rock that contain groundwater.
According to the charity, this data-driven approach saves time and money that might otherwise be spent drilling unsuccessfully for water in other locations.
In a 2021 pilot in Ethiopia, the technology almost doubled drilling success rates, from 50% to 92%, improving water access for 1.2 million people, including 74,000 children.
Similarly, the charity is using satellite imagery to map every school in the world and, together with the International Telecommunication Union (ITU), working to ensure children have access to education and knowledge.
According to Mirkina, the organisation’s global map of schools, built with deep learning techniques, will help identify gaps in internet connectivity, serve as evidence when advocating for connectivity, and help national governments optimise their education systems.
The map will also help Unicef measure vulnerabilities and improve its emergency response and resilience against natural disasters and crises.
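Mirkina did not go into technical detail, but the kind of system she describes – a deep learning model that scans satellite image tiles for schools – can be sketched in a few lines. The model, tile size and threshold below are illustrative assumptions, not Unicef’s actual pipeline:

```python
# Minimal sketch of a tile-level "school / no school" classifier for
# satellite imagery. Architecture and shapes are hypothetical.
import torch
import torch.nn as nn

class SchoolTileClassifier(nn.Module):
    """Small CNN that scores 256x256 RGB satellite tiles for school presence."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pool -> (B, 64, 1, 1)
        )
        self.head = nn.Linear(64, 1)           # one logit per tile

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(tiles).flatten(1)).squeeze(1)

if __name__ == "__main__":
    model = SchoolTileClassifier()
    batch = torch.randn(4, 3, 256, 256)        # four fake satellite tiles
    probs = torch.sigmoid(model(batch))        # per-tile probability of a school
    print(probs)                               # tiles above a threshold get flagged
```

In practice, tiles flagged by such a model would be verified by people before being added to a map used for advocacy or emergency planning.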
“For us, it’s balancing the impact and finding valuable use cases,” says Mirkina.
“The value is what we can achieve together when we bring expertise together – ethicists, technologists, human rights workers with actual practical expertise on the ground – and scale this across many countries. I think that’s incredible.”