
AI for Good 2025: Can a robot read your mind?
Artificial intelligence is no longer a distant promise. It is a present force, reshaping industries, societies, and the very fabric of daily life. Yet as AI’s influence accelerates, so too do the risks of leaving its trajectory unchecked.
This year’s AI for Good Global Summit opened in Geneva with a clear message: we need robust standards, international cooperation and a push to build people’s skills for AI’s benefits to be realized for everyone worldwide.
The International Telecommunication Union (ITU) organizes the summit with a broad swathe of United Nations partners as a global showcase for responsible AI.
As ITU Secretary-General Doreen Bogdan-Martin warned, the greatest risk is not AI itself, but deploying it without fully understanding its implications for people and the planet.
“We all need the skills to understand and question the systems we increasingly interact with,” she said at the summit opening on 8 July. “We need to teach — especially young people who are growing up with AI right now — how to discern between performance and understanding.”
AI for human well-being
A recurring theme at the annual summit is AI’s potential to boost healthcare.
One exhibitor, Micol Spitale, Assistant Professor at Politecnico di Milano (Italy), demonstrated a pair of robots she helped develop at the University of Cambridge (UK) to support mental health. One of them, called “Nao,” can identify red-flag signs.
The robot conducts a standard questionnaire that can help patients understand mental well-being risks more clearly.
“This is a very effective tool for engaging children,” Spitale said. “It can be a good alternative to the more intimidating experience of being interviewed by a psychologist.”
A second robot, “QT,” uses facial recognition and speech analysis to boost people’s confidence and help keep their outlook positive.
“It can detect facial expressions and speech patterns,” Spitale said. “It notices if something makes you feel awkward.”
Ethicists were closely involved in developing QT, she added. Built-in safeguards keep the conversation on topic and stop if it goes off course.
Unrequited attachments
As AI becomes more capable, new ethical questions arise. Robots (like any AI platform, humanoid or not) lack human emotions – and that may well be a good trait for them.
“They’re incapable of caring about themselves. That’s very powerful,” observed Rob Knight, founder and robot hardware director at The Robot Studio.
Their detachment makes them a great asset in a humanitarian emergency. But that doesn’t stop humans from bonding with their robots.
“If a robot saves people’s lives, then those people can form an incredibly strong emotional bond with it. That’s got to be managed.”
Knight said the path from prototype to trust is paved with trial, error, and the need for oversight.
He stands next to a life-sized humanoid with an exposed skeleton, intertwined with muscle sinew and gel pouches. It is anatomically realistic, except that a battery sits inside where the internal organs would be.

It has just one eye – an old-fashioned camera lens. A single lens is far less intimidating than a pair of eyes staring back, since a direct gaze triggers a fear response in mammals, including humans. It was also a cost-saving measure at the time.
“I built this about 20 years ago,” Knight says. “This is me learning human anatomy.”
Today, he concentrates on low-cost 3D-printable machines that can do practical tasks – like make a cheese sandwich.
First-world problems?
That kind of convenience, far from being a gimmick, could once again give the edge to early adopters, to markets that hold AI patents, and to people with the skills to tell a robot what to do. That leaves the unconnected third of humanity even farther behind.
Bogdan-Martin, in her opening address, underlined the danger of leaving 2.6 billion people offline while AI advances rapidly.
This digital divide prompted the ITU Secretary-General to ask: How do we ensure AI works for all of humanity?
She called for upskilling, inclusive governance, and a shared understanding of AI’s risks and benefits.
Scaling the summit
This AI for Good Global Summit is the largest since the event launched in 2017, with 15,000 registrations and 150 exhibitors, 100 of them showcasing robotics. The opening day welcomed decision-makers and tech innovators from government, industry players large and small, academic and research institutes, and civil society.
The summit’s scale and unique blend of participants underscore a central truth: the future of AI cannot be shaped in silos.
Challenging the next AI generation
Nowhere is this more evident than in the AI for Good Youth Robotics Challenge, where young innovators aged 10 to 18 program and build robots for disaster response.
National championships in 22 countries, mainly in the Global South, now culminate in a World Cup in Geneva.
On 9 July, the challenge grand finale sees the teams’ creations compete in tasks mirroring real-world humanitarian crises, with blocks at each table representing people, buildings and rubble. The robots – each fully autonomous – must conduct triage and successfully bring patients to hospital.
Explore the AI for Good Global Summit.
Header image credit: ITU