
Irina Mirkina, chief AI scientist at Fugro and an expert for the European Commission Research Executive Agency, speaks on AI ethics at Gitex Global 2025 in Dubai.
Luke Daniel/News24
- Generative AI is lauded as a powerful tool for processing and delivering information.
- But it can “hallucinate” and deliver inaccuracies, especially in edge cases outside what it considers most likely.
- Without proper oversight, those seemingly minor errors can lead to disaster, experts say.
Artificial intelligence (AI) is revolutionising workflows with the promise of superhuman accuracy and efficiency.
However, industries, especially those involved with critical infrastructure, would be wise to view these guarantees with a healthy dose of scepticism, according to one of the world’s top ethical AI strategists.
“Generative AI can hallucinate a little bit in an email, and you’ll be embarrassed, but it’s probably not going to be serious,” explains Dr Irina Mirkina, chief AI scientist at Fugro and an expert for the European Commission Research Executive Agency.
“If the hallucination gets into a technical report and then a government building is constructed based on that technical report, suddenly, when the building collapses, it becomes everybody’s problem, right?”
Speaking on AI ethics at the Gitex Global event in Dubai on Tuesday, Mirkina emphasised the need for expert human oversight amid overpromising AI-model vendors. The “hallucinations” she refers to, which are relatively common in generative models, occur when an AI system generates information that is inaccurate or, in some cases, entirely fabricated.
“The conversation is about the practices around accuracy [and] how we evaluate that, how we audit, how we prevent consequences from happening, and how we mitigate those risks,” says Mirkina.
“It’s not about some existential risk. It’s about making sure that the worst outcomes don’t compromise our investments in AI and in the critical infrastructure.”
Mirkina proposed a “controversial solution” to the problem of AI models that overpromise without consequence. The solution places the burden of responsibility on the vendors of these generative AI systems in the form of financial penalties for damages caused by inaccuracies.
“I think it would be great if we started introducing into every contract, into every procurement agreement that everybody signs for an AI product, penalties for damage caused by AI,” says Mirkina.
“I go to [exhibitor] stands right now, and there are promises of 100% accuracy. If all those promises are true, then you would have no risks, and you would have no problem agreeing to a 100% penalty, right?”
Such penalties would force AI vendors to scrutinise their own claims and strengthen safety checks, according to Mirkina.
“When the money starts talking, we will see a lot less absolutely fantastical promises of fully autonomous systems with PhD-level intelligence, and a lot more practical, guarded, safe implementations.”
And while Mirkina’s advocacy emphasises greater responsibility on AI vendors, it also calls for greater awareness on the part of clients. Human oversight, in the form of qualified and educated workers, will play a pivotal role in vetting data generated by AI.
This is especially important because of generative AI’s fragility in dealing with unexpected variables.
“Generative AI systems are built to predict [what’s most] likely, the top of the bell curve, the most average outcome.
“That means they’re particularly dangerous in any situation that’s out of the ordinary, with any exceptions or too many variables. Particularly in those edge cases, you need a human expert to actually recognise that.”
Although government policy and regulation can help create a safer AI environment, Mirkina believes the private sector will ultimately be left to sort these issues out on its own while governments catch up to these developments.
“A lot of it starts with awareness and… well, maybe, there are things that we don’t actually need to do with AI just yet.”
Luke Daniel is in Dubai as an invited guest of the Gitex Global 2025 conference.