
A call for transparent AI: ‘Computer says no’ is not enough

Artificial intelligence (AI) models can become so complex that we no longer understand the output. This undermines the trust of both companies and customers. AI expert Evert Haasdijk explains the importance of transparent AI.

Media coverage of AI tends to be either euphoric or alarming.

In the first, AI is presented as a divine technology that will solve all our problems, from curing cancer to ending global warming. In the second, Frankenstein- or Terminator-inspired narratives depict AI as a technology that we cannot keep under control and that will outsmart humans in ways we cannot foresee – killing our jobs, if not threatening the survival of humanity.

This paradox is caused by the fact that some AI technologies are hard to explain, says Evert Haasdijk. He is a senior manager at Deloitte and a renowned AI expert who worked as an assistant professor at Vrije Universiteit Amsterdam and has more than 25 years of experience in developing AI-enabled solutions.

“Some AI technologies are pretty straightforward to explain, like semantic reasoning, planning algorithms and most optimization methods. But with other AI technologies, in particular data-driven technologies like machine learning, the relation between input and output is harder to explain. That can cause our imagination to run wild,” says Haasdijk.

Transparent AI is explainable AI

But AI – in particular machine learning – doesn’t have to be as opaque as it may seem. The ‘black box’ can be opened.

Haasdijk is a strong proponent of ‘transparent AI’: artificial intelligence applications that allow users to understand why particular decisions have been made.

“Artificial intelligence is smart, but only in one way,” says Haasdijk. “We need humans to gauge the context in which an algorithm operates and understand the implications of the outcome. Transparent AI aims to enable humans to understand what is happening.”

Transparent AI isn’t about publishing algorithms online, says Haasdijk. “Most companies like to keep the details of their algorithms confidential. Plus, most people do not know how to make sense of algorithms. Just publishing lines of code isn’t very helpful, particularly if you do not have access to the data that is used.”

“Transparent AI is explainable AI. It should allow humans to see whether the models make sense and have been thoroughly tested, that they can understand why particular decisions are made.” — Evert Haasdijk

The point of transparent AI is that the outcome of an algorithm can be properly explained, says Haasdijk.

Detect hidden biases

There are several reasons to pursue transparent AI. An obvious one is that companies want to understand the technologies they depend on. Transparency also helps them understand mistakes and improve their models accordingly.

“AI models do make mistakes – and in many instances they make fewer than humans – but you still want to know when and why that happens,” says Haasdijk. “Take the example of the self-driving car that ran into a lady who was walking with a bike. It is essential that companies understand why mistakes like these happen, to avoid possible future accidents.”

Furthermore, transparent AI can help to explain decisions made by AI-models to customers.

Haasdijk: “Suppose a bank uses AI to assess whether a customer can get a loan. If the bank denies the loan, the customer probably wants to know why, and what needs to be done to get it. ‘Computer says no’ is not seen as an acceptable answer in most cases.”

He adds that there is regulatory pressure to give customers more insight into how their data is being used, such as the GDPR rules that recently came into force.

Last but not least, transparent AI can help to detect hidden biases in data.

There are plenty of examples of this, says Haasdijk.

“Suppose you use AI to screen job applicants for potential new leaders in your company,” he says. “If the model is fed data from previous leaders who were mostly white males, the model will replicate that. Even if you don’t use race or gender explicitly in your model, that doesn’t mean these factors play no role in the output. Research has shown that a combination of seemingly innocent factors like height and postal code can disadvantage people of a certain background.”
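
The proxy effect Haasdijk describes can be illustrated with a short sketch: check how well a protected attribute can be predicted from apparently neutral features. The snippet below uses synthetic data and scikit-learn purely as an illustrative assumption, not as a method drawn from the article.

```python
# Minimal sketch: test whether "innocent" features leak a protected attribute.
# If height + postal code predict the attribute well, they can carry bias
# into a hiring model even when the attribute itself is excluded.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, size=n)                   # hidden attribute (synthetic)
height = 170 + 8 * protected + rng.normal(0, 6, size=n)  # correlated with it
postal = rng.integers(0, 50, size=n) + 10 * protected    # neighbourhood also correlated

X = np.column_stack([height, postal])
leakage = cross_val_score(LogisticRegression(max_iter=1000), X, protected, cv=5).mean()
print(f"'Innocent' features predict the protected attribute with {leakage:.0%} accuracy")
# An accuracy well above 50% means the features act as a proxy and deserve scrutiny.
```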

Set a thief to catch a thief

So how do you create transparent AI?

First, there are technical steps: check the correctness of the model and the data, and run the appropriate tests.

Second, you must make sure the outcomes are statistically sound. For instance, you can check whether certain groups are underrepresented in the outcomes and, if so, tweak the model to correct for that.
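
As a minimal illustration of such a check (with made-up data and a rule-of-thumb threshold, not a prescribed procedure), one could compare approval rates per group and flag any group that falls well below the overall rate:

```python
# Minimal sketch: compare positive-outcome rates across groups to spot skew.
import pandas as pd

# Outcomes of a decision model, with the group each case belongs to (made-up data)
results = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   1,   0,   0,   1,   1,   0,   0],
})

rates = results.groupby("group")["approved"].mean()   # approval rate per group
overall = results["approved"].mean()

# Flag groups whose approval rate falls well below the overall rate
# (the 80% threshold is only a common rule of thumb, not a fixed standard)
flagged = rates[rates < 0.8 * overall]
print(rates)
print("Groups to review:", list(flagged.index))
```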

Third, you can build a simpler, easier-to-explain AI model that approximates a more complex one. That way, you can compare the outcomes for particular cases and make sure the more complex model makes sense. Haasdijk: “It’s like setting a thief to catch a thief – you use AI to check AI. The approximate model isn’t meant to replace the more complex one, but it can help to explain it and to build trust in its outcomes.”
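
The surrogate idea can be sketched in a few lines: train a small, interpretable model on the predictions of the complex one and inspect its rules. The snippet below assumes scikit-learn and synthetic data; it illustrates the general technique, not Haasdijk's specific implementation.

```python
# Minimal sketch: approximate a complex "black box" model with a simple,
# interpretable surrogate so its decisions can be inspected.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))   # e.g. income, age, debt, tenure (synthetic)
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

# The complex "black box" whose decisions we want to understand
black_box = GradientBoostingClassifier().fit(X, y)

# Train a small, interpretable surrogate on the black box's *predictions*, not on y
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# Fidelity: how closely the simple model mimics the complex one
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.1%}")

# Human-readable rules that approximate the black box's behaviour
print(export_text(surrogate, feature_names=["income", "age", "debt", "tenure"]))
```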

Not all AI applications have to be transparent, says Haasdijk. The level of transparency depends on how much impact the technology has. “Suppose a webshop uses AI to recommend shoes you might like – it’s not that important that you understand exactly why you get to see those particular shoes,” says Haasdijk. “But with high-impact decisions, like using AI to determine which people are likely to have committed tax fraud, it is essential to be as transparent as possible.”
