Banco Sabadell’s auditorium in Sant Cugat hosted a new edition of DisruptIA (Disrupt AI), the cycle of talks the Bank is putting on this year to examine how Artificial Intelligence will affect our daily lives in the immediate future, with a special focus on the ethical aspects of AI. This talk was given by Lorena Fernandez, a computer engineer, director of digital communication at Deusto University and a member of several international forums, convened by the European Commission among others, that examine the impact of failing to include a gender perspective in Artificial Intelligence.
In the course of her talk, Fernandez made it clear that she is optimistic about the contribution AI can make to our daily lives, even though much of the talk focused on problems that have to be addressed so that we can make the best use of it. “This technology is not neutral,” she said, “nor does it always make the best decisions, much as it may sometimes seem to us that it does.” It has been apparent for many years that the arrival of new technologies and tools can have negative social consequences that their creators never anticipated. The well-known ‘Google Effect’ is a good example: research suggests that our brains have adapted to the arrival of the internet and search engines, and no longer bother to store information that we know we can easily find online or by some other means. To test that, we just need to ask ourselves how many phone numbers we hold in our memories today, compared with how many we knew before the internet and phones with built-in contact lists.
The same thing is now happening with Artificial Intelligence: an extraordinarily powerful tool, commonly used to make many decisions that affect us directly, even when we don’t realise it, and one that works with algorithms trained on huge volumes of data. After a period of using these tools, we are starting to detect faults in them that we will have to address in order to ensure that the benefits are fairly enjoyed by everybody.
“Biases and errors are not the fault of AI. AI only reproduces biases and problems in society.”
In that sense, as Fernandez said, any negative biases or operational problems that AI may exhibit cannot be blamed on the technology itself. All it does is copy (or sometimes amplify) the negative biases in the data it has been trained on. That is why we have seen, for example, how Artificial Intelligence can sometimes give responses or instructions that might be considered sexist or discriminatory against minorities. But that happens because those biases were present in the AI’s training data from the very beginning. Some examples of this kind of failure that have already been observed are:
- In the US healthcare system, models for the detection and prevention of diseases did not work correctly for Black Americans. Because Black Americans generally have lower purchasing power, and because the US does not have free universal healthcare, they visit the doctor less frequently. As a result, their medical records are relatively scant, and some diseases that may be more common among Black Americans were not being correctly detected, simply because there was less data in the system to train the AI on.
- In Spain, staying with healthcare, there have also been operational errors in some systems that support the diagnosis of disease. Given that women – as a generalisation – have a higher pain threshold than men, they go to the doctor less often. That has been shown to make AI-supported diagnosis somewhat less accurate for women than for men because, once again, there was less data available to train the systems on. The effect has been especially apparent in detecting conditions such as heart attacks, which are identified much more reliably in men.
- In Austria, the government put in place a guidance system to help young people decide which occupation to pursue, based on their personal strengths. After a while, it was noted that jobs in technology and higher-status jobs were mainly being recommended to men, while women were generally steered towards less prestigious or less demanding occupations.
- In image recognition systems, it has been seen that objects in pictures taken in the global south are often not recognised as reliably as those in pictures taken in developed countries.
- In law enforcement and the criminal justice system, Fernandez discussed problems seen in the systems used in prisons to help decide whether a prisoner can be released on parole, and in another system used by some police forces to make specific decisions about gender-based violence.
She also touched on problems we might encounter in the future, such as ‘self-fulfilling prophecies’. If, as has already happened in some places, AI systems identify certain neighbourhoods as having higher crime rates, it is reasonable to expect more police to be deployed there. Those officers will, in turn, make more arrests, largely because there are more of them working in the area, which further reinforces the classification of the neighbourhood as a crime hotspot, creating a ‘sort of vicious circle’.
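The dynamic she described can be made concrete with a small, purely illustrative simulation. It is not taken from the talk: the neighbourhoods, the rates and the way predictions are weighted are all assumptions chosen only to show how a small initial gap in a model’s belief can feed on itself once patrols, recorded arrests and retraining are linked.

```python
# Illustrative sketch only (not from the talk): two neighbourhoods with
# identical real crime rates. The predictive model starts out believing
# neighbourhood B is slightly worse, and patrols are allocated with a
# preference for the neighbourhood predicted to be riskier.

TRUE_RATE = [0.10, 0.10]   # the underlying reality is the same in both
belief = [0.10, 0.12]      # the model's initial, slightly biased belief
PATROLS = 100

for year in range(1, 6):
    # Allocation over-weights the higher-predicted neighbourhood
    # (squaring the belief is an assumption, used only to make the
    # preference explicit).
    weights = [b ** 2 for b in belief]
    patrols = [PATROLS * w / sum(weights) for w in weights]

    # Recorded incidents depend on how many officers are present to record
    # them, not only on how much crime actually happens.
    recorded = [patrols[i] * TRUE_RATE[i] for i in range(2)]

    # The model retrains on recorded incidents, so the bias feeds back.
    belief = [r / sum(recorded) * sum(TRUE_RATE) for r in recorded]

    print(f"Year {year}: patrols A/B = {patrols[0]:.0f}/{patrols[1]:.0f}, "
          f"belief = {[round(b, 3) for b in belief]}")
```

Running the sketch shows neighbourhood B absorbing more patrols each year, even though the real crime rates never differ: the model is learning from its own deployment decisions rather than from reality.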
Under-representation of women in STEM
Fernandez also laid great emphasis on the under-representation of women in tech. It can be seen not just among the people who work in the sector (the percentage of women currently employed to develop and train AI systems is very low), but also among users: as she noted, only 30% of regular users of AI systems are women. That, too, ends up introducing gender biases that penalise women who use these systems, even though nobody set out with that objective in mind.
How can we solve the problem?
One of the principal solutions she identified is to have new algorithms independently audited. “Fortunately,” she said, “more and more businesses and other bodies are making their algorithms public.” She mentioned the work of the Eticas Foundation (https://eticasfoundation.org/), led by Gemma Galdon. Fernandez closed her talk by reaffirming her ‘optimism’ about the benefits that AI will bring to us at a global level. “Which doesn’t mean,” she concluded, “closing our eyes to the problems that, as with every technology, arise from its use, or to the need to do our utmost to overcome them.”
You can watch the whole talk in this video.