AI Is Just As Biased As We Are, And The Implications Are Scary
On Monday night, Q&A hosted a special episode for the Festival of Dangerous Ideas.
Guests on the panel included a sex educator, columnist and social commentator; an expert in artificial intelligence; a pop culture critic; and a self-described performer and sex clown.
As part of the panel, artificial intelligence expert Toby Walsh raised some interesting questions about who or what can be held responsible for the actions of robots and AI -- particularly in their capacity as wartime weapons.
But it's also interesting to look at the same ideas in a day-to-day context.
It may seem far-fetched or even unimaginable that a machine could learn from its surroundings, form its own ideas, biases and missions and follow them through accordingly.
However, it’s already a reality. Just this year, Amazon, which is at the forefront of AI technology, had to suspend an automated tool built to screen job applicants after the company realised the algorithm had developed a bias against female candidates.
The program was obviously not built to be biased against women -- quite the opposite, in fact. However, algorithms learn from and adapt to the environment they operate in and the people and corporations they operate for -- and that means they can learn bias. Not just gender bias, but race, age or any other kind, if they are exposed to it often enough.
At the time Amazon scrapped its program, the majority of its software developers were men. When that data was fed into the algorithm, it automatically ‘learnt’ that men were more suited to the job. Then, when it went through resumes to find the most suitable candidates, it would score men higher, simply because it had learnt to apply existing patterns.
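That pattern-learning step can be sketched with a toy example. Everything below is hypothetical -- made-up resumes and a deliberately crude counting rule, not Amazon's actual model -- but it shows how a scorer trained on historically male-dominated hires ends up favouring male-associated keywords without containing any explicit rule about gender:

```python
from collections import Counter

# Hypothetical training data: (resume keywords, was the candidate hired?)
# The successful hires here skew towards stereotypically male activities.
history = [
    (["chess club", "rugby"], True),
    (["rugby", "coding"], True),
    (["chess club", "coding"], True),
    (["women's chess club", "coding"], False),
    (["netball", "coding"], False),
]

# 'Train' by counting how often each keyword appears in successful hires.
hired_counts = Counter()
for keywords, hired in history:
    if hired:
        hired_counts.update(keywords)

def score(resume):
    # Score a resume by summing the learnt keyword counts.
    return sum(hired_counts[k] for k in resume)

# Two equally qualified candidates get very different scores,
# purely because of which keywords co-occurred with past hires.
print(score(["coding", "rugby"]))               # → 4
print(score(["coding", "women's chess club"]))  # → 2
```

No one told this scorer anything about gender; the bias lives entirely in the historical data it counted.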
AI can also learn from the people who are using it -- so if AI recommends five candidates again and again, and the human on the other side of the screen always employs a man from that selection, the algorithm will learn that men are preferable candidates and make its recommendations accordingly.
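This feedback loop is easy to simulate. The sketch below is again entirely hypothetical -- a made-up shortlisting model and a simulated recruiter who always hires a man when one is shortlisted -- but it shows how a model that starts with zero preference gradually absorbs the recruiter's bias:

```python
# Learnt score per candidate group; the model starts with no preference.
weights = {"man": 0.0, "woman": 0.0}

def shortlist(pool, k=5):
    # Rank candidates by learnt score.
    # (In this toy, ties fall back to the pool's listing order.)
    return sorted(pool, key=lambda group: weights[group], reverse=True)[:k]

def biased_human(short):
    # Simulated biased recruiter: hires a man whenever one is shortlisted.
    return "man" if "man" in short else "woman"

pool = ["man"] * 10 + ["woman"] * 10
for _ in range(100):
    hired = biased_human(shortlist(pool))
    weights[hired] += 0.1  # the model 'learns' from each hiring decision

print(weights)  # the 'man' score ends up far higher; 'woman' never moves
```

Each biased hire nudges the model's scores, which reorders the next shortlist, which gives the biased human more of what they already prefer -- the loop reinforces itself.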
There is a lot of talk about AI -- with more and more physical human labour (and increasingly, human brain power) being taken over by machines, it is not unreasonable for us to pay careful consideration to the future of our economy and open our minds to new and previously unexplored industries and methods of working.
It is also reasonable to consider how robots can make the world better -- driverless cars are already being tested, and AI assistants like Siri and Alexa make our day-to-day lives much easier. Something as simple as Google Maps is an excellent example of how we already implicitly trust the recommendations of algorithms. (Think about it -- if AI turned against us, it could simply redirect us somewhere else while we were driving. Spooky, right?)
Artificial intelligence (for now, at least) can only be as good as the people who create it and the world it exists in. There is a big movement at the moment to use AI as a tool for ‘blind screening’ resumes, so that the most qualified candidate is chosen regardless of gender, sexuality, physical ability or race.
But if we are so blind to our own biases that we can’t even rectify them with automated programs, what hope do we have of making the world a more equal place? And should we be placing all our faith in those programs?
I’m not saying we should give up on AI -- quite the opposite, in fact. I love a bit of progress, and despite the fact that I have Siri turned off on my phone because the idea of her (him? it?) listening in constantly makes me uncomfortable, I am really excited to see the future that AI will bring us.
But we also know that AI will only be as good as the world it exists in -- so we need to lift our game.
Feature Image: Getty