Recently the great and the good gathered at a swanky tech conference at Savoy Place in London. Up for discussion was the role of Artificial Intelligence (AI). Then, before an audience of academics, developers and tech professionals, someone asked the question: “Where’s the intelligence?”.
Some at the conference may have squirmed in embarrassment, but for us it was a genuine and insightful question. Is AI really intelligent? Fuelled by science fiction, futurists and media reporting, we have come to believe that we are somehow arriving at a Skynet-style, Terminator scenario of thinking machines.
Reality is a little more mundane, though perhaps still a little worrying.
Stripped down, AI is simply a bunch of algorithms that act on data fed to them by human programmers. But one thing is certain: AI is not neutral. In fact it is very far from it; every input and process is steeped in bias and interpretation, whether good or bad.
This was the theme of the Turing Talk given at the conference by the eminent German professor Dr Krishna Gummadi of Saarland University. In it, he outlined how easily bias can creep into AI projects.
One example given was a project in the US to assist American judges in determining appropriate prison sentences and the awarding of parole. Called ‘Predictive Policing’, the project was supposed to apply data profiling to justice records to predict the likelihood of recidivism or rehabilitation.
Unfortunately, the AI’s recommendations led to judges handing down harsher sentences to black and minority-ethnic defendants. When the issue became clear, programmers tried to eradicate the bias, but the AI stubbornly kept discriminating in favour of white defendants.
Here in the UK, Durham Constabulary found itself criticised last year for using AI algorithms designed to help make custody decisions, which were found to discriminate against poorer people. The program was developed to assess the risk posed by offenders and ensure only the ‘most suitable’ were granted bail.
Likewise, AI can prove inherently sexist, particularly in tools such as Google Translate. For instance, translate certain terms, such as ‘nurse’, into French and you will always get the feminine word.
This, in many respects, is not the fault of the AI. Instead, it reflects the peculiarities of language and cultural norms. The algorithm can only relate what people put in, and that includes their biases. But if AI were truly intelligent then, surely, it would figure this out for itself?
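To see how a purely statistical translator ends up reproducing the bias in its training data, consider a minimal sketch: a toy "translator" that simply picks the most frequent translation seen in an aligned corpus. The corpus counts here are invented for illustration and do not reflect Google Translate’s actual data or method.

```python
from collections import Counter

# Toy parallel corpus: English words mapped to counts of their French
# translations, as might be extracted from aligned text. The skewed
# counts for "nurse" are invented to mimic a real-world corpus bias.
corpus_alignments = {
    "nurse": Counter({"infirmière": 94, "infirmier": 6}),  # feminine dominates
    "doctor": Counter({"médecin": 100}),
}

def translate(word: str) -> str:
    """Return the most frequent aligned translation.

    This naive frequency-based choice has no notion of fairness:
    it simply reproduces whatever imbalance exists in the data.
    """
    return corpus_alignments[word].most_common(1)[0][0]

print(translate("nurse"))  # always the feminine form, because the data says so
```

The point is that nothing in the algorithm is "sexist"; the bias lives entirely in the counts it was given, which is exactly why scrubbing it out after the fact proves so difficult.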
Can AI be programmed with emotional intelligence to determine for itself what is right and what is wrong? According to Dr Gummadi, the answer is no: “AI can’t have emotional intelligence, but it can be taught ethics though.”
Experts are now arguing that as AI becomes more widely used and more sophisticated, society needs to maintain a critical perspective on moral and ethical grounds. Not least because the inherent assumptions that are made are, for now at least, all too human.