Until now, artificial intelligence has largely avoided questions of ethics by focusing research on interesting yet uncontroversial problems. As long as the technology under-delivered, people remained unconcerned about the ethical implications of intelligent machines. Yet the recent boom in machine learning and data science has seen intelligent technology become tightly integrated with our daily lives, while accompanying issues like privacy and accountability have slipped under the radar.
Companies like Google and Facebook have spent the past years accumulating vast amounts of data about each and every one of us. Aside from the privacy and security issues, there is the question of whether we can trust these companies not to use that data for underhand purposes. As a recent Slate article asks: if people empathise with an intelligent machine, what happens when that machine manipulates them? It’s not too great a leap to imagine that the smartphones of the future may build rapport with us, then try to sell us things – an advertiser’s dream!
Although we anthropomorphise anything remotely human-like, we are not completely gullible. People, even small babies, can tell the difference between an outwardly sentient robot and another human. Yet in limited interactions it can be easy to fool people, and current research is pushing to make interaction more natural and intelligent machines more human-like.
“You can fool all the people some of the time and some of the people all of the time, but you cannot fool all the people all the time” – Abraham Lincoln
It will be a long time before conversational interfaces are human enough to fool us even some of the time. But when they are, the implications will certainly be interesting.