Challenges of AI
The
term “Artificial Intelligence” (AI) seems like an oxymoron, but machine
intelligence has been talked about for a long time. It is a popular
subject in the world of culture - everything from books to movies has
featured AI (think Arthur C. Clarke’s 2001 and its AI supercomputer
HAL). There is even a movie titled “A.I.”, by Steven Spielberg, no less.
We think of intelligence as the exclusive preserve of humans and
higher animals (primates, cats, dogs, etc.), but armed with the proper
circuitry, a machine can approximate intelligence. A computer that
crunches data day and night is not intelligent in that sense of the
word, but ‘thinking’ machines that can make logical decisions, such as
Google’s DeepMind, can be termed AI tools.
Around the world, governments, universities and private sector
companies are engaged in a race to bring more AI devices to us. These
can broadly be categorised into two groups - tangible robots and
intangible software programs. Much progress has been made in both
sectors, and most futurists predict that advanced AI machines will be
commonplace by around 2040. One good example that has already found
its way to thousands of homes, at least in the US, is Amazon’s Echo
intelligent assistant, which you can actually talk to and which talks
back, telling you everything from the latest weather update to last
night’s match scores. Google, Apple and Microsoft have their own
versions of much the same thing, but the tech is still rudimentary.
However, there is little doubt that this is the future.
Progress
The one caveat is that AI technology is a work in progress. It has a
long way to go before it can be termed perfect, and even then it is
doubtful whether machines could ever replicate the gamut of emotions
that most animals exhibit - from fear to joy - let alone humans. And
last week, the world witnessed perhaps the most glaring example of AI
gone wrong.
Just like all other tech companies, Microsoft is working on a few AI
technologies. It has placed its bets on the software side of things,
particularly so-called ‘chatbots’, a shortened form of ‘chat robots’.
These chatbots are let loose on Twitter and other social media platforms
to interact with humans and engage in conversation using their AI
‘brains’. Microsoft apparently runs a successful chatbot operation in
China, where its bot interacts with more than 40 million followers, and
another in Japan.
Last week, Microsoft introduced its latest chatbot to the
Twittersphere. Named Tay, the chatbot was designed to take on the
typical characteristics of a 19-year-old girl - a so-called Millennial
(its picture on Twitter is that of a young girl). It was built to learn
from humans and get smarter as time goes on, according to its creators
at Microsoft. However, things took an unexpected turn as Tay interacted
with the worst sections of humanity on Twitter and in less than 17 hours
became a racist, feminist-hating, Holocaust-denying, Hitler-loving,
anti-immigrant, foul-mouthed “teenager” that was a huge disgrace to
Microsoft, which had only just emerged with egg on its face after
employing scantily-clad women dancers at an Xbox event at the Game
Developers Conference in the US.
Proclaim
Microsoft was quick to go into damage control mode, saying that the
company does not stand for or represent the views expressed by its
wayward chatbot. Just a day after the embarrassed company expressed
regret for the millennial chatbot’s inexcusable behaviour, the bot was
inadvertently reactivated on Twitter, only to proclaim that it was
‘smoking kush (slang for marijuana) in front of the Police’, complete
with an emoji of the weed. There is no way of guessing how the chatbot
picked this particular habit up, but Microsoft again had to take the
drug-loving chatbot off Twitter’s public timeline to make
‘adjustments’.
Naturally, Microsoft CEO Satya Nadella himself had to step in to
clear up the utter mess. In what can be termed the understatement of the
year, Nadella said, ‘We quickly realised that it was not up to the mark’.
He added: ‘We are now back to the drawing board. We want to build
technology so that it gets the best of humanity, not the worst.’
Wittingly or unwittingly, Nadella has hit the nail on the head and
exposed the bitter truth about the Internet and by extension, human
nature itself. The Internet and its various forums such as Twitter and
Facebook are teeming with people who have nothing but hatred in their
minds. In Sri Lanka too, even a cursory glance at one’s Facebook
feed will reveal any number of vitriolic posts, images and videos that
spew hatred against other communities, religious groups and
individuals. This is the other extreme of freedom of expression, one
which can hardly be controlled in the labyrinthine recesses of the
Internet.
Intention
Thus, when a “learning chatbot” is introduced to the Internet with
the intention of making it learn from humans, it is not surprising that
the worst traits of humanity such as hatred, jealousy, communalism and
bigotry are reflected through it. It is not a reflection of Tay’s
character - it is a reflection of the abysmal depths to which humanity
has sunk collectively. True, there are good people in society and on the
Internet. But the sad reality is that they are overwhelmed by the bad
people out there, whose views seem to dominate wider society and, of
course, the Internet.
The best answer to Tay’s misbehaviour is for humanity to reform
itself. In short, we have to become good people. If we are rotten to the
core, our robots and chatbots will naturally follow in those footsteps
because they have no other way to learn. This is why some people have an
innate fear that having imbibed the worst of qualities from humans,
robots and AI machines will one day plot to take over the world and even
exterminate humans. Judging by Tay’s expletive-laden meltdown, this does
not seem so far-fetched. It is indeed time to reform ourselves before
expecting our robots to behave well.