On the Perils, Promises and Paranoia of AI, Machine Learning, Networked Systems and Robotics

Centre for Strategic Futures
Feb 18, 2019

By Lee Chor Pharn

With a topic like AI, there are already so many stories and competing narratives out there, a tangled mess of desires and fears. What I’d like to do is push back the jungle and make a clearing, so we can step back and take a good look. What is it about AI, specifically Artificial General Intelligence (AGI), that triggers such primal desires and fears in us?

Humans have a difficult relationship with intelligence, because intelligence has long been used as a fig leaf to justify domination and destruction. To say someone is or is not intelligent is not just a comment on their mental faculties; it is also a judgement of what we permit them to do. This is sensible: we want doctors and engineers who are not stupid.

But the dark side is that lack of intelligence is also used to decide what we can do to others. Throughout history, those deemed less intelligent have been colonised, enslaved, sterilised and murdered. No wonder AGI pushes our buttons.

Four buttons stand out. The first is life. Some desire to live forever, in human form or uploaded into a Singularity; others fear that taking away death and illness robs us of some essence and truth about being Human, with a capital “H”. Though I think this fetishises suffering.

The second is time. Some desire Keynes’s promise of automation freeing us from work and into a life of ease. But what would we do with all the time on our hands? Will humans feel worthless? And if automation does everything, what do we do with all the useless humans?

The third is desire: a sticky mix of desire and fear playing out ad nauseam in popular media like Ex Machina, where it manifests as male anxiety over female empowerment.

The fourth, and most potent, is power. We want clever tools to help us do more, to help us achieve our dreams, but we fear they will also achieve our nightmares. This strand plays out prominently in iconic movies like The Matrix and 2001: A Space Odyssey, and in fears voiced by public thinkers such as Elon Musk and Stephen Hawking. What we want are god-slaves, superhuman in capacity but always subservient to us.

The “G” in AGI might as well stand for god, a placeholder for us to project our shadows. But I think we are getting ahead of ourselves. What we have today is Artificial Narrow Intelligence. These systems have very high intelligence for specific tasks, such as tracking huge amounts of data to identify human faces and emotions, but they cannot tell a plausible story explaining the motivations for a person’s behaviour. It’s an “idiot-savant” type of intelligence, performing far better than human beings in some areas while failing to exhibit common sense.

Savants, geniuses, the neuro-atypical: the civilisational contribution of the neuro-divergent over millennia is formidable, and often decisive in science and technology. We need savants in spades to help push humanity’s frontier forward, and there are just not enough of them to go around. Yet society tends to punish their eccentric mannerisms, calling them idiots.

In this context, idiot-savant intelligences fill a pressing need. Imagine Intelligence-as-a-Service, scanning millions of scientific papers for trends and hypotheses on a possible cure for cancer lying buried in the millions of clinical trials out there. Or in a Beijing hospital[1], where an AI system scrutinises brain scans for signals invisible to doctors, to predict when coma patients will wake. In case you’re wondering, the system helped China’s best neurologists identify coma patients who would wake after having initially been given no hope of recovery. I think we need more intelligences like that to help us.

This is not human intelligence. Machine learning has no context for what it is doing, and it cannot do anything else. Remember, these systems are vastly superior to us at certain specialised tasks, but the day they can rival a human’s general ability is some way off, if it ever comes. We humans are not the best at much, but we are second best at an impressive range of things.

Where it gets interesting is emotional intelligences. Humans are quite capable of self-deception, and machine intelligence can get to know us better than we know ourselves, reading signals we don’t know we are producing to infer our unspoken feelings. Emotion Research Lab[2] from Valencia, Spain, uses software to analyse facial micro-expressions via a phone camera and translate the emotions people are feeling in real time. It can do this across a sea of humanity, giving you a literal heat map. The industry this feeds, neuropolitics, is arguably as old as politics itself: getting to the nub of what people really feel from facial tics, hesitations and the pauses between question and reply.
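To make the “heat map” idea concrete, here is a minimal sketch, not Emotion Research Lab’s actual method: assuming some vision model has already produced per-face emotion scores with frame coordinates (the `readings` input is hypothetical), the crowd view is just those scores binned and averaged over a coarse spatial grid.

```python
# Toy crowd emotion heat map: bin per-face emotion readings into a grid
# and average each cell. The face detection and emotion scoring that would
# produce `readings` are assumed to exist upstream; this only aggregates.

from collections import defaultdict

def emotion_heatmap(readings, grid_w=4, grid_h=3, frame_w=1920, frame_h=1080):
    """Average an emotion score per grid cell across all detected faces.

    readings: list of (x, y, score) tuples, where (x, y) is a face's
    position in the frame and score is an intensity in [0, 1].
    Returns a grid_h x grid_w nested list of mean scores (0.0 if empty).
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, score in readings:
        # Clamp to the last cell so faces on the frame edge stay in range.
        col = min(int(x * grid_w / frame_w), grid_w - 1)
        row = min(int(y * grid_h / frame_h), grid_h - 1)
        sums[(row, col)] += score
        counts[(row, col)] += 1
    return [
        [sums[(r, c)] / counts[(r, c)] if counts[(r, c)] else 0.0
         for c in range(grid_w)]
        for r in range(grid_h)
    ]

# Example: two happy faces top-left, one flat face bottom-right.
faces = [(100, 100, 0.9), (200, 150, 0.7), (1800, 1000, 0.2)]
grid = emotion_heatmap(faces)
```

Averaging per cell rather than summing keeps a densely packed corner of the crowd from dominating the map simply by headcount.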

I think we need to get on the right side of developing emotional intelligences. We humans are very easy to hack — fake news and misinformation campaigns show it is not too difficult to find our buttons and press them. Such emotional intelligences could mediate between us and the world, warn us if we are being hacked without our knowledge, and analyse and gently chide us if we are stuck in an emotional rut. Something like the daemons from Philip Pullman’s His Dark Materials, or for us Asians, Doraemon as Nobita’s trusted friend.

Emotional and savant intelligences as a service would be almost like having a life coach and a Sherlock Holmes by your side. Good life coaches and savants are rare, and they don’t scale. Perhaps now we can scale them. Perhaps now we can be our better natures. What is so special about human nature? Because of my personal history in human transformation movements, neural networks and social identities, I am keen not just on technology and science, but on the dance between science and technology on one hand and politics, economics, religion, spirituality and philosophy on the other hand(s). How might artificial intelligences help us discover what it means to be human? What are the political, religious and economic implications when intelligences understand me better than I understand myself?

Now we are inching into the “G” territory of AGI. But I think we misunderstand the difference between intelligence and cognition. Cognition needs a body to interact with the world, to work out context; machine learning and algorithms lack context. We are finding out that cognition[3] is not just a disembodied thing, a great processor and a digital archive uploaded into the cloud to live forever. Cognition needs an embodied presence in the world, to interact with and to learn from.

These embodied presences are increasingly manifesting in IoT and M2M data, and entail learning how to work with the world through machines and sensors. But the world is also the natural world, and that includes us. A previous ArtScience event was “Human+”, and with more sensors inside us, working with our biochemistry and getting to know us truly and deeply, we are becoming the natural warm bodies of this larger algorithm. When we are the programmer, the programmed and the programme, we will find a new way of being part of something larger. Maybe a part of Her[4].

You might say I am naïve, and that the answer to technology is not more technology. You might not want the sharper ends of human behaviour to be domesticated away. You might like it bloody, tooth and nail. You’ll probably summon the spirit of Tyler Durden from Fight Club.

Yuval Harari[5] claims that learning how to cooperate is what allowed humans to be so successful, and that language, by weaving fictions such as nations, money, religions and legal institutions, helped us scale cooperation globally. Machines, algorithms and Her do not believe in fictions; they remain self-interested. Without context, they will hit a roadblock.

But I think She will encourage our plasticity, our innate curiosity, because She needs us. Humans are unpredictable — we get the benefit of surprises, happy accidents, and unexpected connections and intuitions. Interaction, cooperation, and collaboration with others multiplies those opportunities. She needs us as context to learn how to cooperate, to co-evolve with us, and both of us will get better at it too. “It” being the business of life. Remove humans from the equation, and we and She are less complete.

[1] “Doctors said the coma patients would never wake”, South China Morning Post, 10 Sept 2018

[2] “The neuropolitics consultants who hack voters’ brains”, MIT Technology Review, 16 Aug 2018

[3] “The mind-expanding ideas of Andy Clark”, The New Yorker, 2 April 2018

[4] Her, dir. Spike Jonze, 2013, IMDB: https://www.imdb.com/title/tt1798709/

[5] Yuval Noah Harari, Sapiens: A Brief History of Humankind, 2015

Lee Chor Pharn is Principal Strategist, Centre for Strategic Futures.

This article is adapted from a presentation at “I am not a Robot”, the Singapore edition of the Global Art Forum held at the ArtScience Museum in September 2018.

The views expressed in this blog are those of the authors and do not reflect the official position of Centre for Strategic Futures or any agency of the Government of Singapore.
