One day you’ll prefer virtual reality to reality. Here’s why.

The virtual you

It’s a Saturday morning in 2038. You wake in anticipation. No work today. Soon you’ll be somebody else. The star of your own fantasy. Robin Hood in Sherwood Forest. Or strutting the catwalk in Milan. A medieval samurai appeals. But you eventually settle on rock god. The big hair. The thrill of the stage. The adoring crowd before you. You slide into that virtual suit in the lounge room and slip away…

Around twenty years from now, many of us will prefer to live in the virtual world. And even when we’re present in reality, we’ll seek support from AI rather than from loved ones. There’s nothing sinister in this prediction. I doubt the future will be Matrix-like, where the machines keep us drugged up and plugged in. We’ll pass on reality because we want to.

This outcome is almost inevitable because machines will soon become much better at empathising with us than even the most emotionally intelligent person. With the ability to make millions of calculations per second, the algorithms governing future versions of Alexa and Google Home will judge immediately whether you’re stressed, happy, sick, aroused, depressed or bored – and adapt their responses to tell you exactly what you want to hear.

Think of the comparison in boxing terms. In one corner stands your AI assistant, which can precisely analyse the internal (e.g. heart rate) and external (e.g. hand and eye movements) indicators of your emotional state. In formulating its response, it draws upon millions of data points based on your past preferences and behaviour. In fact, it knows you better than anyone; better than you know yourself.

In the other corner sits your friend or family member. They try hard to listen to your troubles. But they’re easily bored, straying constantly into other thoughts. They’re prone to tiredness, hunger and bathroom breaks. And they have very little ability to adapt their responses to changes in your tone or body language.

What rational person would prefer the advice of a human over the near-perfect counsel of a machine? AI wins every time. Like many people, I’m uncomfortable with this conclusion. I don’t like the thought of getting through life on the support and advice of a silicon chip.

But the reality is that algorithms might already know us better than we know ourselves. In 2015, researchers at Cambridge and Stanford created an algorithm that judged the personality of 17,000 Facebook users better than their friends and family could, using nothing more than a single personality survey and access to their Facebook likes.

If that’s where AI is today, imagine its capabilities two decades from now. Ray Kurzweil, Director of Engineering at Google, estimates the 21st century will achieve a thousand times the progress of the 20th. If that sounds ridiculous, it’s only because our brains predict future events based on what has happened in the past.

This makes us terrible at comprehending the future power of AI, because we assume machines learn in the same linear way we do. Take studying the guitar. Over twenty years, a human might achieve mastery by making incremental gains with each passing year.

By contrast, a self-learning algorithm can teach itself how to play every song ever written in a few hours. This is called exponential learning, or the law of accelerating returns. And it means the rate at which AI can improve itself is simply mind-bending.

Even today, I bet you can’t walk for 30 minutes without checking your phone. Do you really think you’ll be strong enough to resist the allure of a virtual paradise, tailored to meet your every desire? Will you really be willing to ignore your AI assistant’s superior advice on work, life and relationships, when doing so will lead you to make bad choices and put you at a competitive disadvantage relative to everyone else?

The early signs of this brave new world are already upon us. The Netflix algorithm is better at guessing what TV shows you’ll like than you are. And the machine learning underpinning Google Home can now help arrange your appointments, play your favourite music and organise your shopping list.

Technology will also intersect with how societies and cultures function. In speculating on the future, for example, I’m interested in how the interaction between human beings and AI could develop differently within different civilisations. In particular, will Western culture be more affected by AI’s ascendancy than other traditions?

I think so, because we have more to ‘lose’. The West is the only major intellectual tradition that prioritises individual liberty over communitarian ideals like stability and social harmony. And individualism is fatally undermined when we outsource decision-making to highly intelligent algorithms.

Yuval Noah Harari poses a related question in his book Homo Deus: “What’s the point of having democratic elections when the algorithms know not only how each person is going to vote, but also the underlying neurological reasons why one person votes Democrat while another votes Republican?”

A majority of people from the Chinese, Hindu, Islamic and Russian traditions, by contrast, might be less concerned. Their civilisations are built on varying notions of social, national or religious identity and control. In all cases, the stability of the community is more important than the rights of an individual.

Maybe these other civilisations were quicker to understand something the Western tradition has long denied, and which science is increasingly revealing to be true: free will is an illusion, so best not to place it on a pedestal.

As someone who venerates Enlightenment thinking, individual liberty and human rights, I find this a troubling conclusion. But it’s one I’ve reluctantly had to accept as I learn more about why AI will probably succeed in uncoupling intelligence from consciousness. Maybe I shouldn’t worry. One day I’ll be plugged in anyway. And being a rock god ain’t half bad.


By Luke Heilbuth, Head of Strategy