Somi Arian


COVID-19 and the Future of Business, Economy, and Democracy

The coronavirus pandemic and the resulting global lockdown coincide with a time in history when human biology and technology are starting to merge – an accelerating trend that began with the advent of computers and continued through the digital revolution of the past few decades. Whether this coincidence is truly down to chance or was aided by humans may never come to light. In his book, “The Precipice”, Oxford University Professor Toby Ord puts the existential risk from a naturally occurring pandemic in the 21st century at 1 in 10,000, and from an engineered pandemic at 1 in 30. (Ord, 2020) Incidentally, the book was published on the 3rd of March 2020, just as many countries went into lockdown.

 

Regardless of how the pandemic arose, I dedicate this article to explaining how it is accelerating technological developments that will change the face of humanity forever. That human society is on the brink of a profound transformation this century is – arguably – inevitable. Yet, without a robust model of transition, humanity risks a premature transformation for which we could pay a high price.

  

Towards the Singularity 

 

Human society is heading towards an evolutionary leap where our biology and our technology will merge. The process has already begun, and we are yet to see its full scale. The merging of these two aspects of human life is the first step towards a new era where the latter will eventually replace the former. 

 

I remember first reading about this idea in a book entitled “The Singularity Is Near” by Ray Kurzweil in the late 2000s. 

 

At the time, I was writing my MPhil thesis in philosophy of science and political theory at the University of St Andrews. The idea of a singularity fascinated me. Kurzweil borrows the term from physics, where it describes the centre of a black hole, at which the known laws of physics break down. Kurzweil uses the term metaphorically to describe a point in time when humans and machines fully merge. The specific date he gives for this is 2045, with an earlier milestone in 2029, when he expects computers to pass the Turing test. (Kurzweil, 2005) If and when this happens, we can confidently say that computers possess human-level general intelligence.

 

 

According to Kurzweil, once machines achieve this milestone, they will quickly surpass humans in a recursive process of self-improvement. Unlike humans, computers do not rely on a brain confined within an enclosure such as the human skull. He explains that once we make it to 2045, humans can, with the help of AI, expect a much higher life expectancy of over 120 years. We could even have a real shot at immortality, though this immortality won’t be in our current physical form but in a virtual environment. (Kurzweil, 1999)

 


Do you think this sounds like science fiction? Think again. It doesn’t take a huge leap of imagination to envisage living in an entirely virtual environment, as many of us have done during the lockdown. Once we have faster connections and more realistic 3D projections, we will be able to experience being in the presence of our friends and families without them physically being there.

 

While he is at it, I hope Kurzweil finds a way to make it possible to hang out with dead people, too. I would love to discuss the experience of my 21st-century existence with Nietzsche, Sartre, and Kafka over a virtual drink!

“Over the coming years, humans will mostly live in a virtual environment. This will aid the speed of machine learning, which could lead to a technological singularity by the middle of this century, as machines merge with humans and ultimately surpass them.”

 

An Esoteric Writer?

 

You may be thinking that Ray Kurzweil sounds like some esoteric writer with an overactive imagination! Once you read the next sentence, I hope you will agree that Kurzweil is far more than an imaginative futurist and begin to take this seriously.

 

At the age of 72, Ray Kurzweil is currently a Director of Engineering at Google. What’s more, he has been remarkably consistent in his predictions for the past four decades, with a high degree of accuracy. Some of the predictions he made in his 1999 book, “The Age of Spiritual Machines”, arrived a few years late but, by his own assessment, 86% of his predictions have proved correct.

 

Some of Kurzweil’s predictions are particularly relevant to the current state of society. For example, he says that physical connection among humans will decrease, to the point that most human communication will eventually be between humans and machines. Education will take place with the assistance of AI in a virtual environment and human teachers will play the role of mentors. Finally, work will mostly take place in a virtual environment and, eventually, there will be no work for humans, as machines will do everything better than us. (Kurzweil, 1999)

 

Kurzweil admits that these impending changes will have many psychological, legal and philosophical implications for society. When asked how we will deal with these issues without destabilising society, he doesn’t appear to have a solid answer beyond his optimism. He seems to believe that we will overcome the challenges and that the upside of merging with technology is worth the perseverance.

 

Are You A ‘Speciesist’? 

 

Before we return to Kurzweil, here’s a story I read in a book entitled “Life 3.0” by Professor Max Tegmark of MIT. This story had such a profound impact on me that it kept me up at night for nearly a week after I had read it. 

 

Tegmark describes an evening when he was having dinner with Elon Musk, Larry Page (the co-founder of Google) and their wives. A heated discussion arose between Musk and Page. I’m paraphrasing here but, essentially, Page told Musk that he was a ‘speciesist’, arguing that if life is ever to expand beyond Earth to the rest of the universe, it has more chance of doing so in digital form. Page maintained that there is nothing intrinsically more valuable about carbon-based life as opposed to silicon-based life. (Tegmark, 2017)

 

This is a HUGE deal, and it has massive implications for the human species! As I understand it, Larry Page – one of the most influential humans ever to have lived – is saying that our human form is neither our ultimate destiny nor intrinsically more valuable than a digital form. This hit me hard! But, after a week of long daily walks trying to digest it, I came to see his point. Furthermore, in principle, I came to agree.

 

Our carbon-based life does not carry an intrinsically higher value than other possible life forms. Whether you believe other “life” forms are possible is another matter. If we look at it through the objective lens of evolution, there is nothing to say that consciousness in general, and intelligence in particular, can only arise in the physical form that humans currently occupy. There is reason to believe that both intelligence and consciousness could be substrate-independent.

 


 

Transparency and the Problem of Free Will 

 

As open as I am to technological advancement, I feel uncomfortable about how these technologies could affect our free will. I think we will see a radical shift in the notion of democracy this century, and the recent experience of lockdowns has given us a taste of it.

 

If and when the moment arrives for humans to take the evolutionary leap into a new state of “being” – a new “form”, if you will – we should be able to do so of our own free will and with a complete understanding of what that means. Every transformation comes with a certain level of pain: some precious experiences will be lost forever, in return for new ones. I believe we have the right to be made aware of these challenges and given the opportunity to deal with them. I fear that we may never get this opportunity.

 

In “The Age of Spiritual Machines”, Kurzweil predicts that, overall, society will not resist the impact of technology. (Kurzweil, 1999) This seems true when we look at how readily we have adopted social media, smart devices and various means of digital communication. Moreover, it’s noteworthy how well everyone has cooperated with governments and technology firms during the recent lockdowns, knowing that our smart devices are being used to track our movements.

 

Urgent Action Required!

 

As we increasingly live virtual lives, we face major challenges. If we are connected to “the cloud”, how can we be sure that the choices we make are our own? As I write this in April 2020, ubiquitous connection to the cloud is not yet in full swing, but the coronavirus crisis has accelerated its arrival. For example, Apple and Google are joining forces to share data, and technology firms are becoming rather friendly with our governments.

 

I love technology, but I love my freedom too. I fear that with no adults in the room, some people could cheat. This is an urgent matter as we have only a small window of opportunity before we are fully connected to the cloud. We need to put some checks and balances in place before that happens.

 

As we become increasingly connected to the cloud, our data is used to train the machine learning algorithms of technology firms. Unfortunately, these firms are not transparent enough about how everything people do on the internet becomes the fuel these algorithms need to grow more sophisticated.

 

In the meantime, the tech giants are becoming increasingly aligned with governments, and this new alignment could enable corruption on a level never before experienced. Imagine the Cambridge Analytica scenario in a world where tech companies and governments are fully affiliated: our governments could easily turn a blind eye to the misuse of our data.

 

 

Living at this point in the 21st century means we are experiencing many new paradigms for which we have no reference. Our legal and ethical frameworks are, for the most part, completely inadequate for a new era of human-machine relations. Just as we have International Law to deal with the relationship between nation-states, we need new regulatory bodies dedicated to governing the relationship between humans and machines and to clarifying who owns the data generated by their interaction. Let me give you an example.

 

Wearable Technologies

 

As a tech philosopher, health-tech investor and an overall technology enthusiast, I’m the kind of person who uses wearable technologies on a daily basis. For example, I wear a ring that tells me minute details about my sleep patterns, my body temperature, heart rate variability etc. I love my ring, and I haven’t stopped talking about it since I started using it. 

 

In the past few weeks, I’ve noticed a message in the app associated with the ring, saying that the ring’s manufacturer is now collaborating with another organisation to establish whether the data collected by the ring can help recognise symptoms of COVID-19. The message invites users to join the study, which means that, once you agree, your data will be used to enrich the machine learning algorithms currently trying to decipher how the virus affects the body and how it spreads.

 

In principle, I don’t see a problem with this. However, it raises an important issue that I remain conflicted about, both ethically and in terms of its overall impact on inequality in society. If you are not sure what I’m talking about, bear with me. I will explain how this example also applies to Google, Apple, Facebook, Amazon, WeChat, TikTok, Alibaba, and numerous other technology firms that are growing at incredible speed – thanks to Artificial Intelligence in general, and machine learning in particular.

 

In Search of the Master Algorithm

 

Let me start this section with a basic lesson in computer science: the difference between traditional computer programming and the much newer field of machine learning – and please forgive me if this sounds obvious or trivial to you. Pedro Domingos explains this best in his book, “The Master Algorithm”, where he notes that you can think of machine learning as the inverse of programming. (Domingos, 2015)

 

I can’t overstate how important this is. It is this fact that has massive implications for the world of business, the economy and the sociopolitical landscape in a way we have never before encountered.

In traditional programming, an individual or a company writes code that tells the computer what to do – for example, to calculate your taxes. In principle, this is no different from building a physical machine with numerous cogs, where each cog triggers another until a final result is achieved. In this model, the inventor deservedly remains the “owner” of the intellectual property. If you invent a wood-chopping machine, you deserve to own the patent and make a lot of money from it. Likewise, if you write a computer program that calculates people’s taxes, well done. You deserve to become rich.
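
To make the contrast concrete, here is a minimal sketch of the traditional approach – a hand-written program whose every rule was authored by a person. The tax bands and rates are invented for illustration only:

```python
# A hand-written program: every rule below was authored by a person.
# The tax bands are invented for illustration, not real rates.

def calculate_tax(income: float) -> float:
    """Compute tax owed from explicit, human-written rules."""
    if income <= 10_000:
        return 0.0
    if income <= 40_000:
        return (income - 10_000) * 0.20
    # 20% on the 10k-40k band, 40% on everything above 40k
    return 30_000 * 0.20 + (income - 40_000) * 0.40

print(calculate_tax(50_000))  # 10000.0 - every step was specified in advance
```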

 

For many years, this was the only way we thought of computers – as agents that did what we told them. Incidentally, Ada Lovelace, a protégée of Charles Babbage (the father of computing) and the woman many believe to have been the first-ever computer programmer, clearly stated that she believed the machine would never be able to originate anything – a statement that didn’t sit right with Alan Turing when he read it many years later.

Now, what we have learned about traditional computer programming does not apply to machine learning, and this is where conflict arises on two crucial points:

  1. Who truly owns the intellectual property?
  2. How do you calculate the value of the labour that creates the data?

 

Who’s Got My Money?

 

The two essential problems of an economic model built on machine learning are the lack of clarity regarding intellectual property and data ownership, and a general disregard for the time and effort that go into producing the data.

 

In machine learning, the individual or company that creates the algorithm is merely the initiator of a process. Once the process begins, the algorithm takes on a life of its own by learning from the data, so long as you keep feeding it. As Domingos puts it, “people can write many programs that computers can’t learn. But surprisingly computers can learn programs that people can’t write.” (Domingos, 2015) For this type of machine learning to happen, the system requires a huge amount of data.
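
And here is the inverse, in an equally minimal sketch: no tax rules are written anywhere; the computer is given labelled examples and infers the mapping itself. The figures are invented, and the scikit-learn decision tree is just one of many models that could stand in here:

```python
# A minimal sketch of the inverse: no tax rules are written anywhere.
# The computer is handed labelled examples and infers the mapping itself.
# The figures are invented; any regression model could stand in here.
from sklearn.tree import DecisionTreeRegressor

incomes = [[5_000], [20_000], [35_000], [60_000], [90_000]]  # inputs
taxes = [0.0, 2_000.0, 5_000.0, 14_000.0, 26_000.0]          # desired outputs

model = DecisionTreeRegressor().fit(incomes, taxes)  # the "program" is learned
print(model.predict([[50_000]]))  # a rule the machine inferred, not one we wrote
```

Notice that the quality of the learned “program” depends entirely on how much data it is fed – which is exactly where the ownership question begins.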

This raises a question: if an algorithm you created goes on to learn by itself and becomes millions of times more capable than anything you set out to build, can you still say that you own it?

The claim to ownership becomes even more questionable when we consider that the data enriching these algorithms is the result of trillions of hours spent by billions of humans across the globe as they take pictures, make videos, write articles and create content for the web. Even those who don’t post anything unknowingly train the algorithms just by browsing online. Most people are unaware that, merely by their presence on the web, they are enriching machine learning algorithms and, as a result, their creators.
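
As a hypothetical illustration of that last point – the event fields, the 300-second cap and the dwell-time heuristic below are all invented – here is how passive behaviour such as how long you watch a video can be turned into training labels that no user ever typed in:

```python
# A hypothetical sketch of "training by browsing": the event fields,
# the 300-second cap and the dwell-time heuristic are all invented.
events = [
    {"user": "u1", "item": "video_42", "action": "watched", "seconds": 310},
    {"user": "u1", "item": "video_7", "action": "skipped", "seconds": 4},
]

def implicit_label(event: dict) -> float:
    """Turn passive behaviour into a relevance label no user ever typed in."""
    return min(event["seconds"] / 300, 1.0)  # dwell time as a proxy for interest

# Each pair below is a ready-made training example for a recommender system.
training_set = [((e["user"], e["item"]), implicit_label(e)) for e in events]
print(training_set)  # [(('u1', 'video_42'), 1.0), (('u1', 'video_7'), 0.013...)]
```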

 

You Are Rich, You Just Didn’t Know It

 

In that sense, machine learning is somewhat like extracting and refining oil. Imagine you have a well in your backyard that is rich in oil. You are not alone, either: your neighbours have these wells too, and so do the rest of the people in your town. But none of you knows how to extract the oil, refine it and use it.

 

Now, imagine a clever entrepreneur builds a machine that can extract and refine the oil in your backyard. But the machine will only work if you – and a great many others – pour A LOT of oil into it. The entrepreneur therefore has to convince you and everyone else to hand over your oil for refining. In return, he offers you something of far lower value than the sum of all the oil he will extract from everyone’s wells.

 

The oil in our backyards is the data we all leave behind whenever we are connected to the internet. Machine learning is the process of collecting and refining this data and producing predictions from it. Machine learning won’t work without lots and lots of data, which we are collectively providing.

 

Can we say, then, that the person who writes the program has sole rights over the intellectual property associated with the algorithm, or even the entire ecosystem? I don’t believe so! At best, they can claim partial ownership and, since the algorithm learns from many people’s data, it’s hard to judge exactly what share of the profits or other intangible benefits should go to a platform’s users and how much belongs to its originator.

 

Going back to the example of my smart ring: I bought it for my own personal use and paid the company a premium for a product that is supposed to help monitor my health. But there comes a point where my data, along with that of tens of thousands of other users, can be used for a purpose other than the one originally intended – and that increases the company’s market value on the back of the data its users contributed.

This is an inherent flaw in the digital economy, one I also discuss in my upcoming book on the future of work, and one that few people seem to think about – by the time they do, it may well be too late.

 

Think of companies like Facebook, Google, Apple, Amazon, Netflix and other tech giants that have produced an immense level of wealth for their founders and shareholders. Without the thousands of hours we all spend on their platforms, their algorithms would never have reached their current level of sophistication. Every time we post something on social media, even our holiday pictures, every time we write an email, watch a tutorial or simply enjoy Netflix, we are training their algorithms.

“The digital economy is a winner-take-all model, where the originator of an algorithm draws disproportionate profits from the intellectual property of the masses, who contribute their data simply by being connected to the internet and living their lives.”

 

An Attempt at Self-Regulation

 

I say all this as an active investor in technology start-ups. I’m fully aware of the lack of a suitable model as to how tech companies and their investors should share their returns with their users. The models that we currently have are based on a pre-digital economy and need to be radically reformed. This is something that I’m actively thinking about and researching. 

 

In the digital economy – especially one built on automation and machine learning – tech companies’ commitment to their users does not end with free email and cheap entertainment, or even a product such as a smartphone. I’m officially an Apple fan: I have six Macs, two iPads, iPhones and an Apple Watch. I speak to Siri 20 times a day as it helps me in my daily life, like a member of the family. And I am constantly aware that, with every move I make and every time I talk to Siri, I am training it. As millions of us do this every day, these algorithms speed up their learning.

 

Sometimes I wonder: what if they learn everything there is to know about human behaviour and we no longer have anything to teach them? At that point, would they even have a use for humans? As our technologies become more sophisticated, our very identity as humans comes under question.

 

A Social Subclass?

 

Ray Kurzweil predicts that, once technology supersedes us on every level, we will see a subclass of humans in society who will be given their basic life needs in order to exist. Of course, Kurzweil is not the only person to have predicted this. Professors Yuval Noah Harari, Nick Bostrom, Erik Brynjolfsson and many others have warned against some version of it. The reason I’ve repeatedly quoted Kurzweil is that he is not just another academic who can be accused of theorising: he is a Director of Engineering at Google.

 

I fear that this subclass might comprise a much larger proportion of the population than we think – possibly over 90%. This means the middle classes could be wiped out, potentially by the end of this decade.

What no one seems to talk about is that the very technologies on which machine learning algorithms are built are fuelled by the data produced by that same 90% of society. Without their data, the tech giants wouldn’t be where they are. So that 90% needs protection and help to cope with these transitions. By protection, I don’t just mean a basic income; I mean opportunities to thrive, compete, build businesses of their own and feel fulfilled.

 

“Aided by artificial intelligence, the digital economy could wipe out the middle classes, rendering 90% of society a ‘subclass’ with little to offer that machines can’t do better and faster. Yet the very technologies on which machine learning algorithms are built are fuelled by the data produced by that same 90%.”

Containing The Masses

 

Once we are all connected to the cloud through our wearable and implantable devices, we will no longer be able to tell whether any sense of happiness or fulfilment we feel arises from our own independent agency. In several interviews, I have heard Kurzweil mention the example of a young woman who, while undergoing brain surgery awake, experienced humour and laughed when a certain region of her brain was stimulated. Once we all have implants in our bodies, how can we be sure that our laughter will not be like hers?

 

Whether that is necessarily a bad thing is a separate matter. You could argue that modern entertainment hypnotises society in a similar fashion. But we need to have a choice, and that is my point. While some people spend their weekend watching repeats of the Kardashians, others spend theirs in pursuit of knowledge and understanding. Both have the right to choose their own experiences. We have come a long way from slavery and serfdom to our modern-day democracy.

 

Humans are born to be free and, in their freedom, they have the potential to create infinite and diverse forms. Artificial intelligence does not have to be our last invention!

 

“Experiments show that stimulating a part of the brain can induce laughter. Once we are all connected to the cloud through our wearables and implantables, how can we know whether our sense of happiness or fulfilment arises from our own independent agency?”

 

The World After COVID-19

 

One crucial issue that I have not yet touched on is the emergence, over the last few years, of what we might call a cold war between China and America. The COVID-19 pandemic has intensified their posturing over which will become the world’s technological superpower.

 

In his book, “AI Superpowers: China, Silicon Valley, and the New World Order”, Kai-Fu Lee explains that in China the government and the country’s version of Silicon Valley are essentially one and the same. The Chinese government is pouring billions of dollars into artificial intelligence and biotechnology and, as a nondemocratic government, has direct access to the data mined and refined by its technology firms. Bearing in mind China’s massive population, that is a lot of data – and that alone gives China an advantage over the West. Remember, data is the oil of machine learning.

 

Kai-Fu Lee points out one major challenge that could hold the US and its European allies back in their technological rivalry with China: a lack of alignment between Western technology firms and their democratic governments. Well, it’s safe to say that the COVID-19 episode has now paved the way for such an alignment. (Lee, 2018)

 

A recent Economist article warns against the damage this alignment could cause to voters, consumers and investors. Soon, that ship will have sailed: once our data has been shared, the process can’t be reversed. It’s like sharing a secret – you can’t take it back. (The Economist, 2020)

 

When you think about the recent shenanigans over who will be first to bring 5G to market, you get a glimpse of how serious this is. If you are wondering why 5G is such a big deal, it’s because it will enable the Internet of Things (IoT), which means ever-increasing amounts of data flowing far faster to feed machine learning.

 

Imagine how much my ring and my smartwatch know about me and my movements; now imagine them sharing that data in real time with every other machine connected to the cloud. Humans can’t even begin to fathom processing so much data, whereas machines do it easily – and technology firms and governments will have ready access to it. Is this a world we all want to live in?

 

I hope this article has convinced you of the seriousness of the challenges that lie ahead of us. The topics that I have discussed here are no longer in the realm of science fiction. They are here. Now. And we have a small window of opportunity to form a robust transition architecture as we merge with our technologies. 

 

I close this article with an open letter to Ray Kurzweil at Google. 

 

An Open Letter to Ray Kurzweil

 

Dear Ray,

 

I have read your book, “How To Be A Danielle”, and I believe I’m someone you would call a Danielle. The transformations you have talked about over the past 40 years, and your predictions for the future of humanity, appear to be inevitable and make logical sense. But we don’t have to go into them blindly. I, for one, want to be conscious and present, and to fully experience this new phase of our evolution as we enter it.

 

As a female immigrant living in exile, I know one thing for sure that you have probably never experienced: once you transition to the other side, there is no going back.

 

When I look back at my life, I don’t regret for a second that I left my birth country and came to the West with no family or connections. I put myself through education and built a new life. However, it has not been an easy journey and, at times, it took its toll on my mental and physical health. In retrospect, I wish I had been better equipped with the tools, knowledge and, most importantly, the wisdom to achieve a smoother transition.

 

I worry that humanity may be setting foot in new territory with no map and no robust theory of transition. I appreciate that you are an optimist, but we need to spend as much time, energy and resources as possible on ensuring that this transition benefits everyone in society – the very people whose data makes it possible for our algorithms to learn and self-improve. These people need protecting, not least because of the legacy they are making possible. A society with a large population of “subclass” humans is not acceptable.

 

I have read all your books and watched all your interviews, and I know your technical vision for the future. What’s missing is a solid construct for how we will address the ethical, philosophical and psychological implications of these technological advancements.

 

For this reason, I would like to request a live interview to discuss these issues with you and to seek your input on putting processes in place to protect the humans living on Earth as we transition from the pre-digital to the post-digital era.

 

Somi Arian