Max Tegmark Hopes To Save Us From AI's Worst Case Scenarios

Artificial intelligence (AI) is one of the hottest trends pursued by the private sector, academia, and government institutions. The promise of AI is to make our lives better: to have an electronic brain that complements our own, to take over menial tasks so that we can focus on higher-value activities, and to allow us to make better decisions in our personal and professional lives. There is also a darker side to AI that many fear. What happens when bad actors leverage AI for malicious ends? How will we ensure that AI does not drive the haves and have-nots further apart? And what happens when our jobs fundamentally change or disappear, given how much of what defines us derives from what we do professionally?

Max Tegmark has studied these issues intimately from his perch as a professor at MIT and as a co-founder of the Future of Life Institute. He has synthesized his thoughts into a powerful book called Life 3.0: Being Human in the Age of Artificial Intelligence. As the title suggests, AI will redefine what it means to be human because of the scale of the changes it will bring about.

Tegmark likes the analogy of the automobile to make the case for what is necessary for AI to benefit humanity. Three things are needed: an engine (the power to create value), steering (so that the technology can be guided toward good rather than harmful ends), and a destination (a roadmap for reaching a beneficial outcome). He notes that "the way to create a good future with technology is to continuously win the wisdom race. As technology grows more powerful, the wisdom with which we manage it must keep up." He describes all of this and more in this interview.

(To listen to an unabridged podcast version of this interview, please click this link. This is the 31st interview in the Tech Influencers series. To listen to past interviews with the likes of former Mexican President Vicente Fox, Sal Khan, Sebastian Thrun, Steve Case, Craig Newmark, Stewart Butterfield, and Meg Whitman, please visit this link. To read future articles in this series, please follow me on Twitter @PeterAHigh.)

Peter High: Congratulations on your book, Life 3.0: Being Human in the Age of Artificial Intelligence. When and where did your interest in the topic of Artificial Intelligence [AI] begin?

Max Tegmark: I was extremely curious as a kid. I remember lying in my hammock between two apple trees as a teenager and thinking that the two greatest mysteries were the universe out there and the universe in here, in our mind. I spent 25 years of my career studying the former, and in recent years, I have become more fascinated by the latter. I am increasingly interested in the science of intelligence, and I have shifted my MIT research group to work on AI. In parallel with my day job, I have spent many nights and weekends brainstorming how we can ensure that AI’s growing impact on society is going to be beneficial, rather than harmful.

High: When you have described your efforts to figure out where AI might take us, you make an analogy to driving a car. First, you need the engine and the power to make AI work. Second, you need steering because AI must be steered in one direction or another. Lastly, there needs to be a destination. Can you elaborate on each of those topics, and could you give us your hypothesis as to where we are heading?

Tegmark: If you are building a rocket or a car, it would be nuts to exclusively focus on the engine’s power while ignoring how to steer it. Even if you have the steering sorted out, you are going to have trouble if you are unable to determine where you are trying to go with it. Unfortunately, I believe this is what we are doing as we continue to build more powerful technology, especially with AI. To be as ambitious as possible, we need to think about all three elements, which are the power, the steering, and the destination of the technology.

Because it is so important, I spend a great deal of time at MIT focused on steering. Along with Jaan Tallinn and several other colleagues, I co-founded the Future of Life Institute, which [focuses on] the destination. While we are making AI more powerful, it is critical to know what type of society we are aspiring to create with this technology. If society accomplishes the original goal of AI research, which is to make so-called "Artificial General Intelligence" [AGI] that can do all jobs better than humans, we have to determine what it will mean to be a human in the future. I am convinced that if we succeed, it will be either the best or the worst advancement ever, and it will come down to the amount of planning we do now. If we have no clue about where we want to go, it is unlikely that we will end up in an ideal situation. However, if we plan accordingly and steer technology in the right direction, we can create an inspiring future that will allow humanity to flourish in a way that we have never seen before.

I believe this to be true because the reason that today's society is better than the Stone Age is technology. Everything I love about civilization is the product of intelligence. Technology is the reason why life expectancy is no longer 32 years. If we can take this further and amplify our intelligence with AI, we have the potential to solve humanity's greatest challenges. These technologies can help us cure diseases that we are currently told are incurable because we have not been smart enough to solve them. Further, technology can lift everybody out of poverty, address our climate problems, and allow us to go in inspiring directions that we have not even thought of yet. It is clear that there is an enormous upside if we get this right, and that is why I am incredibly motivated to work on it.

High: I am struck by the caveman analogy. We are so far removed from cavemen and cavewomen that a modern human and a caveman would scarcely recognize each other's lives, given the differences in life expectancy, the ability to communicate, and the time we have to reflect and ponder our situation, among others.

Tegmark: That is so true, and you said something super interesting there. While we are far removed, we are largely stuck in the caveman mindset. When we were cavemen, the most powerful technologies we had were rocks and sticks, which limited our ancestors' ability to cause significant damage. While there were always cavemen who wanted to harm as many people as possible, there was only so much damage one could do with a rock and a stick.

Unfortunately, with nuclear weapons, the damage can be devastating, and as technology gets more powerful, it becomes easier to mess up. However, at the same time, we now have more power to use technology for good. Because of both of these factors, the more powerful the technology gets, the more important the steering becomes. Technology is neither good nor evil, so when people ask me if I am for AI or against AI, I ask them if they are for fire or against fire. Fire can obviously be used to keep your house warm in the winter, or it can be used for arson. To keep this under control, we have put a great deal of effort into the steering of fire. We have fire extinguishers and fire departments, and we created ways to punish people who use fire in ways that are not appropriate.

We have to step out of our caveman mindset. The way to create a good future with technology is to continuously win the wisdom race. As technology grows more powerful, the wisdom with which we manage it must keep up. This was true with fire and with the automobile engine, and I believe we were successful in those missions. We repeatedly messed up, but we learned from our mistakes and invented the seat belt, the airbag, traffic lights, and laws against speeding. Ever since we were cavemen, we have been able to stay ahead in the wisdom race by learning from our mistakes. However, as technology gets more powerful, the margin for error is evaporating, and one mistake in the future may be one too many. We obviously do not want to have an "accidental" nuclear war with Russia and just brush it off as a mistake that we can learn from and be more careful about next time. It is far more effective to be proactive and plan ahead rather than be reactive. I believe we need to adopt this mindset before we build technology that can do everything better than us.

High: You mentioned there are some attributes that we still share with our distant ancestors. Even if AGI does not come for decades, the change will be almost the same in magnitude as the change from cavemen to the present day. For example, it potentially has the power to change the way in which we work. You have written persuasively about the possibility of what we do being taken over by AI. In a society where many of us are defined by the work that we do, it is quite unsettling to know that what I love about my day job today will be done better by AI. We may need to redefine ourselves as a result. What are your perspectives on that?

Tegmark: I agree with that, and I would take it a step further and say that the jump from today to AGI is a bigger one than the jump from cavemen to the present day. When we were cavemen, we were the smartest species on the planet, and we still are today. With AGI, we will not be, which is a huge game changer. While we have doubled our life expectancy and seen new technologies emerge, we are still stuck on this tiny planet, and the majority of people still die within a century. However, if we can build AGI, the opportunities will be limitless.

People do not realize this, and because we are still stuck in this caveman mindset, we continue to think that it will take us thousands of years to find a way to live 200, or even 1,000, years. Moreover, the mindset that we have to invent all the technologies ourselves has led us to believe that it will take thousands of years to move to another solar system. However, this is far from true because, by definition, AGI can do all jobs better than us, including the job of inventing better AI and other technologies. This capability has led many to believe that AGI could be the last invention we ever need to make. We may end up with a future where life on Earth and beyond flourishes for billions of years, not just for the next election cycle. This could all start on Earth if we can solve intelligence and use it to go in amazing directions. If we get this right, the upside will be far more significant than the benefits we reaped going from cavemen to the present day.

Regarding what it means to be a human if all jobs can be done better by machines, that is why the subtitle of my book is Being Human in the Age of Artificial Intelligence. Jobs do not just give us an income; they give us meaning and a sense of purpose in our lives. Even if we can produce all that we need with machines and figure out how to share the wealth, that does not solve the question of how that purpose and meaning will be replaced. This crucial dilemma absolutely cannot be left to tech nerds such as myself because AI programmers are not world experts on what makes humans happy. We need to broaden this conversation to get everyone on board and discuss what type of future we want to create. This is essential, and unfortunately, I do not believe that we are going about it the right way.

Students often walk into my office asking for career advice, and in response, I always start by asking them where they want to be in the future. If all a student can say is that they might get cancer, be murdered, or be run over by a truck, that is a terrible strategy for career planning. I want these people to come in with fire in their eyes and say, "This is where I want to be." From there, we can figure out what the challenges are and come up with a strong strategy to avoid them so that they can get to where they want to be. We should take this same approach as a species, but it is not the one we are taking. Every time I go to the movies and see something about the future, it showcases one dystopia after another. This approach makes us paranoid, and it divides us in the same way that fear always has. It is crucial for us to have a conversation about the types of futures we are excited about. I am not talking about getting 10 percent richer or curing a minor disease; I want people to think big. If machines can do everything better than we can, what kind of future would fire us up? What type of society do we want to live in? What would your typical day look like? If we can articulate a shared, positive vision that catches on around the world, I believe we have a real chance of getting there.

High: What happens if AGI gets to the point where the work that you are doing at MIT and at the Future of Life Institute is no longer meaningful?

Tegmark: That is a hard-hitting question. I get an incredible amount of joy from figuring stuff out, and if I could just press a button and the computer would write my papers for me, would it be as much fun? This is not an easy topic.

In my book, I discuss twelve different futures that people can choose between. Even if we cannot yet agree on a future that we are convinced is perfect, that does not mean we should do nothing. At a minimum, we should do the necessary thinking that will allow us to steer our future in the right direction. There are some obvious decisions that need to be made now, such as how income inequality will be handled. While we may be able to dramatically grow overall world GDP, we must be able to share the economic pie so that everybody is better off. As more and more jobs are replaced by machines, income that has typically been paid out in salaries will instead go to whoever owns the machines. That is why Facebook, a high-tech company, is twelve times more valuable than Ford despite having eight times fewer employees. Unfortunately, we have not begun to make these decisions, and if we are unable to make them in a way that leaves everyone better off, then shame on us. As companies become more high-tech, we must adjust the system to avoid leaving more people behind and ending up with far greater income inequality. If this problem does not get solved, we will end up with more and more angry people, which will make democracy more and more unworkable. On the bright side, all that wealth makes this problem relatively easy to fix: all that needs to be done is to bring in enough tax revenue so that everyone can be better off.

The second aspect, which I believe is a no-brainer, is that we must avoid a damaging arms race in lethal autonomous weapons. Fortunately, nearly all research in AI is going towards helping people in various ways, and most AI researchers want to keep it that way. Around the time I was born, we were on the cusp of a horrible arms race in bioweapons. When this happened, biologists pushed hard for an international ban on bioweapons, and as a result, most people cannot remember the last time they read about a bioweapon terrorist attack in the newspaper. If you ask a hundred random people on the street for their opinions on biology, they are all going to smile and associate it with new cures rather than with bioweapons. It is critical that we handle AI weapons in a similar way.

We need to put a greater focus on the steering aspect of AI. Nearly all of the funding going into AI has gone towards making it more powerful, and little is going towards AI safety research. Even a modest increase would make a meaningful difference. As we put AI in charge of more infrastructure-level decisions, we must transform today's buggy and hackable computers into robust AI systems that can be trusted. If we fail to do so, all these fascinating new technologies can malfunction, harm us, or be hacked and used against us.

As AI becomes more and more capable, we have to work on the value alignment problem. The real threat with AGI is not that it will turn evil in the way it does in silly Hollywood movies. Instead, the largest threat is that it turns out to be extremely competent while pursuing goals that are not aligned with ours, either because it is controlled by someone who does not share our goals or because the machine itself has power over us. We must solve some tough technical challenges in order to neutralize this threat. We have to figure out how to make machines understand our goals, adopt our goals, and then keep those goals as they get smarter. Although work has begun in this area, these problems are hard, and it may take roughly 30 years to solve them. It is absolutely critical that we focus on this problem now so that we have the answers by the time we need them. We have to stop treating these issues as an afterthought.

High: What role do private sector, academic, and governmental institutions play? Each is exerting influence in their own ways, and they are progressing at different rates. How do you see that balance?

Tegmark: Academia is great for developing solutions to AI safety problems and making them publicly available so that everyone in the world can use them. You want safety solutions to be free because if someone owns the IP on them, fewer people will adopt them, and the outcome will be worse.

I believe private companies have mostly played a constructive role in encouraging safety work around AI. For example, most of the big players in AI, such as Google, IBM, Microsoft, and Facebook, along with many international companies, have joined together in the Partnership on AI to encourage the development of safe AI.

On the flip side, governments need to step up and provide more funding for safety research. No government should fund nuclear reactor research without funding reactor safety research. Similarly, no country should fund computer science research without putting a decent slice towards the steering part.

That is my wish list of what we should focus on today to maximize the chances of this going well. In parallel, everyone else needs to ask themselves what future they want to see, and to remember, the next time they vote and whenever they exert influence, that we want to create a future for everybody.

High: How do you keep up with the progress, or lack thereof, of these advances?

Tegmark: Both through the research taking place at MIT and through the nerdy AI conferences I attend. Additionally, the non-profit work I have been doing has been fascinating. I have spent a great deal of time speaking with top researchers and CEOs who are making incredible progress on this. I am encouraged, and I find that the leaders are mostly an idealistic bunch. I do not believe they are doing this exclusively for the money. Instead, they see this technology as an opportunity to create a better future. We need to make sure that society at large shares this goal of channeling AI for good, instead of using it to hack elections and create new ways to murder people anonymously. That would be an incredibly sad result of all these good intentions.

Peter High is President of Metis Strategy, a business and IT advisory firm. His latest book is Implementing World Class IT Strategy. He is also the author of World Class IT: Why Businesses Succeed When IT Triumphs. Peter moderates the Forum on World Class IT podcast series. He speaks at conferences around the world. Follow him on Twitter @PeterAHigh.