Here is an audio interview with Mark Waser about Artificial Intelligence (AI).
What is AI? What is the digital and or fourth revolution about? What does AI mean for me? My future? My children’s future? The future of humanity? Should I be scared? Worried? Excited? Will I pee my pants a little? Welcome to AI Fears. Join us as we explore the answers to these questions with interviews, panels and discussions. Developments in AI are going to impact humanity. We think we all should be a part of it or at the very least, understand what is going on.
Henrik de Gyor: This is AI Fears. I'm Henrik de Gyor. Today I'm speaking with Mark Waser. Mark, how are you?
Mark Waser: Doing well, and yourself?
Henrik de Gyor: Great. Mark, who are you and what do you do?
Mark Waser: As you said, I'm Mark Waser. I actually wear two hats in terms of AI fears. I am one of the researchers at a think tank, the Digital Wisdom Institute. I'm also the Chief Technical Officer of our commercial wing.
Henrik de Gyor: Mark, do you think a robot will take your job?
Mark Waser: No. I don’t think a robot will take my job. I feel that robots in the next 15 to 20 years may be capable of it. But then again, there’s always the question of why they would want to take a job and whether they could necessarily do a better job than I would.
Henrik de Gyor: Do we need ethics in AI?
Mark Waser: Absolutely. The real problem, though, is not so much ethics in AI, in the sense that we need robots that have ethics or else they'll kill us, as it is that our society as a whole needs more ethics. AI is a technology like any other. Unfortunately, over the past two millennia, it's become easier and easier for fewer and fewer people to cause more and more destruction with less and less effort.
AI really is most dangerous because of what it enables human beings to do, rather than the standard fear of what the robots could conceivably do. Human beings have evolved killer instincts. I really don't believe that robots are as much of a problem as other human beings.
Henrik de Gyor: Based on media, books and movies regarding robots and artificial intelligence, what’s your favorite doomsday scenario?
Mark Waser: There’s certainly some cognitive disconnect in having a favorite doomsday scenario. I guess I would answer that I’d want the one where doomsday doesn’t come to pass. In terms of that type of fiction that I enjoy, I am particularly fond of Fred Saberhagen’s old Berserker series, which I read back when I was a kid. Basically, in that case, some alien race had been wiped out by the machines that they had created, warring either among themselves or with some other species. The universe was now threatened by the machines that were left. It was a very good series in terms of both the writing and the adventure.
But also in the fact that it raised many interesting viewpoints and issues. For instance, eventually the machines would actually create some living entities who, theoretically, they should destroy. But since those entities were helping the machines destroy life, they were actually good life rather than bad life. Very interesting how the dynamics of cooperation and community mean that hatred and annihilation don't always win.
Henrik de Gyor: What is your favorite bright future scenario from the media?
Mark Waser: In terms of reading, again, I'm very partial to Isaac Asimov's various robotics books. As an AI researcher, I'm very aware of the fact that his three laws of robotics were designed much more as a plot device to hang stories on. But even so, he's an excellent author and, generally, his stories were a lot of fun.
Henrik de Gyor: Do you feel secure about your job today?
Mark Waser: Job security is an interesting question when you’re in my field. I, as a software architect and developer, always strive to actually work myself out of a job. For me, the best job security is to get the current job done, do an excellent job and then be recruited for the next job. In that respect, I feel very secure currently.
The one danger in the future, of course, is whether it's possible that a machine would do the job better than I would. That's actually something that I tend not to worry about too much. The way that we currently envision artificial intelligence coming about is in a form very similar to the various biological forms, and humanity in particular. What I mean by that is that there will most likely be a core generalist competency at the heart of the intelligence, and that core will be most powerful because it's using a wide variety of cognitive tools. There's really no reason why human beings can't use those same cognitive tools equally well.
A lot of people are afraid of the so-called intelligence explosion because they foresee that machines, once they can program themselves, will be able to improve themselves indefinitely, and there's really no stopping that. On the other hand, if you look at the average teenager currently, once he's equipped with YouTube and the internet, he can do amazing things that couldn't have been done 100 years ago. I think our future is likely to be very much the same. I think that the tools that conceivably could be used by machines are equally likely to be usable by humans. And therefore, it isn't likely that we're going to be majorly outclassed.
Henrik de Gyor: How about in two years?
Mark Waser: In the next two years, and also in the next five years, I really don't see human-level intelligence arriving, and my job is one that really requires at least human-level intelligence. In the longer term, in 10 or 20 years, I imagine that we will be reaching the point where machines could do the type of job I'm doing, or where they could perform it competently. I'm just not sure that they would necessarily have a huge competitive advantage.
Henrik de Gyor: Mark, we are living with the prediction that within the next decade 50% of jobs will be removed from the job market. How do you feel about this? Do you believe it's true? Why or why not?
Mark Waser: There’s definitely a large amount of churn in the job market. This is not something that’s particularly new or unusual. If you look back a century, virtually all the jobs were agricultural. Now the percentage of agricultural jobs is on the order of 2 to 3%. What’s different is that with the accelerating pace of technology, the changes are happening much faster and they’re much more disruptive.
There's also the problem that, just as we're hollowing out the middle class, we seem to be hollowing out a lot of the middle-tier jobs. I think that there will be many problems with the disruption that's going to take place over the next 10 years, even though we won't have any sort of human-level AI at that point. The automation of cars and trucks is going to put a tremendous number of people out of work. There's already a tremendous amount of automation in customer service and in checkout lines. It now looks as if fast food employment opportunities are going to diminish rapidly as well.
Those factors, combined with the fact that there don't seem to be enough jobs to go around because people are following austerity and don't want to spend money, mean that my fear is that there are going to be a lot of people out of work, and we haven't made the adequate societal adjustments to handle this problem.
Henrik de Gyor: Based on your experience, what is the biggest success and challenge with AI?
Mark Waser: AI started in the mid-1950s. And while there were occasional bright spots, it really didn't do anything that affected humanity as a whole for pretty much the first 50 years. The last decade or so has been very, very different. We're now succeeding in language recognition. We're succeeding in pulling information together. We're succeeding in automating many things that it had long been questionable whether we would ever automate. It's becoming pretty clear that we're actually going to be able to produce artificially intelligent entities in probably a decade or so. I would call all of those major successes.
Henrik de Gyor: And as far as challenges?
Mark Waser: Challenges. My concern for the biggest challenges is how we use AI and how we prevent it from being misused. I tend to lump together artificial intelligence and the other cognitive sciences like big data, machine learning and things of that sort. We're already running into a substantial number of problems in terms of how these cognitive sciences are being used to manipulate people and dramatically alter society in an unfavorable direction. We've got Facebook doing massive experiments on people and, even worse, we've got things like Cambridge Analytica, where it looks like they're actually successfully manipulating elections.
Henrik de Gyor: What are your hopes for artificial intelligence?
Mark Waser: My dream view of AI has always been that we create friends and allies. People who are different than us but who would like to get along with us. Diversity is a wonderful thing. We can develop partners that can go where we can't go or that have certain skills that we don't have. And in return, we ought to be able to trade off the skills that we do have. Realistically, there's no reason why we shouldn't be able to create a virtual utopia. The problem is that we, as a society, have no clear vision of how to go about doing that.
Henrik de Gyor: Mark, when thinking about robots and AI, what are you excited or downright afraid of?
Mark Waser: Downright afraid of the direction that we seem to be going in. As a society, we don’t seem to be pulling together. There’s ever increasing income inequality. There are ever increasing ways in which to rig the game, if you have access to resources and particular cognitive resources.
My real fear is that humanity is going to do itself in with the assistance of its technology. I'm not quite sure how we're going to avoid that. As I said, that's my fear. If we somehow manage to get past that, I think it's actually going to be awesome. I mean, you currently hear about first world problems that are stressful and that we're unhappy about. But they're the awesome type of problem that we'd rather have. If somehow we manage to pull ourselves back from the brink, I think that all we're going to have will be first world problems that we can then bitch about.
Henrik de Gyor: What are you excited about?
Mark Waser: As I said, if we can turn the corner, one of the things is that AI tools should be able to be developed that help us reason more effectively, that help us debate more effectively, that help educate us better. There are tremendous opportunities for enhancing the human condition and improving the way in which we do things. I'm very excited, if those things take place. It's just my fear that, unfortunately, it looks as if we're currently headed in the other direction.
Henrik de Gyor: And what advice do you have for those fearful of losing their jobs to an AI?
Mark Waser: The human edge currently is flexibility. If you're flexible in what you enjoy doing, and if you're flexible in what you can do, you'll obviously be in a much better position in terms of protecting your job, both from humans and AI. The other thing that I would recommend is that anything we can do to start encouraging other forms of economics is desperately needed at this point. Side jobs are a wonderful idea. One of my children is actually doing very, very well as a writer currently, even though he has a college education as a biologist and works at a museum.
Another thing that I'm very hopeful about is that the idea of universal basic income becomes a reality. We really have already reached the point where there are far more humans than are necessary to perform the jobs needed to at least maintain the human beings we have. Yet we've not focused on that aspect. Unfortunately, there are a lot of jobs out there that actually impede success, that cause more friction than useful outcomes for humans. If we could actually get rid of those jobs and use artificial intelligence to free people up, yet make it so that they can still make a living, life will be wonderful.
Henrik de Gyor: Thanks Mark.
Mark Waser: Thank you for having me.
Henrik de Gyor: For more on this, check out aifears.com. Thanks again.
Join us regularly for topics and posts on the current state of AI. Subscribe and participate in the discussion.