Here is an audio interview with Christopher Noessel about Artificial Intelligence (AI).
What is AI? What is the digital and/or fourth revolution about? What does AI mean for me? My future? My children’s future? The future of humanity? Should I be scared, worried, excited? Will I pee my pants a little? Welcome to AI Fears.
Join us as we explore the answers to these questions with interviews, panels and discussions. Developments in AI are going to impact humanity. We think we all should be a part of it, or at the very least understand what is going on.
Henrik de Gyor: This is AI Fears. I’m Henrik de Gyor. Today I’m speaking with Christopher Noessel.
We’d like to remind listeners that these are Chris’ views and not necessarily those of his employer.
Christopher, how are you?
Christopher Noessel: I’m doing pretty good. How are you?
Henrik de Gyor: Good. Christopher, who are you and what do you do?
Christopher Noessel: There are three answers to that question. The most obvious one is that I am an employee of IBM, and my job title is Global Design Practice Lead for the Travel and Transportation Group. That means that I consult with clients on their strategies and tactics, a lot of which involve Watson for obvious reasons. I also help with putting apps and services out to market that our clients can take advantage of.
And so that’s answer number one. Answer number two is that I am the author of a book published in the last two months called Designing Agentive Technology: AI That Works for People, which discusses a sub-branch of narrow artificial intelligence in which a persistent, hyper-personalized AI does work on behalf of its user.
And the third answer to your question is that I am also an author and a blogger about interfaces that appear in Science Fiction, which has me looking at AI quite a bit.
Henrik de Gyor: Great. Christopher, do you think a robot will take your job?
Christopher Noessel: Yes, I think eventually it will. And I don’t mean to be flippant about that, but I can imagine that ultimately all of our jobs will be taken, or takeable, by a robot. I think the question is really the time horizon and which job. The work that I do in software design is probably later on that imaginary timeline, but yes, ultimately we can hand that over…
If we get to general AI, and I realize there are some skeptical thoughts to be brought to that question, but if we get to general AI, then yes, software design will be easily automated that way. But if we are talking about being an author or a blogger, I think that’s probably a little farther down the road because there aren’t as many repeatable patterns in those jobs.
Henrik de Gyor: Do we need ethics in AI?
Christopher Noessel: One of the things that we need to do is clarify which AI we are talking about. Most of the literature makes three big distinctions among types of AI. The one that we have in the world today is called narrow AI. It’s so called because these are AIs that can do things that are very sophisticated, but they are constrained to a single domain or task.
And it’s specifically defined as being not generalized, right? The Roomba is a kind of AI, but I can’t really ask it to help me plan a Thanksgiving meal or, you know, ask it what it thinks about my thesis. It can’t generalize information from one domain to another. It can’t learn or think that way.
The second category of AI is general AI, and that’s the one that most people think about when they hear AI, if they are familiar with it through science fiction. So BB-8 or C-3PO are examples of general AI. It’s human-like intelligence. They can come up with new ideas in new domains based on lessons from old domains.
And then there is super AI, which is sort of a result of general artificial intelligence. When we say, “Hey, make a bunch of better copies of yourself and keep that evolution going,” eventually it will get to something that is so intelligent that we can’t even comprehend how intelligent it is.
So when we ask whether we need ethics in AI, the answer is yes, but for different categories of AI, we need them in different ways, right? As we head towards general AI, that’s the real existential threat that folks like Elon Musk are talking about: if that general AI is not equipped with machine ethics, then by the time it gets to super AI, we have an existential threat on our hands.
Having a gross optimizer that can look at our bodies and say, “Oh well, I’m supposed to maximize the number of paper clips that I put into the world and all these human bodies have iron in them. Well, I’ll just extract that iron.” Right, that’s an existential problem. So that’s a far-future problem because we don’t know how close we are to general AI. But certainly, ethics are critical there.
In the world of narrow AI, the stuff that we’ve got now that you can actually, you know, code against or design for, you need ethics not for existential reasons, but certainly for other ethical reasons: for making sure that we don’t disenfranchise people, for making sure that, you know, we don’t cause harm with the things that we put out into the world.
So we do need ethics in that way. And for my money, the other question that’s really underneath that is this: we are talking person to person, but individuals aren’t the entities in play here. It’s not an individual that’s going to come up with general AI. It’s not individuals who came up with the narrow AI that is out there in the world.
I think a more fundamental question is, how do we get corporations and governments to have the ethics necessary to incentivize the creation of an ethical AI? And that’s a trickier problem. But the short answer is yes: we need ethics at all those levels, and for all those entities.
Henrik de Gyor: Great. Christopher, based on media, books, movies, regarding robots and artificial intelligence, what is your favorite doomsday scenario?
Christopher Noessel: It’s a shocking question because, of course, nobody likes doomsday scenarios. They spell the horrible end of life. You’ve asked after movies and books, but I’m going to say that recently I’ve been playing a video game called Horizon Zero Dawn. One of the conceits of the game is that there has been a global extinction event in the past as a result of glitchy AI. But the humans of that past, who were aware that this extinction event was coming, had enough time to prepare a technological response for rebirth after the extinction event.
So it was dark. It’s a very fun game with a lot of great art direction, but it’s still sort of hopeful in this dark way that says, “Yeah, we may be heading off a cliff, but eventually we may be reborn again.”
I have one other answer to that question, which is that I have a favorite because it’s a good dire warning. There’s a movie called Ex Machina that was very well received critically, and its writers had very clearly done a lot of research into the domain of AI. There’s a problem in AI called the box problem, which is the question of, “Hey, can we put a super artificial intelligence into a box and expect it to stay contained?”
And the overwhelming answer is no, both philosophically and even through a few experiments that have been run, obviously not with a real ASI [Artificial Super Intelligence] but with a human playing that role. We see that play out in the movie. It touches on delightfully dark philosophical questions like the P-zombie, or philosophical zombie. It’s a really rich film, and it carries this dire warning that, no, you cannot hope to contain a superintelligence once it is out in the world.
Henrik de Gyor: And what is your favorite bright future scenario?
Christopher Noessel: For this, I have to turn to books, and I’m really fond of the I, Robot series by Asimov. Of course, one would hope that we have better dialog than Asimov was capable of. But the ideas there are really brilliant. He proposes that general AI just happens, and then there’s heavy implication over the course of the short stories that a superintelligence evolves, and it has a genuine concern for human welfare.
In his case, it was because of the three, later revised to four, Laws of Robotics. But the superintelligence also recognizes, again by heavy implication, that humanity must change to ensure its own long-term survival, and that it doesn’t want to face that change. So the superintelligence sort of fades into the background and begins to manipulate us towards a thriving future in ways that the humans are barely aware of, right?
It’s almost like the boiling-frog parable, but the positive version of it. The humans who live on the planet with this ASI just keep seeing things getting better and more sustainable, but we don’t realize that we are being nudged in that direction. So that’s my favorite, I think.
Henrik de Gyor: Do you feel secure about your job today?
Christopher Noessel: Yes, I do. In fact, I would say that over the course of my lifetime, I expect I will still have work as a user-centered software designer. I don’t know that humans are going to stop traveling anytime soon, even in the face of AI. And even as an author, I don’t feel those particular jobs are going to go away in my lifetime.
Henrik de Gyor: So meaning well over 10 to 20 years, fair?
Christopher Noessel: Yeah exactly.
Henrik de Gyor: Okay, great.
Christopher Noessel: I’m 48, so let’s presume a retirement around 68. So, yes.
Henrik de Gyor: Christopher, we are living with a prediction that in the next decade, 50% of jobs will be removed from the job market. How do you feel about this? Do you believe it’s true? Why or why not?
Christopher Noessel: Well, I feel very concerned, of course. In the past, when we have had technology obviate jobs, it has always been at a relatively slow speed, such that society can either bear the burden of unemployment for the workers we weren’t able to give new jobs to, or bear the burden of retraining them and helping them re-assimilate somewhere else, in some other market.
But the risk of AI, even narrow AI, is bigger than that. Entire industries will be replaced fairly overnight, and there’s a massive economic incentive for every entity involved in these decisions, except perhaps government, that encourages them to do so.
I think I heard, two years ago, that an entire investment firm in Scotland just overnight replaced all of its staff, the portfolio managers, with software, with AI. And that, of course, is a pretty massive cause for concern. Now, that said, I am not enough of an economics expert or a futurist to speak with any confidence about the percentage of jobs that will be removed.
Fifty percent sounds terrifying, and we have to get ahead of that, right? We probably have enough productivity and room in the American market to handle it, if we solve the income inequality problem that we have in the States, because all that money is currently not available for the public to cope with problems like the loss of 50% of the workforce. Or replacement, let’s be precise: the replacement of 50% of the workforce.
And so we have really massive problems to address before we get there.
Henrik de Gyor: Based on your experience, what’s the biggest success and challenge of AI?
Christopher Noessel: Well, for this answer, I think I have to break it down between narrow artificial intelligence and general as well.
Henrik de Gyor: Sure.
Christopher Noessel: Because the expertise that I’ve built up is with narrow artificial intelligence. And I think that for those, it’s really difficult. AI is a very nebulous term; nobody has a great working formal definition for it. It loosely means whatever technology can’t currently do but humans can. We think, “Well, that’s artificial intelligence.” Right?
Way back in the 1970s, we would say that reading handwriting is something a computer would never do. And now, well, that’s a solved problem. It exists in the world, and so we don’t think of it as artificial intelligence anymore. The same thing with the Roomba, right? If you asked someone to maximize a vacuum cleaner’s efficiency for any given floor plan, we can imagine that would be a very hard problem for a computer to solve. But it turns out that it’s fairly easy.
So the successes in narrow artificial intelligence are those things that are in our lives but that we no longer think of as artificial intelligence. Think of a spam filter, right? Those things crank away hard in the background. We don’t think of them much. But it’s a massive success that I don’t have to deal with a ton of spam in my email inbox at all.
But that’s certainly a first-world problem, and if I were more of an expert, I would hope to think of more fundamental things. I think that the challenges are going to be cultural, right? We just talked a little bit about them, and that’s massive. It’s not to be understated, right? It’s going to bring changes to our lives economically and socially. There’s a risk of disenfranchisement for those who don’t have access to the AI technologies.
And the scariest thing is that humans don’t have a very good track record of getting ahead of the changes that technology will bring. There’s a great talk by Genevieve Bell, the chief anthropologist at Intel, where she talks about the future prognostications made for past technologies. Generally, she notes, the pattern is that people speak of each major new technology as either ushering in a brand new golden age or Utopia, or being a harbinger of darkness and doom.
Then she says the reality of most of it is pretty mundane. In the ’90s, when people were beginning to talk about the internet, it was like, “Oh, we are going to break down the walls and understand the way humans across the world live their lives, and we’ll all join together as one big happy family.” And the doomsday folks were saying, “Oh wow, well, on the other hand, suddenly misinformation can get everywhere really fast, and people can only pay attention to the things they pre-believe to be true.” And she notes that now that we live the reality of the world with the internet, the internet is mostly spent on watching cat videos. I mean, obviously that’s hyperbole, but the truth is much more mundane than a Utopia or a dystopia.
So getting over that hype cycle is going to be another challenge as we think about it. But we just don’t have a really good track record of putting things in place to cope with the changes that technology brings us.
Henrik de Gyor: Christopher, what are your hopes for artificial intelligence?
Christopher Noessel: Well, narrow artificial intelligence is kind of a task for us to do, so I’ll actually set that category aside for now. I think we just need to keep working on it and making sure that the work we do is human-centered. That’s why I wrote that book, and that’s why I’m thinking very much about my next book in a similar vein.
When we talk about general artificial intelligence and superintelligence, my hope is that it helps us overcome the worst of ourselves, right? We are biomass. The tools that we evolved to move through the world worked really well 80% of the time when we were moving about on a planet of mud and animals and plants, but we don’t live in that world anymore. And we have a lot of old habits we can’t divest ourselves of, habits that are self-destructive.
And my hope is that AGI [Artificial General Intelligence], presuming that it’s connected to superintelligence as well, will help us overcome those constraints and limitations, and really thrive for the long term, both on the planet and even in a super-far term when we have to get off the planet.
Henrik de Gyor: Christopher, when thinking of robots and AI, what are you excited about or downright afraid of?
Christopher Noessel: In doing the research for my book on agentive technologies, I read Nick Bostrom’s book, Superintelligence, where fairly early in the book he notes almost casually that it is easier for humans to write an ethics-less optimizing general intelligence than an ethical one. And then he goes on through the rest of the book talking about, “Oh, here are the problems we have to solve,” and tries to end the book on a bit of hope.
But when you pay attention not to individuals but systemically to humanity, the fact that it’s easier to do the wrong thing, the self-destructive thing, is part of what keeps me up at night, right? We have to work really hard to get ahead of this problem, and, you know, we’ve got a lot of other problems to solve in the near term. But this is such a big threat coming that I don’t see us, as a planet, getting out ahead of it fast enough.
So that’s what’s got me afraid. On the flip side, I’m kind of excited because, talking about narrow artificial intelligence, I certainly think there’s a strong connection here: the way that we personalize the narrow AI we create to treat us the way we want to be treated will be a fantastic body of rules to hand to a general artificial intelligence, right?
Even if we can’t code its ethics from the top down like Asimov tried with his four laws, we might be able to say, “Look, just do a bit of discovery. Take a look at all the rules and customizations that four billion people have asked of their narrow artificial intelligences,” and then it should be fairly easy to infer what it is that humans like. Then we combine that with some top-down instructions like, “Hey, make sure that humans thrive and have a very long-term potential on the planet. Help us overcome the worst parts of ourselves.”
I just think that a smart tack on narrow artificial intelligence is going to set up the general intelligence really well.
Henrik de Gyor: And what advice would you like to give to those fearful of losing their jobs to an AI?
Christopher Noessel: I want to say that because general AI doesn’t yet exist, it’s easy for people to get wrapped around the axle of this fear. You heard me say earlier that I think, of course, most jobs are going to be taken over in time by AI. So the question is really, when? What’s the likelihood of my job being replaced in my working lifetime? I put some thought into this before our call, and I think I’ve got six questions.
The first is: is your job highly repetitive? Is most of it new and difficult, or is it a matter of slogging through known problems? Because if it’s highly repetitive, there’s a higher risk that it’s going to be replaced.
The second is to think about the output you produce as your work. Is it physical or informational? Because of the tools that we currently have, like the internet and the Watson APIs, I think the informational jobs are at greater risk than the physical jobs. A physical job takes either a generally capable robot, which we don’t have (some people are trying to make one, but it’s not there yet), or customized robots, and that takes quite a bit of investment and expertise.
So an informational, repetitive job is at a higher risk. I also think that if your job is quite public, if you deal with people a lot, you are at a lower risk. If you deal with machines or words or just bits, then you are at a higher risk, because there is less of a public to worry about, or backlash to get. Though that has to be tempered with the question of the stakes: if the stakes of your job are very high, then your risk is lower.
Think of judges: if the legal system is working properly, judges are only handling unusual cases. And we all consider, in a culture with the death penalty as an option, that their job is very high stakes. We don’t want anyone to suffer unnecessarily, or families to suffer unnecessarily, because of a mistake by an AI.
So if the stakes are really high, then I think the risk is very low, because humans will be loath to hand the job over, and there will be the risk of backlash against it. And you can kind of take that as the fifth one, right? How sensitive is your job to social media backlash? That’s, of course, tied into how public it is and what the stakes are.
But the last one is the question: can a child of 12 reasonably do your job, or does it take a college degree? Your gut instinct might say, “Well, if a kid of 12 can do it, then it’s at a higher risk.” But there’s actually a paradox that’s been identified for a long time, called Moravec’s paradox, which says that the things a human finds harder to do are going to be much easier for a computer to do, and vice versa.
Things such as simply balancing a pencil on a finger, which a kid of 12 can do, computers find inordinately difficult. So if your job can be done reasonably well by a 12-year-old, with all the other caveats that we discussed, then I think your job is safer, or has less risk. So it’s some combination of those six questions, I think, that can give people a grasp on it. Not a firm grasp; it’s still a numbers game, a probability game.
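[Editor's note: the six questions above can be imagined as a rough scoring heuristic. The sketch below is purely illustrative; the function name, weights, and neutral baseline are invented for this example and are not anything Noessel proposes.]

```python
# Illustrative sketch of combining the six job-risk questions into one
# crude score. Weights are arbitrary; treat the output as a conversation
# starter, not a prediction.

def automation_risk(repetitive: bool, informational: bool,
                    public_facing: bool, high_stakes: bool,
                    backlash_sensitive: bool, doable_by_a_12_year_old: bool) -> float:
    """Return a rough risk score between 0.0 (low) and 1.0 (high)."""
    score = 0.0
    score += 0.25 if repetitive else 0.0           # repeatable patterns automate first
    score += 0.25 if informational else 0.0        # bits are cheaper to replace than atoms
    score -= 0.15 if public_facing else 0.0        # a visible public raises the bar
    score -= 0.20 if high_stakes else 0.0          # high stakes keep humans in the loop
    score -= 0.10 if backlash_sensitive else 0.0   # backlash risk deters replacement
    # Moravec's paradox: tasks easy for a child are often hard for machines.
    score -= 0.15 if doable_by_a_12_year_old else 0.0
    return max(0.0, min(1.0, score + 0.5))         # start from a neutral 0.5 baseline

# Example: a repetitive, informational back-office job with low stakes
# scores high; a public, high-stakes, hands-on job scores low.
print(automation_risk(True, True, False, False, False, False))
print(automation_risk(False, False, True, True, True, True))
```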
That’s part one: how quickly might my job be replaced? Don’t let the fear of it take over. Get some picture of how dire your circumstance is. Then, if you feel that you are at very high risk, you can look for jobs that have less risk and get retrained for those. If you believe you’ve got the runway, make the career shift towards that less risky job.
So that’s from an individual perspective: identify the risk and try to mitigate it as best you can. But I also think that, as citizens, we have other responsibilities, right? Jerry Kaplan wrote in his book, Humans Need Not Apply, about job mortgages. There’s also the concept of universal basic income. These are things that aren’t getting a lot of traction politically, but that we need to be talking about in order to get ahead of the AI revolution.
So get involved politically. Support candidates who, yes, are dealing with our current, right-now problems, but are also thinking about what we do as a society and as an economy to forestall the problems of artificial intelligence, and even about the creation of government agencies, like France has with France Stratégie, or the panel the EU has dedicated to AI from both an ethics perspective and an economics perspective.
So getting together in citizens’ groups in order to make that voice heard, and to get some real voting power behind it, is another thing that people can and ought to do, I think.
Henrik de Gyor: Well, thanks, Christopher.
Christopher Noessel: Sure. And you know, I failed to give the full details of the book, and I probably ought to do that. If any of your listeners are designers or product managers and are interested in the concept of agentive tech, the full title of my book is Designing Agentive Technology: AI That Works for People, and it’s published by Rosenfeld Media. You can learn more about it at rosenfeldmedia.com.
Henrik de Gyor: And for more on this, visit aifears.com. Thanks again.
Join us regularly for topics and posts on the current state of AI.
Subscribe and participate in the discussion.