Well, first off let me say this.
This post took me some time to write. Mainly because the more I researched the topic of “Automation Bias,” the more concerned I became…very concerned…almost a Hunter S. Thompson level of “Fear and Loathing.” To the point where I called up my co-host for this site, Henrik, and told him I didn’t know how to write about this topic because – as the kids say – “it’s got me shook” and I would probably be rather biased.
After some cajoling and deep breathing exercises – he reminded me of our mission statement:
We here at AIFears are always trying to dispel current misconceptions and “fears” about artificial intelligence, robots and the coming information revolution with research, facts, and clear thinking.
Also, that this was my turn to write a post:(
We here at AIFears talk a great deal about the current state of artificial intelligence and the concerns it generates. For the most part, these fears are born of the news, misconceptions and Hollywood exploitation (robosploitation, lol). However, there are issues that concern us when it comes to the pace of AI development.
One such concern is what I am referring to as ethical apathy*: a lovely blend of both algorithm bias and automation bias. Like a meritage, it can be expensive, life enriching, chic – and it has the potential to ruin your day.
Let’s start with: Automation bias
We generally trust that the program, system or AI is correct. Much like I do with Grammarly when I write anything. I’m not thinking twice when it shows me a red underline and suggests a revision. I take that shit. I’m not worried about my syntax anymore; hell, the prose is my own – just cleaned up a notch so I don’t sound like more of a rambling lunatic. Yeah?
This is what is happening with algorithms right now. Microsoft’s misfortune with Tay is a good example of an extreme instance that is easy to detect. But what if that error is subtle or enrobed in ethical delicacies? (Grammarly chose that word – enrobed – I mashed out something on the keyboard that just started with the letter “e”, see…you see?)
An excerpt from:
Skitka, L. J., Mosier, K. L., & Burdick, M. (2000). Accountability and automation bias. International Journal of Human-Computer Studies, 52, 701–717.
…whereas errors of commission proved to be the result of a combination of a failure to take into account information and a belief in the superior judgement of…
So we have our inherent trust in these systems because – hey, they use algorithms, and that’s math. That superior judgement is based on math. So it can’t be biased – math is math. Right?
Yeah…but no. Let’s take a look at what math we are talking about.
In her book Weapons of Math Destruction, Cathy O’Neil brings to light some serious concerns about the impact of these algorithms. If you haven’t downloaded the audiobook yet, go do it right now…I’m serious, I’ll wait.
(couple days later…)
See what I mean? Whoa right! Oh, you didn’t get the book yet – oh you’re GOING to get it…ok fine I’ll give you an absurdly short takeaway. (honestly, go read/dl this book)
Algorithms are used to help determine life-impacting outcomes such as loan applications, employment and even bail decisions. Hmm, kinda scary, no? Why? Because algorithms like this are already being used throughout the country for decisions that impact lives. How does the algorithm determine these results? What is the basis for the data being used?
None of these questions can be answered, because the companies that sell these solutions are not forced to tell us what they do or how they work. That is all proprietary information they will not disclose, as doing so would expose their IP. Eeesh, right?
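To make the “what is the basis for the data” worry concrete, here is a deliberately tiny, hypothetical sketch – the zip codes, records, and 50% cutoff below are all invented, and no real lending tool is this simple. The point it illustrates: a “model” that just learns historical approval rates reproduces whatever bias the history contains. The math is neutral; the data is not.

```python
# Hypothetical illustration: a loan-approval "model" trained on
# invented historical records. If the history was biased, the
# model faithfully automates that bias.
from collections import defaultdict

history = [  # (zip_code, approved) -- made-up records
    ("10001", True), ("10001", True), ("10001", True), ("10001", False),
    ("60629", False), ("60629", False), ("60629", True), ("60629", False),
]

rates = defaultdict(lambda: [0, 0])  # zip -> [approved_count, total]
for zip_code, approved in history:
    rates[zip_code][0] += approved
    rates[zip_code][1] += 1

def predict_approval(zip_code: str) -> bool:
    """Approve if the historical approval rate for this zip is >= 50%."""
    approved, total = rates[zip_code]
    return approved / total >= 0.5

print(predict_approval("10001"))  # True  (3 of 4 approved historically)
print(print_result := predict_approval("60629"))  # False (1 of 4 approved)
```

Nothing in that code ever mentions race, income, or creditworthiness – yet if zip code correlates with any of them in the historical data, the output does too.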
Here’s a case in point from Hanna Sassaman’s piece in Newsweek.
If I was arraigned today, in dozens of cities and states across the U.S., a “risk assessment algorithm” might have suggested what the magistrate’s decision should be. Would I show up for court if released? Would I be arrested again? Trained on thousands of criminal records and weighing anywhere from a handful to dozens of factors, the computer would have spit out a recommendation for the judge to consider—set a bail, send her home with conditions, or release on her own recognizance.
Moreover, these systems do not determine results on an individual basis; if you are deemed unqualified for any of these things, there is no appeal. Sorry, the machine here says you don’t qualify for that job or for credit on a home loan – it doesn’t care what you may say to change that. The algorithm does not consider you a unique snowflake – despite what your 1st grade teacher said.
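For a mechanical sense of what a tool like this might look like, here is a hypothetical sketch. Every factor name, weight, and cutoff below is invented for illustration – real risk-assessment vendors keep theirs proprietary, which is exactly the problem.

```python
# Hypothetical pretrial "risk assessment" score. All factors,
# weights, and thresholds are invented -- they stand in for the
# proprietary internals we are not allowed to see.

WEIGHTS = {
    "prior_arrests": 1.5,
    "failed_to_appear_before": 2.0,
    "age_under_25": 1.0,
    "unemployed": 0.5,
}

def risk_score(defendant: dict) -> float:
    """Weighted sum of whichever factors are present; higher = 'riskier'."""
    return sum(WEIGHTS[k] for k, v in defendant.items() if v and k in WEIGHTS)

def recommendation(score: float) -> str:
    # The cutoffs themselves encode policy choices nobody gets to appeal.
    if score >= 3.0:
        return "set bail"
    elif score >= 1.5:
        return "release with conditions"
    return "release on recognizance"

defendant = {"prior_arrests": True, "failed_to_appear_before": False,
             "age_under_25": True, "unemployed": True}
print(recommendation(risk_score(defendant)))  # score 3.0 -> "set bail"
```

Notice that being young and unemployed moves the needle exactly as much as a prior arrest here – a value judgment baked into numbers the defendant never sees.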
Let’s take a look at Algorithmic Bias.
This is what is at the heart of the dilemma. We are asking coders to write programs that decide ethical problems for us. Are they experts in ethics? Are they familiar with the intricacies of civil rights? Are they taking into consideration all the parameters that make an effective solution? Do they understand the nuances of the solutions their algorithm is offering? Has it been tested in real simulations? Do they drink too much Red Bull?
Apathy and laziness. It’s a program, we say; it’s got to be correct. I don’t have time to test this – I’m not even qualified to test whether programs are calculating results in an ethical fashion – it’s not my job. The AI just makes my job easier. Also, it’s already been thoroughly tested, otherwise it wouldn’t be used, right? Right?
Think about that. If the algorithm determining your bail (the kind mentioned in Sassaman’s article) decides you are a flight risk and your bail is set – wouldn’t you like to know how it came to that judgment? Well, you can’t; that’s proprietary information that the developers jealously guard as IP.
Dystopian hallucinations of a society blinded into apathy by monitors giving absent-minded instructions about actual human lives in a horrifyingly impactful manner. Tragic comedy. Are we seeing the seeds of the banal dictatorship of our algorithmic overlords?
Ok, let me stop before this post becomes a pop culture poetic lament on society.
There needs to be an ethical hand guiding this development as we eagerly deploy AI as a cost-saving measure, as put by Brandon Purcell in his Forrester.com blog post:
We are at a pivotal moment as a species. We can either use AI for good or allow it to cement and reinforce past inequity. If we are lazy, it will do just that. But if we are thoughtful and vigilant, AI can have a positive impact on all people. At least, that is my hope.
Well, Brandon – I’m with you on that. Fingers crossed and all – but do we leave it to the developers, or to the companies making ridiculous profits from this?
The major players in the AI/ML space claim that this is a course correction that will need to be made. Fair enough, the technology is still new. But my question is: since these systems have the potential to impact millions of human lives, should we wait until something occurs before that course correction is implemented? I don’t feel good about that. When I google videos on these topics and look at the number of views on YouTube – sheesh – there seems to be little interest in being proactive. Also, where’s the profit in being ethically responsible? When I don’t see a revenue stream for these companies, my faith in their “thoughtful vigilance” is dubious at best.
So that’s what we have here. Algorithms being used without regard or care for how they do what they do. Maybe “rise” of these biases is not entirely correct – algorithms have always behaved in this way. We are just now seeing what happens when we put them to use on a larger playing field with actual consequences.
AI fear? Yes, and quite a big one – your thoughts on this conversation are welcome, comment below.
*As it pertains to algorithms – and if I am the first person to call it that, W00T! If not – sorry I didn’t reference you – contact me and I will put your credit right here :)
DISCLAIMER: None of the ideas expressed in this blog post are shared, supported, or endorsed in any manner by either author’s employer or company. This is a personal blog. Any views or opinions represented in this blog are personal and belong solely to the blog owner and do not represent those of people, institutions or organizations that the owner may or may not be associated with in professional or personal capacity unless explicitly stated. Any views or opinions are not intended to malign any religion, ethnic group, club, organization, company, or individual.

All content provided on this blog is for informational purposes only. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The owner will not be liable for any errors or omissions in this information nor for the availability of this information. The owner will not be liable for any losses, injuries, or damages from the display or use of this information.

Downloadable Files and Images: Any downloadable file, including but not limited to pdfs, docs, jpegs, pngs, is provided at the user’s own risk. The owner will not be liable for any losses, injuries, or damages resulting from a corrupted or damaged file.

Comments: Comments are welcome. However, the blog owner reserves the right to edit or delete any comments submitted to this blog without notice due to: comments deemed to be spam or questionable spam; comments including profanity; comments containing language or concepts that could be deemed offensive; comments containing hate speech, credible threats, or direct attacks on an individual or group. The blog owner is not responsible for the content in comments.

This blog disclaimer is subject to change at any time.