Protecting the Human: Ethics in AI

Posted: June 11, 2021 at 11:53 am


When we think about the future of our world and what exactly that looks like, it's easy to focus on the shiny objects and technology that make our lives easier: flying cars, 3D printers, digital currencies and automated everything. In the opening scene of the animated film WALL-E, which takes place in the year 2805, a song from Hello, Dolly! plays happily in the background, starkly contrasting with the glimpse we get of our future planet Earth: an abandoned wasteland with heaping piles of trash around every corner. By this point, humans have evacuated Earth and live aboard a spaceship, where futuristic technology and automation have left them overweight, lazy and completely oblivious to their surroundings. Machines do everything for them, from the hoverchairs that carry them around to the robots that prepare their food. Glued all day to screens that have taken control of their lives and decisions, humans exhibit lazy behaviors like video chatting with the person physically next to them.

While yes, this is an animated, fictitious film, many speculate that it could be a somewhat accurate depiction of our future, and I tend to agree. Advancements in AI and technology are meant to make our lives easier, yet they pose a threat to society when they are not perfect. Today, businesses and individuals face many challenges with AI: from tech and social media giants controlling speech on their platforms to services and technologies that speed up processes but apply unintentional bias. When we start relying on algorithms to make decisions for us, that's when things begin to take a turn for the worse, and we get one inch closer to living in a place not too far off from the environment we see in WALL-E. AI can't just be good enough for us to create a better world for ourselves; it must be perfect. Here's why:

An overreliance on AI amplifies the biases that we should be eliminating.

As each year passes, the global use of AI continues to grow. While advancements in AI should be making our lives easier, they're also highlighting some of the implicit biases that many are working hard to eliminate. A study from MIT found that gender classification systems sold by several major tech companies had error rates as much as 34.4 percentage points higher for darker-skinned females than for lighter-skinned males. Likely due to skewed data sets, examples like this present a myriad of problems in decision making, especially in employment recruiting and criminal justice systems. Algorithms that exclude female candidates from traditionally male-dominated jobs, or algorithms that determine a criminal's risk score based heavily on appearance rather than actions, only amplify the biases that we should be removing.
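To see how a disparity like the one the MIT study measured can be caught before a model ships, here is a minimal sketch of a per-group error-rate audit. The groups, records and function name are hypothetical stand-ins for illustration, not the study's actual benchmark or methodology.

```python
# A minimal per-group error-rate audit, in the spirit of the MIT
# gender-classification study cited above. The data below is hypothetical;
# a real audit uses a labeled, demographically annotated test set.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records: (group, model prediction, ground truth).
test_results = [
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

rates = error_rates_by_group(test_results)
# The MIT study reported gaps as large as 34.4 percentage points; a gap
# like that in this printout is a strong signal of a skewed training set.
print(f"Error-rate gap across groups: {max(rates.values()) - min(rates.values()):.1%}")
```

Run against a real test set, a disparity on the order of the one MIT reported would show up directly in that final number, long before the system made a hiring or sentencing recommendation.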

A black-box approach to AI puts our First Amendment rights at risk.

A black-box system, in which users have no transparency into how an algorithm was developed or a model was trained, and no insight into why models make the decisions that they do, is deeply problematic for the ethics of AI. We as humans all have blind spots, so the creation of models and algorithms should involve elevated human context, not just more powerful machines. If we punt all of our decisions to an algorithm and no longer know what's going on behind the scenes, the use of AI risks becoming irresponsible at best and unethical at worst, even putting our First Amendment rights at risk. One study from the University of Washington found that leading AI models for identifying hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans. Biases in hate-speech tools have the potential to unfairly censor speech on social media, banning only select groups or individuals. By implementing a human-in-the-loop approach, sketched below, humans get the final say in decision making and black-box bias can be avoided.
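To make the human-in-the-loop idea concrete, here is a minimal sketch of a moderation pipeline in which the model never gets the final word. Everything in it is an illustrative assumption: the Prediction type, the fake_model stand-in and the 0.95 confidence threshold are hypothetical, not the systems from the University of Washington study.

```python
# A minimal human-in-the-loop sketch for content moderation. The classifier,
# labels and threshold below are hypothetical assumptions for illustration.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # below this, a human makes the final call

@dataclass
class Prediction:
    label: str        # e.g. "hate_speech" or "ok"
    confidence: float

def moderate(post: str, predict) -> str:
    """Return a moderation decision, deferring to a human when the model is unsure."""
    pred: Prediction = predict(post)
    if pred.label == "ok":
        return "published"
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        # Even high-confidence removals are queued for human audit, so the
        # model never silently censors anyone on its own authority.
        return "removed_pending_human_audit"
    return "escalated_to_human_reviewer"

# Hypothetical stand-in for a trained hate-speech model.
def fake_model(post: str) -> Prediction:
    return Prediction(label="hate_speech", confidence=0.72)

print(moderate("example post", fake_model))  # -> escalated_to_human_reviewer
```

The design choice that matters is the default: when the model is unsure, the post goes to a person, and even confident removals are audited by a human rather than applied silently.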

The ethical use of AI is difficult to regulate.

When we start relying on AI to make decisions for us, it often does more harm than good. Last year, WIRED published an article called "Artificial Intelligence Makes Bad Medicine Even Worse," which highlights how diagnoses powered by AI aren't always accurate, and even when they are, the conditions they find don't always need treatment. Imagine getting screened for cancer without having any symptoms and being told that you do in fact have cancer, only to find out later that it was just something that looked like cancer, and the algorithm was wrong. While advancements in AI should be changing healthcare for the better, AI in an industry like this absolutely must be regulated so that a human, rather than a machine, makes the final decision or diagnosis. If we remove the human from the equation and fail to regulate ethical AI, we risk making detrimental errors in crucial, everyday processes.

Protecting the Human: Ethics in AI

AI needs to be better than good. To protect the human, it has to be perfect. If we begin to rely on machines to make decisions for us when the technology is merely good enough, we amplify biases, risk our First Amendment rights and fail to regulate some of the most crucial decisions. An overreliance on less-than-perfect AI may make our lives easier, but it will also make us lazier and potentially accepting of poor decisions. At what point do we begin to rely on the machine for everything? And if we do, will we all end up evacuating an uninhabitable planet Earth, relying on hoverchairs to carry us around and machines to prepare our food for the rest of our lives, just like in WALL-E? As AI advances, we must protect the human at all costs. Perfect is the enemy of good, but for AI, it needs to be the standard.

Read the original post:

Protecting The Human: Ethics In AI - Forbes
