Artificial intelligence is here to stay, though what is meant by the term is often confusing. There are the large language models of ChatGPT fame, which use probabilistic models trained on existing sources to generate text, and even images, that appear to humans indistinguishable from what we produce ourselves. However, most of what we consider to be AI is still very much based on algorithms, not large language models.

An algorithm is, at bottom, a series of if-then statements that can be as simple or as complex as desired. Even when simple, algorithms can be very powerful. Social media sites, such as Facebook, use algorithms to determine what content is pushed to us.
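To make that concrete, here is a deliberately oversimplified, hypothetical sketch in Python of a feed ranker built from if-then rules. Nothing in it comes from Facebook or any real platform, whose actual algorithms are proprietary; it only illustrates how a few simple conditions can decide what a user is shown.

```python
# Hypothetical illustration of a content feed built from if-then rules.
# All rules, weights and field names are invented for illustration;
# they do not describe Facebook's or any real platform's algorithm.

def score_post(post, user):
    """Assign a priority score to a post for a given user."""
    score = 0.0
    if post["topic"] in user["liked_topics"]:
        score += 10          # if the user has shown interest, then boost it
    if post["engagement_rate"] > 0.05:
        score += 5           # if others react strongly to it, then boost it
    if not user["liked_topics"]:
        # if the user has given no signals at all, then fall back on
        # whatever provokes the strongest reactions from other users
        score += post["engagement_rate"] * 100
    return score

def build_feed(posts, user, limit=10):
    """Rank candidate posts by score and return the top few."""
    return sorted(posts, key=lambda p: score_post(p, user), reverse=True)[:limit]
```

Even in this toy version, note what the last rule does: a brand-new user who has expressed no preferences is simply shown whatever content provokes the strongest reactions from everyone else, whatever that content happens to be.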

Consider a recent experiment in which journalists created a brand-new email address to sign up for Facebook, providing only the information that the account’s owner was a 24-year-old male. The journalists took no action that might indicate any preferences whatsoever, not “liking” or posting or interacting with anything, so that they could begin to open the black box of Facebook’s algorithms. By the third month, “highly sexist and misogynistic images ... appeared in the feed without any input from the user.”

This is extremely troubling, but it is also subtle enough that we don’t necessarily see the peril immediately. That certainly was the case for Molly Russell, the young British girl who felt sad and said so on social media, only for the algorithms to serve her ever more extreme videos of self-harm until she took her own life. Facebook whistleblower Frances Haugen noted at an inquest that the algorithms likely showed Russell harmful content before she had even searched for it, much as unfolded in the journalists’ experiment.

The power of the algorithms is simply not apparent until we are shocked into realizing the harm they have done. Because the algorithms are secret — the sanitized term is “proprietary” — they remain behind the curtain until the occasional reveal of their utter inhumanity. Who in their right mind — or right heart — would push self-harm videos into the social media feed of a sad teenaged girl?

What is worse is that the use of algorithms is now seeping into our governance structures. Indeed, as a co-editor of “The Oxford Handbook of AI Governance,” I have tried, along with my colleagues, to illuminate the issues of concern and to press urgently for greater regulation of the use of AI, including the use of AI by government institutions.

It has been argued, for example, that AI can make the decisions of government officials such as judges and police officers less biased and more consistent. China is pioneering “smart courts,” which began as an effort to provide AI support for judges, with the AI offering recommendations, surfacing relevant case precedents and drafting documents, but which have moved beyond decisional aid to greater decisional power. According to some reports, if a judge disagrees with the AI algorithm’s recommendation, he or she must provide a statement justifying that choice.

This creep from AI as decisional aid to AI as decision-maker is subtle and gradual, but it has immense implications. Again, we will only see those implications when we are shocked by them.


In Utah, police officers are now required to perform a lethality assessment when called out to domestic violence situations, a positive step toward removing officers’ unconscious biases in such volatile circumstances. Spain went one step further, perhaps a step too far, with this idea. When domestic violence perpetrators there are considered for bail or release, a similar lethality assessment is used to determine whether the victim would be at risk if the perpetrator were freed. The greater the risk, the more protection the victim is offered.

How is this determination of risk made? Not by a human, but by an algorithm, of course. If the algorithm determines the victim is at low or negligible risk, the judge is then justified in releasing the perpetrator back into the community and not providing support to the victim. I bet you can guess the resulting headline: “An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.”

When 32-year-old Lobna Hemid reported to the police that her husband was beating her, they asked her 35 questions. Her answers were fed into the algorithm Spain uses to assess the risk of future harm, a system called VioGén. VioGén duly produced a score indicating that Hemid was at low risk, and so her husband was released from jail. Seven weeks later, he brutally stabbed her to death and then killed himself. Her four young children were in the house when it happened.
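The questions, weights and thresholds VioGén actually uses have not been made public, so the following Python sketch is purely hypothetical. It only illustrates the general shape of a questionnaire-based risk score: yes/no answers are weighted, summed and mapped to a risk label.

```python
# Purely hypothetical sketch of a questionnaire-based risk score.
# The items, weights and cut-offs are invented for illustration and
# do not represent VioGén, whose model has not been released for audit.

# Invented weights for a handful of yes/no items.
QUESTION_WEIGHTS = {
    "prior_physical_violence": 3,
    "threats_to_kill": 5,
    "access_to_weapons": 4,
    "escalating_severity": 3,
    # a real instrument would have many more items (VioGén asks 35 questions)
}

# Invented cut-offs mapping the total score to a risk label.
THRESHOLDS = [(12, "high"), (7, "medium"), (3, "low"), (0, "negligible")]

def assess_risk(answers):
    """Sum the weights of the 'yes' answers and map the total to a label."""
    total = sum(QUESTION_WEIGHTS[item] for item, yes in answers.items() if yes)
    for cutoff, label in THRESHOLDS:
        if total >= cutoff:
            return label
    return "negligible"
```

A scheme like this is only as accurate as its weights and cut-offs, and any danger the listed items fail to capture simply does not count: a victim can be scored “low” or “negligible” no matter how grave her situation actually is.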

Indeed, up to 14% of the women deemed to face low or negligible risk by VioGén were found to have been subsequently harmed. And in a judicial review of 98 homicides of women whose situations had been scored by VioGén, 56% had been deemed at low or minimal risk by the algorithm.


Of course, humans can override the system; in Spain, the police are taught they can reject VioGén’s conclusions. However, 95% of the time, VioGén’s determinations are accepted. And why not? It’s easier for the police to defer to VioGén, and the algorithm will be to blame if women wind up being harmed. Troublingly, “The government also has not released comprehensive data about the system’s effectiveness and has refused to make the algorithm available for outside audit.”

Secret algorithms, powerfully affecting human lives, are not only unregulated but increasingly serve as convenient scapegoats when something goes wrong. We humans are outsourcing our moral and ethical judgment in situations where lives literally hang in the balance. We know people will be harmed by this outsourcing; what is perhaps most chilling is that no one seems to care enough to stop this trend.

While some see AI in apocalyptic terms — think Skynet from “The Terminator” — there are more subtle, less obvious perils that already beset AI’s integration with human society. How many more Molly Russells, how many more Lobna Hemids will there be while we look the other way?

Valerie M. Hudson is a university distinguished professor at the Bush School of Government and Public Service at Texas A&M University and a Deseret News contributor. Her views are her own.
