
December 16, 2021

Ethics of Experiential AI


I had the extremely good fortune of growing up under the tutelage of family friend and neighbor (and renowned Canadian philosopher) Marshall McLuhan. A man who more or less predicted the World Wide Web 30 years before it emerged, McLuhan uttered countless impactful and prophetic statements. Here’s one that applies readily to experiential AI:

“We shape our tools and thereafter our tools shape us.”

I thought of this quote recently, as I overheard my kids lightly tormenting Alexa for not delivering reasonable answers as quickly as they wanted them. There was nothing particularly alarming about it—after all, Alexa has no feelings to hurt—but it inspired yet another ethics discussion with colleagues.

On one hand, why should we care about how kids treat an inanimate presence that has no semblance of emotion (and really only a semblance of presence)? On the other hand, what does it say about our species if we default to rude or impatient behavior with a conversational interface simply because it can’t be offended? It’s also possible that fifty years from now we will be interacting with machines that do have something approximating feelings. If we really are shaping tools that will end up shaping us, maybe we should at least be pleasant with our IDWs.

To quote McLuhan again, “the medium is the message.” With conversational AI, every message influences people’s behavior. In designing conversational experiences with machines, we’re always teaching: creating and reinforcing behaviors that will affect all the conversations we have. We’re not just designing interactions with machines, but also the interactions people will have with each other. This is especially true for children, who were born into a world run by technology.

“Our minds respond to speech as if it were human, no matter what device it comes out of,” Judith Shulevitz writes in The New Republic.

Evolutionary theorists point out that, during the 200,000 years or so in which Homo sapiens have been chatting with an “other,” the only other beings who could chat were also human; we didn’t need to differentiate between the speech of humans and not-quite-humans, and we still can’t do so without mental effort. (Processing speech, as it happens, draws on more parts of the brain than any other mental function.)

This would suggest that even though the experience of communicating through conversation can feel almost effortless, it’s extraordinarily complex behind the scenes.

It’s also worth remembering that, in 2019, Alexa told a woman in the UK that she should stab herself in the heart when asked about the “cardiac cycle.” To be fair, Alexa was pulling verbiage from a Wikipedia page when it said, “Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until overpopulation …”

The methods through which artificial intelligence gains its so-called intelligence are no less fraught. You don’t have to look far to find issues relating to inequity, resource distribution, and climate change.

Timnit Gebru, a respected AI ethics researcher, has published research highlighting how facial recognition can be less accurate at identifying women and people of color, and how that can lead to discrimination. The team she helped build at Google championed diversity and expertise, but she was forced out of the company over conflict surrounding a paper she co-authored.

The circumstances surrounding Gebru’s exit are contentious and the sequence of events is unclear, but MIT Technology Review obtained a copy of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, which examines the risks associated with large language models, AIs trained on massive amounts of text data.

“These have grown increasingly popular—and increasingly large—in the last three years,” writes Karen Hao. “They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text—and sometimes at estimating meaning from language. But, says the introduction to the paper, ‘we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.’”

Hao notes that Gebru’s draft paper focuses on the sheer resources required to build and sustain such large AI models, and how they tend to benefit wealthy organizations while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” the authors write in the paper.

There’s also the matter of unconscious bias, which has the potential to infect AI systems. Speaking to PBS about her book The End of Bias: A Beginning, author Jessica Nordell had this to say:

I don’t think it’s a stretch to say that bias affects all of us every day because any time a person is interacting with another person, there’s the opportunity for stereotypes and associations to infect the interaction. These reactions can often happen so quickly and automatically that we don’t actually know we’re necessarily doing them. These are reactions that conflict with our values.

The ramifications of unchecked stereotypes making their way into powerful technologies with decision-making power are frightening to consider. On the other hand, if we are careful to remove bias from these emerging systems, AI could make for more impartial decision-makers than humans could ever be. It seems to me that removing the bias toward self-interest within ourselves may be a greater challenge than equipping machines with unbiased data.
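To make the idea of auditing a system for bias a bit more concrete, here is a minimal, hypothetical sketch (not from this article) of one common check, demographic parity: comparing the rate of favorable automated decisions across groups. The sample data, function names, and the 0.8 "four-fifths rule" threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a demographic-parity check: compare the rate
# of favorable automated decisions across groups. The sample data and
# the 0.8 ("four-fifths rule") threshold are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += int(was_approved)
    return {group: approved[group] / totals[group] for group in totals}

def parity_ratio(rates):
    """Lowest approval rate divided by the highest (1.0 means perfect parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]
    rates = approval_rates(sample)
    print(rates)  # approx. {'group_a': 0.67, 'group_b': 0.33}
    if parity_ratio(rates) < 0.8:  # common rule-of-thumb threshold
        print("Possible disparate impact; inspect the model and its training data.")
```

A check like this surfaces only one narrow symptom, of course; as Gebru’s and Nordell’s work suggests, the harder problem is the data and the human assumptions behind it.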

Then there’s the more philosophical question of our very purpose as sentient bags of meat. If machines can outperform us on more and more of the tasks that were once solely our domain, what’s left for us to do? What value do we have as beings and, more bleakly, what value do we then have to a superior network of machine intelligence? It’s easy to see why the awesome stature of this new wave of technology gets people thinking in terms of Skynet, cyborgs, and other extinction-level events. I try to look at it another way.

My hope is that as machines begin performing the tasks that most humans find utterly redundant and soul-sucking (call-center work, for example), we will be freed up to do what humans do best: solving problems in creative ways. That shift would be of huge benefit to society.

Allowing innovators to innovate (or, in more pedestrian terms, allowing creative people to create) is tied to the degree to which you can relieve them of other chores. When orchestrated correctly, technology can complete those chores with staggering efficiency.

This goes beyond just letting humans spend more time being creative, however. According to psychologist Abraham Maslow, there’s a hierarchy of needs, ranging from basic (“physiological” and “safety”) through psychological (“belonging and love” and “esteem”) to self-actualization, that people move through in order to reach their full potential. These are all areas where technology has provided a boost.

It’s easy to imagine scenarios where conversational AI can meet our needs across the entire spectrum. The World Economic Forum recently reconfigured Maslow’s hierarchy for the digital age and used it as a rubric against a global survey of over 43,000 people across 24 countries, exploring “what an individual requires to achieve their potential in today’s tech-driven landscape.”

Their study suggests that while there are drawbacks to the pervasive nature of technology (only 38% of respondents felt they had a healthy balance in their personal use of technology), many of the negative responses surrounding technology’s role in fulfillment were rooted in access and training. Fortunately, these are things that properly deployed conversational AI can address.

The OneReach.ai platform was designed for orchestrating the kind of hyperautomation that relies on conversational AI, giving everyone access to technology that can let them be the best versions of themselves. If more people have access to technology that requires almost no training to use, technology can continue to elevate people in personalized ways. And if technology is the thing that’s making society better, sharing this kind of technology across societies has the power to raise the quality of life for everyone.

Later in life, Maslow added another level to his hierarchy: “transcendence.”

“Transcendence refers to the very highest and most inclusive or holistic levels of human consciousness,” he wrote, “behaving and relating, as ends rather than means, to oneself, to significant others, to human beings in general, to other species, to nature, and to the cosmos.”

If Terminator is the dark end of the conversational AI spectrum, maybe transcendence is at the opposite pole. It’s a lofty goal, but as conversational AI allows technology to become exponentially more efficient and less of a physical presence (conversation is an interface that can drastically reduce time spent in front of a screen), who’s to say it can’t occupy a support role that lets us be more present beings, opening pathways to higher levels of consciousness?

When talking about AI internally, these questions are critical to address:

  • Will AI replace personnel?
  • Will AI take your organization to new and compounded frontiers of productivity?
  • Will AI narrow the gaps in our society?

The answers to these questions aren’t easy and will depend greatly on what sort of activities we engage in and how we all choose to approach the implementation of these new technologies.

It’s unlikely that you’ll be using a large language model, but you will still need to account for the way your ecosystem collects, interprets, and shares information.
