Business leaders and workers have long debated how artificial intelligence (AI) might change the future of work—some of the predictions hopeful, some not. More recently, the development of more sophisticated AI tools has intensified that discussion, and ground-breaking advancements like the rapid rise of ChatGPT have brought the conversation mainstream.
I sat down with Kate O’Neill—best-selling author and founder of KO Insights—who offers a perspective on AI not widely held among senior leaders and executives. Known as the “Tech Humanist,” Kate believes that AI should optimize the human experience, not replace it. Businesses, she argues, need to think beyond how technology can help them meet their business goals. In particular, leaders must consider the positive and negative impacts of new technologies, like AI, on people.
The Human Factor
She often tells her clients, “Let’s not forget to think about the people part of the organization that will be affected by how we roll out the technology.”
Kate contends that the role of technology in the future of work is a crucial subject to get right. However, the mindset frequently coming from executives and leaders is more concerned with boosting profits and efficiencies than with people’s welfare. As a result, the human factor goes missing.
We rarely hear executives or senior leaders publicly address how technology will affect humans. Yet the anxiety team members may experience—wondering how they can continue contributing to the organization as more of the workflow is automated—can become quite serious. That anxiety and stress only intensify when leaders are secretive and closed in their actions, refusing to discuss their plans or how AI might impact people.
Kate suggests we’ve yet to honestly discuss AI’s potential to displace or replace labor.
“We’ve had a very freaked out, ‘hair on fire’ version, and we’ve also had a very dismissive version of it,” she observed. “But we haven’t had the real conversation where we acknowledge that both of those things will happen, so we need to understand what that looks like.”
Meaning and AI
Part of that employer-employee conversation should center around a team member’s feelings or understanding that they contribute to the organization’s goals. A sense of meaning is what separates humans from animals. With the time people spend at work, meaning is essential for job satisfaction. So we should ask: will meaning materialize for workers if they constantly think AI will replace them?
The conversations around artificial intelligence have touched upon how technology will impact social structures, but Kate would like to see “universal basic meaning” discussed too. She encourages organizations to look at the bigger picture of using technology to automate processes and not to forget the concept of meaning.
She said, “I think we haven’t been creative enough about what types of new oversight these tools require, especially regarding how to wrap human nuance and emotional intelligence around the processes that we automate.”
Artificially Intelligent Bias
As Kate rightly points out, human nuance and emotional intelligence, or the lack thereof, will impact internal users, customers, and society. For example, she warns that AI writing tools are limited by the data they work with. These tools aren’t designed to write like the user but to mimic human writing.
This has created new conversations around “tech ethics” and how to ensure AI tools are “trained” ethically, with unbiased data as their source.
For example, an AI writing tool trained on biased data could generate writing with racist, sexist, or misogynistic undertones. The tech ethics community is examining how to mitigate and eliminate this bias. Still, Kate recommends that business owners implement quality control measures to catch these issues before fully launching any AI product or service.
Getting Public About AI
AI isn’t all bad, though. During her experimentation, Kate has found that AI writing tools help her productivity when she uses them to research and overcome writer’s block. “If I’m having trouble with a sentence, I just ask it to start the next sentence,” she said. “Usually, it’s completely wrong, but knowing what you don’t want to write is just as important as knowing what you do want to write.”
With each new technological development, there is typically a trade-off between convenience and privacy. Currently, the general public has no say in this discussion.
As Kate rightly points out, “We [the general public] are not making that decision. I think that law enforcement is making that decision. I think business is making that decision at an operational efficiency level. Those are not conversations that incorporate the larger public in what they feel is an appropriate risk to take or an appropriate trade-off in the long term.”
She provides an example of facial recognition, which is no longer used solely by law enforcement. Instead, businesses are using facial recognition as a convenience measure to shave off mere seconds, for example, in the payment process.
She also points to the Starbucks app that allows users to order their drinks on their commute, identifying certain aspects through artificial intelligence. “They studied the customer experience and thought about everything possible to optimize their business on the side of the barista and optimize the customer experience.”
It’s clear we have a way to go in terms of getting AI right. It seems to me, however, that a lot more thoughtful conversations are required before businesses make any 180-degree shifts. And let’s certainly not lose our humanity in the process.
Watch the full interview with Kate O’Neill and Dan Pontefract on the Leadership NOW program below, or listen to it on your favorite podcast platform.
Pre-order my next book, Work-Life Bloom: How to Nurture a Team That Flourishes, publishing in October. (You won’t want to miss digging in.)