AI Unplugged: My Friend, the Chatbot
Businesses want AI to be companions, help desks, and search tools. But can it really be all three?
Get Smart
When considering how customers interact with a brand, it's important to understand that it's not just about the offering itself, but how customers feel about the product. And how they feel can be influenced by surprising things. To see how things can go awry, let's look at two examples.
In the world of AI, these two examples become increasingly relevant when a more moldable digital persona is involved -- not the static prices of JC Penney or the recipes of Betty Crocker. In short, AI risks being smart enough that it makes customers feel dumb.
Your AI Judge...
Writing in Harvard Business Review, Sarah Lim of the University of Illinois Urbana-Champaign and Stijn Van Osselaer of Cornell University described 10 studies, covering more than 5,000 participants, in which people whose requests were granted by a person felt more joy than those whose requests were granted by AI. When requests were rejected, however, participants felt the same whether the rejection came from an AI or a person. Why?
When there was good news -- when a human being accepted the participant -- that was considered "better" because participants believed that more thought was put into the process. When there was bad news, it didn't much matter who was doing the rejecting -- participants still weren't getting what they wanted.
Not surprisingly, this has serious implications for customer-facing businesses. It's important for humans to be involved (or at least have the appearance of being involved) when delivering good news, and less so when delivering bad news.
But sometimes it's all about the customer's attachment to a product. Creative expression is a deeply human activity, and having an AI get involved can feel threatening. Lifestyle brands are particularly vulnerable here: for them, automation can feel like an attack on an individual's identity merely by existing.
...and Your AI Friend
I mentioned above the importance of at least having the appearance of humans being involved. Generative AI can do a pretty good job of that. It turns out that when AIs are humanized -- when they have names, avatars, and talk like people -- customers respond to them much as they would to a human customer service agent.
It's also important for brands to make clear their AI agents aren't replacing a person, but helping them. For identity brands this is a must, or else the AI can feel like a direct threat to the customer's hobby or livelihood. This matters for broadly appealing brands too; we don't want AI to feel smarter than the customer, lest we repeat the errors of Betty Crocker and JC Penney.
This is new. Chatbots have been around for some time now, but Large Language Models (LLMs) and generative AI are changing the game:
This is why ChatGPT became the fastest consumer product to scale to 100 million users despite clear product limitations. True conversational AI is undeniably entertaining -- computers now have a personality. Unlike humans, AI-powered conversation partners are always available, interested in talking with you, and can discuss any topic. This has made AI companions, in our opinion, one of the first few killer use cases of generative AI for everyday consumers.
"AI companions" is a very broad umbrella. What space are humans supposed to make in their lives for these digital beings? Replika positions its AI companions somewhere above a robot vacuum, a car, or a pet, but somewhere below a family member, a boyfriend or girlfriend, or a therapist. If you're skeptical, try asking an AI directly (by process of elimination) whether it should rank above or below a friend, a pet, or a robot in a user's emotional life. According to Replika CEO Eugenia Kuyda, they can be all of these at once:
It's a virtual being, and I don't think it's meant to replace a person. We're very particular about that. For us, the most important thing is that Replika becomes a complement to your social interactions, not a substitute. The best way to think about it is just like you might a pet dog. That's a separate being, a separate type of relationship, but you don't think that your dog is replacing your human friends. It's just a completely different type of being, a virtual being. Or, at the same time, you can have a therapist, and you're not thinking that a therapist is replacing your human friends. In a way, Replika is just another type of relationship. It's not just like your human friends. It's not just like your therapist. It's something in between those things.
Gone are the days of chatbots just pointing to online help files you could have found anyway. If ChatGPT can talk conversationally, any AI service "person" can do the same. The sooner businesses stop naming their AIs "[INSERT NAME] AI Assistant," the faster consumers will begin accepting them as viable substitutes for human service people.
As long as they don't seem smarter than us.
Please Note: The views and opinions expressed here are solely my own and do not necessarily represent those of my employer or any other organization.