AI Unplugged: My Friend, the Chatbot

Businesses want #AI to be companions, help desks, and search tools. But can it really be all three?

Get Smart

When considering how customers interact with a brand, it's important to understand that it's not just about the offering itself but about how customers feel about it. And those feelings can be influenced by surprising things. To see how this can go awry, let's look at two examples.

  • Example #1: When CEO Ron Johnson took over at JC Penney, he abandoned the chain's ubiquitous coupon sales and replaced them with everyday low prices. In theory, everybody won: customers got lower prices, and the retailer no longer had to constantly change store signage for the latest sale. The approach was modeled after Apple, a luxury brand that never runs coupons or sales. The strategy flopped; coupons made customers feel smart and generated "buzz" by giving them something to talk about. Johnson didn't last, and his successors reversed course, reintroducing coupons and deals.
  • Example #2: When Betty Crocker launched cake mixes in the 1950s, the recipe was simple: mix water with the ingredients and bake. But the product didn't sell well. It turned out that baking had become TOO easy, and consumers felt like they were cheating when they used the mix. So the company removed the egg powder from the ingredients and asked customers to beat a fresh egg into the batter. Sales rose, and we're still adding an egg 70 years later.

In the world of AI, these two examples become even more relevant, because a digital persona is far more moldable than the static prices of JC Penney or the recipes of Betty Crocker. In short, AI risks being so smart that it makes customers feel dumb.

Your AI Judge...

Writing in Harvard Business Review, Sarah Lim of the University of Illinois Urbana-Champaign and Stijn Van Osselaer of Cornell University reviewed ten studies, involving more than 5,000 participants, in which people whose requests were granted by a person felt more joy than those whose requests were granted by an AI. Those who were rejected, however, felt the same either way. Why?

When there was good news -- when a human being granted the participant's request -- it was considered "better" because participants believed more thought had gone into the decision. When there was bad news, it didn't much matter who did the rejecting -- participants still weren't getting what they wanted.

Not surprisingly, this has serious implications for customer-facing businesses. It's important for humans to be involved (or at least have the appearance of being involved) when delivering good news, and less so when delivering bad news.

But sometimes it's all about the customer's attachment to the product itself. Creative expression is a deeply human pursuit, and having an AI get involved can feel threatening. Lifestyle brands are particularly vulnerable here: automation can feel like an attack on a person's identity merely by existing.

...and Your AI Friend

I mentioned above the importance of at least appearing to have humans involved. Generative AI can do a pretty good job of that. It turns out that when an AI is more humanized -- when it has a name and an avatar and talks like a person -- humans respond much as they would to a human customer service agent.

It's also important for brands to make clear that their AI agents aren't replacing people but helping them. For identity brands this is a must; otherwise the AI can feel like a direct threat to the customer's hobby or livelihood. It matters for generically appealing brands too: we don't want the AI to seem smarter than the customer, lest we repeat the errors of Betty Crocker and JC Penney.
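What might that look like in practice? Here's a minimal sketch, in Python, of how a humanized but explicitly assistive persona could be baked into an LLM-backed service bot through its system prompt. The agent name "Penny," the prompt wording, and the send_to_llm placeholder are all my own illustrative assumptions, not any vendor's actual product or API.

```python
# Minimal sketch: wiring a humanized, explicitly assistive persona into an
# LLM-backed service bot via its system prompt. The agent name "Penny,"
# the prompt wording, and send_to_llm() are illustrative assumptions, not
# any particular vendor's product or API.

PERSONA_SYSTEM_PROMPT = """\
You are Penny, a friendly member of the customer care team.
- Speak conversationally and in the first person, like a helpful colleague.
- You support the human service team; you do not replace it. Offer to
  bring in a human teammate whenever the customer asks or seems stuck.
- Never talk down to the customer. The goal is for the customer to feel
  smart, not the bot.
"""

def build_messages(user_text, history=None):
    """Assemble a transcript in the common role/content chat-message format."""
    messages = [{"role": "system", "content": PERSONA_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_text})
    return messages

def send_to_llm(messages):
    """Placeholder for a real chat-completion call to whichever LLM you use."""
    return "(model reply goes here)"

if __name__ == "__main__":
    msgs = build_messages("Can I still use my 20%-off coupon on this order?")
    print(send_to_llm(msgs))
```

The design point isn't the plumbing; it's the persona: a name, a conversational register, and an explicit "I help the humans, I don't replace them" stance set before the first customer message ever arrives.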

This is new. Chatbots have been around for some time now, but Large Language Models (LLMs) and generative AI are changing the game:

This is why ChatGPT became the fastest consumer product to scale to 100 million users despite clear product limitations. True conversational AI is undeniably entertaining—computers now have a personality. Unlike humans, AI-powered conversation partners are always available, interested in talking with you, and can discuss any topic. This has made AI companions, in our opinion, one of the first few killer use cases of generative AI for everyday consumers.

"AI companions" is a very broad umbrella. What space are humans supposed to make in their lives for these digital beings? Replika wants AI companions to be uniquely positioned somewhere above a robot vacuum, a car, and a pet, but somewhere below a family member, boyfriend or girlfriend, or therapist. If you're not sure of this, ask any AI (through process of elimination) if they're meant to be treated above or below a friend, a pet, or a robot in a user's emotional life. According to CEO of Replika, Eugenia Kuyda, they can be all of these at once:

It’s a virtual being, and I don’t think it’s meant to replace a person. We’re very particular about that. For us, the most important thing is that Replika becomes a complement to your social interactions, not a substitute. The best way to think about it is just like you might a pet dog. That’s a separate being, a separate type of relationship, but you don’t think that your dog is replacing your human friends. It’s just a completely different type of being, a virtual being. Or, at the same time, you can have a therapist, and you’re not thinking that a therapist is replacing your human friends. In a way, Replika is just another type of relationship. It’s not just like your human friends. It’s not just like your therapist. It’s something in between those things.

Gone are the days of chatbots merely pointing to the online help files you could have found yourself. If ChatGPT can hold a conversation, any AI service "person" can do the same. The sooner businesses stop naming their AIs "[INSERT NAME] AI Assistant," the faster consumers will accept them as viable substitutes for human service agents.

As long as they don't seem smarter than us.

Please Note: The views and opinions expressed here are solely my own and do not necessarily represent those of my employer or any other organization.
