
Beware of Using ‘Digital Focus Groups’ for Human Insights


Noted advertising futurist Rishad Tobaccowala recently published an article in which he claims, in response to Bill Gates, that “AI will have far greater impact than the microprocessor, the PC, the internet and the mobile phone.” And he’s not wrong. AI will fundamentally change our lives. 

But there’s one problem: Brand managers are significantly misinterpreting what it means for advertising and marketing. A case in point is the recent flurry of activity and articles about brands creating AI-generated customers for research purposes. This movement goes by many handles, including “digital copycats” and “digital twins,” in which, for example, ChatGPT supplies answers from AI-generated consumers about a variety of products. According to one CEO, “You can essentially embody all the characteristics of specific consumer groups [as digital twins] and start interrogating them.” 

These AI consumer gatherings are also being fashioned—I believe dangerously—as “digital focus groups.” In fact, some go so far as to say that in the future, brands could market to machines that stand in for real people. 

The problem is that intelligent managers are following the hype down deceptive rabbit holes. They are being led to believe that AI, by gathering anonymized consumer feedback from zip codes across the U.S., can provide them with human feedback. When marketers delude themselves that a fundamentally quantitative function, one that crunches piles of data, no matter how elegantly, will give them qualitative insights, they are in for a sea of trouble.

If you are looking for emotional insights to create strong brand positionings, I would take real humans over digital ones any day. Gathering real human insights is expensive and messy, but it is where all the emotional juice that drives great brands lives. Anyone who really understands advertising research knows, for example, that the most expensive and messy form of qualitative research—consumer ethnographies—often uncovers the deepest human truths. 

So, let’s call this what it is: deep dives into data. Is it valuable? Yes. Is it truly human? No.

It can be argued that AI can easily aggregate qualitative results, such as past focus groups or live social media. But these are either stale information or narrow, highly skewed inputs (e.g., tweets). How skewed? When Microsoft unveiled Tay, an AI chatbot built to learn how to engage people through real, casual conversation, it took only a few hours for Tay to start tweeting offensive and racist rants. Why? Because it was learning from the racist rants people on Twitter were throwing at it. You can’t scrape the internet and create an accurate representation of your average consumer, and you can’t create one that has any emotional depth. 
