
How The Visual Web Is Replacing Focus Groups for Consumer Market Research


This guest column is authored by Brian Kim, VP of Product Management at GumGum

“Crowdsourcing has replaced focus groups,” an executive recently declared in a New York Times story titled “Crowdsourcing to Get Ideas, and Perhaps Save Money.” The executive happens to have a vested interest in proclaiming the death of the focus group—his company, UberTesting, offers access to consumers who are willing to be recorded remotely as they try out products and navigate through websites—but he has a point.

Visually driven market research of the crowdsourced variety is definitely having its moment. The old focus-group model of coaxing a dozen or so strangers into a conference room to critique a product is still a huge business, but more and more brands are looking for market-research solutions that are less costly, more immediate and more authentic.

Remotely corralling consumers out in the actual marketplace—as opposed to making them vent, in person, to a moderator in a fluorescent-lit mall conference room somewhere—can surely offer real-world information and nuances that traditional focus groups simply aren’t designed to elicit.

But the truth is, a lot of marketers are overlooking the wealth of product-centric visual information that’s within immediate reach. Consumers are increasingly flooding social-media channels with feedback about products not just through text-based expressions, but by sharing images of products they’re using every day with no prompting or incentive.

In other words, authentic, real-world, visually driven consumer feedback about your products is out there already—you just have to find it and make sense of it.

Thanks to image-recognition technology, brands can now surface consumer product shots regardless of whether a consumer name-checks the brand; logo-detection algorithms can be programmed to pick up on a distinctive logo on a product package—or any logo on any package.
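Production logo detection typically relies on trained neural networks, but the core "find this visual pattern in that image" step can be illustrated with simple template matching via normalized cross-correlation. Below is a toy sketch in Python/NumPy; the function name and toy data are illustrative assumptions, not any vendor's API:

```python
import numpy as np

def find_logo(image, template, threshold=0.9):
    """Slide the template over a grayscale image and return (row, col)
    positions where normalized cross-correlation exceeds the threshold."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    hits = []
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            p_norm = np.sqrt((p ** 2).sum())
            if p_norm == 0 or t_norm == 0:
                continue  # flat region: correlation undefined, skip
            score = (p * t).sum() / (p_norm * t_norm)
            if score >= threshold:
                hits.append((r, c))
    return hits

# Toy example: a 2x2 "logo" planted at position (3, 4) in an 8x8 scene.
logo = np.array([[9.0, 1.0], [1.0, 9.0]])
scene = np.zeros((8, 8))
scene[3:5, 4:6] = logo
print(find_logo(scene, logo))  # → [(3, 4)]
```

Real systems replace the brute-force sliding window with learned features so the match survives rotation, scale and lighting changes, but the principle—scoring every image region against a known brand mark—is the same.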

For example, an Instagram post from PETA labeled “Vegan Snacks at 7-Eleven!” includes such products as Clif Bars and Sabra Hummus—and it was “liked” more than 1,500 times. Again, an image like this can be surfaced through image-recognition technology that recognizes distinctive product logos, even though the Instagram post’s caption didn’t mention any of the products shown in the photograph.

Should the makers of these products do a better job of pointing out their vegan-friendliness? Those are product-positioning and packaging-design judgment calls—judgment calls that Instagram images like this one bring to the fore.

In one recent case, the makers of Hidden Valley Ranch, a condiment long marketed as a salad dressing, used image-recognition technology to find social-media images of their product in use. What they discovered was that more and more consumers were using it as a dip for bar foods such as chicken wings. Using that real-world intel, Hidden Valley Ranch revamped its product packaging to position it as a “Topping and Dressing”—and the version that comes in a squeeze bottle even includes a “serving suggestion” photo showing it as a dip for veggies and wings.

Even just a few years ago, it was impossible to “listen” to uncaptioned social-media images, much less gather in-the-field market intelligence from so large and diverse a group of people.

But thanks to massive advances in image-recognition technology, the millions of consumers “talking” every day about products—product benefits, product shortcomings, use cases and more—via social-media images can finally be heard. And smart companies will use that visually crowdsourced intelligence to optimize the positioning of their products.

Image Credit – GumGum

Have ideas to share? Submit a post on iamwire
