Best Practice: Idea Stimulus

Learn best practices for designing Idea stimuli

Written by Leigh Greenberg

In this article:

  • Idea Formatting FAQ

  • Research on Research: Stimuli for Tests White Paper


Idea Formatting FAQ

Why does my Idea format matter?

Ideas that are low quality, confusing, or inconsistent run the risk of clouding your data. To get strong insights, you'll need to begin with strong survey design, which extends to Idea formatting.

Isn't it always better to show visuals so respondents have more to work with?

Not necessarily. Launch-ready visuals can certainly provide a more accurate representation of a product than text alone. However, images that are closer to draft quality than market-ready can introduce unintended biases. Plus, generating visuals just for a study, especially when doing early-stage testing with dozens of ideas, can be time-consuming and costly.

So, text is better?

Visuals can be immensely helpful if an Idea is very novel and unfamiliar to respondents. But, in most cases, text will be sufficient to get great insights. Plus, using text-only Ideas avoids the potential confounds that images may introduce.

Will I get different results if I use text-only vs images?

We have found a very strong correlation in Idea Scores (our metric for in-market performance) when the same products are tested as text only vs. as images. Poor visual representations tend to drive down Idea Score relative to text only. Conversely, text-only Ideas perform worse than visuals when the ideas themselves are unfamiliar or hard to grasp from text alone (e.g., unknown flavors, unprecedented package designs).


Ultimately, it's up to you to decide how to show your Ideas! It's always helpful to have multiple pairs of eyes reviewing to weed out any potential barriers.

Curious how we came to these conclusions? See the study below!


Research on Research: Stimuli for Tests White Paper


Introduction

In the innovation testing industry, there is debate around how to show stimuli – is it better to test your ideas as written text or to develop visuals?

The argument for testing visuals:

  • Visuals are more engaging and improve the respondent experience.

  • If the visuals are launch ready, they are a closer representation of what consumers will see in market.

  • Visuals can help to convey a truly new idea for which respondents may not have a mental reference.

The argument against testing visuals:

  • Developing visuals adds significant time and cost, often requiring resources from cross-functional teams or outside agencies.

  • The process of developing launch-ready visuals forces teams to redirect their energies from ideation to execution.

  • Draft visuals may misrepresent the idea, driving stronger or weaker results vs. what we will see with the market-ready visuals.

This paper reports the results of a parallel test where the same ideas were tested as text only vs. tested with packaging visuals.

Methodology

The research was fielded with 600 people from the UK who had consumed a chocolate bar in the past 4 weeks. The exercise was framed as testing chocolate bar flavors.

We split the sample into two demographically balanced groups where the only difference was the stimuli.

Group 1 was exposed to 14 chocolate bar flavors represented by text only. Group 2 was exposed to 14 chocolate bar flavors represented by packaging visuals.
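
For illustration, a demographically balanced split of this kind can be produced by shuffling respondents once and then alternating group assignment within each demographic cell. The sketch below assumes a pandas DataFrame with hypothetical age_band and gender columns; the study's actual balancing variables and procedure are not published.

    # Minimal sketch of a demographically balanced two-way split.
    # Assumption: respondents sit in a pandas DataFrame with hypothetical
    # 'age_band' and 'gender' columns (the study's real balancing
    # variables are not published).
    import pandas as pd

    def split_balanced(respondents: pd.DataFrame, seed: int = 7) -> pd.DataFrame:
        out = respondents.sample(frac=1, random_state=seed)  # shuffle once
        # Alternate 1, 2, 1, 2, ... within each demographic cell so both
        # groups end up with near-identical demographic profiles.
        out["group"] = out.groupby(["age_band", "gender"]).cumcount() % 2 + 1
        return out

    # Example: 600 respondents, mirroring the study's sample size.
    df = pd.DataFrame({
        "age_band": ["18-34", "35-54", "55+"] * 200,
        "gender": ["F", "M"] * 300,
    })
    print(split_balanced(df).groupby(["group", "age_band", "gender"]).size())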

Calculating Idea Score

To understand the performance of the flavors across the two studies, we used our proprietary Idea Score, a composite metric based on Interest and Commitment scores. The Idea Score has been calibrated using a Hierarchical Bayesian (HB) linear model to optimally predict in-market sales.
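
As a rough illustration of what such a composite looks like, the sketch below blends standardized Interest and Commitment scores with placeholder 50/50 weights. Both the weights and the index scaling are hypothetical; the real Idea Score is calibrated with the proprietary HB model described above, and its formula is not public.

    # Hypothetical composite-score sketch. The 50/50 weights and the
    # 100 +/- 15 index scaling are placeholders, not the proprietary,
    # HB-calibrated Idea Score formula.
    import numpy as np

    def idea_score(interest: np.ndarray, commitment: np.ndarray,
                   w_interest: float = 0.5, w_commitment: float = 0.5) -> np.ndarray:
        z = lambda x: (x - x.mean()) / x.std()  # standardize each input
        blended = w_interest * z(interest) + w_commitment * z(commitment)
        return 100 + 15 * blended               # rescale to an index around 100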

Consistent overall results (Fig. 1 below)

The Idea Scores are highly correlated (r=0.80) across the two groups; the top 4 chocolate bar flavors are consistent across both groups: Cookie Dough, Nutella, Toffee Crunch, and Orange. The lowest-performing chocolate bar ideas are also consistent across the two groups.

The role of visuals in communicating unfamiliar concepts (Fig. 1 below)

The one key difference that emerged was a mid-tier flavor, Smores, which performed significantly better when presented with packaging visuals vs. text only. Our hypothesis is that awareness of Smores is quite low in the UK, so the image helped consumers understand the flavor. When Smores is removed from the data set, the overall correlation improves to r=0.90.
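
The correlation arithmetic itself is simple to reproduce. The sketch below uses made-up scores (the study's per-flavor Idea Scores are not reproduced here) to show how dropping a single outlier flavor lifts the Pearson r:

    # Illustrative only: invented Idea Scores for seven flavors, with the
    # last entry standing in for a Smores-like outlier.
    import numpy as np

    text_only = np.array([78, 74, 71, 69, 55, 50, 62])
    visuals = np.array([80, 75, 70, 67, 53, 49, 86])

    r_all = np.corrcoef(text_only, visuals)[0, 1]  # Pearson r, all flavors

    mask = np.arange(len(text_only)) != 6          # drop the outlier flavor
    r_trimmed = np.corrcoef(text_only[mask], visuals[mask])[0, 1]

    print(f"r (all flavors)     = {r_all:.2f}")
    print(f"r (outlier removed) = {r_trimmed:.2f}")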

The risk of visuals miscommunicating familiar concepts (Fig. 2 below)

We also see that the fruit flavors consistently perform better when tested as text only vs. packaging visuals (though the ranking remains consistent). Our theory here is that the tested graphics did not effectively communicate the flavor appeal of chocolate/fruit combinations. For example, the Fruity flavor has the largest gap in Idea Scores between the two groups (+24 higher in the text-only test). While we can't know for certain why the packaging visuals underperformed the text-only representation, a viable hypothesis is that the packaging visuals placed too much focus on the fruit and not enough on the chocolate.

Conclusions

Testing text-only ideas vs. ideas supported by visuals yields similar results. This is great news for teams who want to make their innovation process more agile: it is usually possible to forgo visuals when testing innovation. In addition, draft visuals may undersell ideas, resulting in poor performance and uncertainty about how to improve them. This may be especially true if you are testing draft visuals against in-market products that have fully developed visual languages. In that case, it's best to represent both your potential innovations and the in-market products with text-only descriptions.

The exception to this is when testing truly new ideas with low familiarity. Here visuals may help consumers to understand what you are offering.

Our general recommendation is to test text descriptions if you don’t have market-ready visuals. A separate study can be used to test package designs for the strongest ideas.



