From buzzword to business impact: Making AI work for insights

“And for those worried that AI marks the end of insights, fear not. As you’ll see, human intelligence, judgment, and oversight are more essential than ever.”
AI promises to revolutionise the insights industry – with more efficient workflows, deeper understanding, and sharper simulation. It all sounds so seductive. But for many teams, the reality feels more like the wild west.
New tools flood the market every week. Demos are dazzling. And everyone seems to be experimenting. Yet in conversations across the industry, one thing is clear: beneath the buzz, there’s real confusion. Where do we start? What should we invest in? And how do we know it’s actually working?
In my experience, the insights teams who are leading the charge on AI – like PepsiCo, Unilever, eBay, and Reckitt – have one thing in common: they start with the business tension point, not the technology.
They look for friction in their current workflow or things they wish they could do better, faster, or cheaper. And only then do they ask: how might AI help?
After 25 years in the insights industry – and the last few working in and with corporate insights teams to shape their AI strategy – I’ve put together a practical, three-step playbook to help insights teams move AI from buzzword to business impact.
Download my AI strategy worksheet for insights teams
And for those worried that AI marks the end of insights, fear not. As you’ll see, human intelligence, judgment, and oversight are more essential than ever: in defining the right problems, fueling the inputs, interpreting the outputs, and transforming how insights are delivered and activated.
Because just like any other technology, the teams that win won’t necessarily be the ones with the flashiest tools. They’ll be the ones who successfully combine the best of human and machine – and embed AI into their ways of working to drive real, lasting transformation.
Step 1: Plan
It all starts with defining and prioritising the right problems to solve. AI is not a strategy in itself; it's simply a means to an end. And without a clear roadmap, you risk wasting time on shiny distractions that won't deliver meaningful or scalable impact.
Start by working with your teams and stakeholders to map the biggest tension points in your current workflows. That's exactly what we did at Mondelēz: bringing together insights leaders, capability teams, agencies, and cross-functional partners from Marketing Excellence and IT to identify where AI could help solve real business problems.
We focused on three core questions:
- Where is time being lost to repetitive, frustrating, manual work?
- Where is data overload drowning out our ability to surface insights and opportunities?
- Where are we stuck in hindsight mode, when the business needs "so what" and "what now"?
In my experience – at Mondelēz and with other corporate insights teams – AI is best suited to address three categories of problems:
- Streamline: freeing up time, money, and mental bandwidth by automating low-value, repetitive tasks
  - Examples: coding open ends, running sentiment analysis, summarising transcripts, drafting briefs or discussion guides, developing topline reports and dashboards, enabling chatbot querying of structured data or knowledge estates.
  - In Action: Fox Sports is working with Focaldata AI to automate the end-to-end qualitative research process – from brief to debrief – freeing teams to focus on framing the problem, validating findings and landing insights into action.
- Surface: finding hidden patterns, insights, opportunity areas or starter ideas in your dataset
  - Examples: surfacing unmet needs from social/search data, combining datasets to flag emerging issues or potential drivers, generating opportunity platforms, identifying whitespace and creating concepts from existing research.
  - In Action: Asahi Beverages and ONE Strategy are using AI to accelerate innovation development – going from data to opportunity areas to concepts in the space of a few days – so the insights team can spend less time on research and more time on the so what.
- Simulate: anticipating how consumers or categories might respond before going to market
  - Examples: modelling future category growth scenarios, predicting audience responses to new creative or innovation ideas, using synthetic data to enhance sample depth, leveraging digital twins to screen, stress-test and optimise ideas.
  - In Action: Ogilvy has been trying out digital twins to pre-screen campaign ideas with different audiences. It doesn't necessarily replace testing, but it allows teams to iterate and optimise in real time and keep the "consumer in the room."
Once you’ve mapped potential use cases, it’s time to prioritise. You may have identified hundreds of possible directions, but as J&J recently observed, 10–15% of AI use cases will end up driving 80% of the value. That means you need to be ruthlessly focused in deciding where to begin.
“AI doesn’t succeed on autopilot. Like any strategic initiative, it requires a clear business problem, disciplined prioritisation, and a realistic understanding of what’s needed to succeed.”
The classic impact vs ease framework is especially useful here. It makes sense to start with high-ease, moderate-to-high impact areas to build early momentum and credibility. That said, it can be valuable to include one or two more complex use cases to start learning what it really takes to scale.
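To make the prioritisation concrete, here is a minimal sketch of the impact vs ease framework in Python. The use cases, 1–5 scores, and quadrant labels below are hypothetical illustrations for structuring the exercise, not recommendations:

```python
# Minimal sketch of an impact-vs-ease prioritisation, assuming simple 1-5 scores.
# All use cases and scores are hypothetical illustrations.

use_cases = [
    # (name, impact 1-5, ease 1-5)
    ("Summarise qual transcripts", 3, 5),
    ("Chatbot over knowledge estate", 4, 2),
    ("Digital twins for concept screening", 5, 1),
    ("Auto-draft discussion guides", 2, 5),
]

def quadrant(impact, ease):
    """Place a use case in the classic 2x2: start with quick wins."""
    if impact >= 3 and ease >= 3:
        return "Quick win - start here"
    if impact >= 3:
        return "Strategic bet - pilot one or two"
    if ease >= 3:
        return "Nice-to-have"
    return "Deprioritise"

# Sort easiest-first, then highest-impact, to surface early momentum builders.
for name, impact, ease in sorted(use_cases, key=lambda u: (-u[2], -u[1])):
    print(f"{name}: {quadrant(impact, ease)}")
```

Even a simple scoring like this forces the "ruthlessly focused" conversation: which two or three use cases land in the quick-win quadrant, and which one strategic bet is worth learning from.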
And throughout, don’t lose sight of the data and human infrastructure required to make AI successful. AI is only as good as the clarity of the brief, the quality of the data, and the oversight applied to its outputs. Too many pilots fall apart because teams underestimate the work involved in collecting, cleaning, and structuring the data – or overlook the validation checks needed to catch hallucinations, bias, or poor outputs.
AI doesn’t succeed on autopilot. Like any strategic initiative, it requires a clear business problem, disciplined prioritisation, and a realistic understanding of what’s needed to succeed.

Step 2: Pilot
With your roadmap in hand, it's time to move from planning to piloting: running focused experiments that deliver real value, accelerate learning and help you determine what's worth scaling.
The first decision is whether to build, buy, or borrow. There's increasing pressure on organisations to build internally, driven by concern about confidentiality and data protection, as well as the desire for competitive advantage. And if you're working with large volumes of proprietary data or developing a full ecosystem (e.g., to diagnose business performance), that may be the right path.
But in many cases, it's smarter to start by buying an off-the-shelf tool or borrowing an existing platform – especially when the use case relies on external data or benchmarks, as with many AI-powered advertising, packaging, or innovation screening tools (e.g., Kantar LINK AI or PRS AI Pack Screener). This allows for faster implementation and easier learning, and doesn't prevent you from bringing the solution in-house or behind your firewall later.
Once you've chosen to buy or borrow, the next step is to find the right partner. And that's about far more than shiny tech. You're looking for a partner who will help embed the tool into your workflows, not just sell you some flashy software. In my experience, the best AI partners excel across four dimensions:
- People: Will they support you beyond the tool? Do they understand your business context? Bring strong strategic thinking? Help drive adoption and customer success?
- Platform: Are their data sources and technology innovative, proven, and easy to integrate with your existing ecosystem?
- Privacy: Do they meet your organisation's data security and governance standards? Are they clear about where and how your data is stored and used?
- Potential to scale: Are they financially stable and invested in long-term partnership? Will they support onboarding, training, and ongoing change management?
Once you've selected your pilot partner – or chosen to build in-house – it's critical to define clear, measurable success criteria. This ensures you know what good looks like and gives you the evidence you'll need to justify scaling. I recommend tracking three things:
- Ease: Are people using the tool? Are they satisfied? Are they coming back?
- Efficiency: Are you saving time or money versus baseline?
- Effectiveness: Is the output as good as – or indeed better than – what a human would deliver?
Ease and efficiency are usually straightforward to measure, since they come with “hard” metrics like time or money saved, adoption, or repeat rate. Effectiveness can be trickier, as it’s often more subjective – unless the output is tangible and testable, like an innovation concept. One partner I worked with set up a structured review process in which insights team members compared AI-generated and human-generated outputs, scoring them on clarity, insightfulness, and actionability. Yes, it took effort – but it created a practical way to quantify improvements in effectiveness and build confidence in the results.
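As a rough sketch of how such a review could be quantified: the criteria follow the clarity/insightfulness/actionability rubric described above, but the reviewer scores and the helper function are hypothetical, not the partner's actual process:

```python
# Hypothetical sketch of a structured AI-vs-human output review.
# Reviewers score each output 1-5 on three criteria; we compare averages by source.
from statistics import mean

CRITERIA = ("clarity", "insightfulness", "actionability")

# Illustrative reviewer scores - not real data.
reviews = [
    {"source": "AI", "clarity": 4, "insightfulness": 3, "actionability": 4},
    {"source": "AI", "clarity": 3, "insightfulness": 4, "actionability": 3},
    {"source": "Human", "clarity": 4, "insightfulness": 4, "actionability": 5},
    {"source": "Human", "clarity": 5, "insightfulness": 3, "actionability": 4},
]

def effectiveness(reviews, source):
    """Average score across all criteria for one source's outputs."""
    rows = [r for r in reviews if r["source"] == source]
    return mean(mean(r[c] for c in CRITERIA) for r in rows)

print(f"AI: {effectiveness(reviews, 'AI'):.2f}")
print(f"Human: {effectiveness(reviews, 'Human'):.2f}")
```

The point is not the arithmetic – it's that a shared rubric turns a subjective debate ("is the AI output good enough?") into a trackable number you can report when making the case to scale.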
And throughout all of this, the human remains essential: to shape the pilot brief, guide how the AI is used, interpret the output, champion adoption, and bring stakeholders along. Because no matter how good the tool is, it's your people who make it stick.
Step 3: Paradigm Shift
Once you’ve piloted and proven what works – and what doesn’t – the real transformation begins. It’s not just about scaling up the best use cases. It’s about reimagining your entire insights ecosystem with AI at its core.
It starts with a clear-eyed evaluation of what to stop, scale, or evolve.
Not every pilot will deliver. Some may prove redundant. Others may fall short or still be too nascent to operationalise. And that’s okay. As we saw with J&J’s AI journey, don’t hesitate to shut down what isn’t adding value. Treat every pilot as a learning opportunity: scale what works, iterate on what’s promising, and stop what doesn’t. This isn’t failure, it’s intelligent iteration. eBay’s AI insights team are masters of this: ruthlessly prioritising use cases, measuring impact, and evolving as they go.
Next – and this is the scary part – you need to move beyond installing shiny tools and start rethinking how insights actually get done.
"We’re living through a generational shift in how knowledge is created, applied, and acted on."
In my experience, it’s not the teams with the flashiest tech that pull ahead. It’s the ones that embed AI into their everyday ways of working, and redesign their processes, platforms, and people to support it.
- Process: redesign your day-to-day workflows to make AI the default. How can you standardise AI-supported synthesis (e.g. for qual debriefs) and what repeatable requests (e.g. desk research, hypothesis generation) can AI own?
- Example: At Mondelēz, we piloted integrating DeepSights AI into every project brief, helping sharpen the focus and build off existing knowledge rather than starting from scratch.
- Platforms: reimagine the systems, tech stack, and infrastructure needed to embed AI seamlessly. How do you change your research commissioning systems to support AI integration? What new APIs, data feeds, data cleaning & curation is required?
- Example: I’m currently partnering with one client to map out the APIs, governance standards and data taxonomies required to support AI-generated content and measurement at scale.
- People: reimagine the culture, skills, and roles needed to make AI stick. What new skills (prompting, validating, interpreting AI outputs) or roles (e.g. AI product owners, insight engineers, research translators) and governance frameworks do you need?
- Example: One CPG client is now training their insights team on prompting and validating outputs, and building out new roles such as GenAI transformation leads and prompt specialists.
When done well, AI isn’t just a way to speed up what you already do, it’s a catalyst to transform the role of Insights. It can free up significant time and headspace by automating repeatable, lower-value tasks. It can fundamentally shift how research, synthesis and analysis are conducted and used. And it can elevate our deliverables from backward-looking reports to forward-looking, real-time recommendations.
But the human role is non-negotiable. AI doesn’t replace human intelligence, it amplifies it. For that to happen, processes and systems must be redesigned, and teams must be enabled and empowered to lead the change.
AI is an inflection point for Insights

This isn’t just a wave of new tools. It’s a moment of disruption – and opportunity – for the entire Insights industry.
We’re living through a generational shift in how knowledge is created, applied, and acted on. For insights teams, this is a chance to completely transform how we work: to streamline the heavy lifting, surface hidden insights and opportunities, and simulate the future. In short, to help Insights complete its shift from reactive service provider to forward-looking, strategic co-pilot.
The future won’t be shaped by machines alone – or by humans resisting change. It will belong to those who combine the best of human judgment with the power of AI.
So, start now:
- Plan the right problems to solve
- Pilot a few focused, high-impact use cases
- And once you've proven what works, prepare your team for a Paradigm Shift – reimagining how your team works, how insights flow, and how decisions get made with AI at the heart
AI isn’t just a technology transformation. It’s a transformation in how we think, operate, and win.
Let’s not miss it. Let’s own it. Let’s lead it.