AI at Tyson Foods
The messy reality of AI adoption in the corporate world.

Foreword
In late 2024, I had the privilege of being involved in the first version of AskDEB — Tyson's internal generative AI application. However, the results weren't great: daily usage rates were very low. In early 2025, six months after the first release, we decided to revisit AskDEB, and thus began a journey of discovering the messy reality of AI adoption in the corporate world.
This is more of a snapshot than a case study. It doesn't have an ending yet — only stories to tell about what's currently happening, what went wrong, and how we can do better.
A 90-Year-Old Giant Catching Up with AI
The story began when everyone started talking about AI. Although Tyson Foods, a 90-year-old food company, was never a pioneer in technology, it didn't want to be left behind. But how do we catch up? How do we use AI to power the business?
There were many options on the table. We could subscribe to popular platforms such as ChatGPT or Gemini, or pay for the built-in AI features of enterprise software such as SAP and ServiceNow. But subscriptions are expensive, and locking into a single platform or AI model felt risky.
With cost in mind, the IT team decided to test the waters by building our own AI interface: AskDEB. It connects to the APIs of popular AI models while enabling secure access to internal data and tools. This way, Tyson Foods pays only for what it actually uses: maximum flexibility, minimum risk.
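To make the pay-per-use idea concrete, here is a minimal sketch of the kind of thin wrapper such an interface might put around a model API. The route_chat helper, the default model name, and the example prompt are illustrative assumptions, not AskDEB's actual implementation.

```python
# Sketch of a thin, usage-billed wrapper around a model API.
# route_chat(), the default model, and the prompt are illustrative assumptions,
# not AskDEB's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def route_chat(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send one prompt to the chosen model and return the reply text.

    Each call is billed per token, so the company pays only for what gets used,
    with no per-seat subscription.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(route_chat("Draft a short note announcing the new cafeteria hours."))
```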
On paper, it was brilliant.
The Reality Check
However, after putting AskDEB out there for six months, the usage data wasn't looking good. Tyson employees had split into two extremes:
The silent majority: Daily active usage was low. Most employees weren't using AskDEB at all, and those who were used it only occasionally.
The power users: A small group of pioneers had not only adopted AskDEB but were asking for advanced features such as multi-agent systems and custom workflows.
The Research Begins
So the team decided to revisit AskDEB entirely. We wanted to understand the current landscape: how people were actually using AskDEB, what was working, and what wasn't. Our research process had three phases:
- Qualitative research: Interview users from different departments to understand how they use AskDEB and gather initial insights
- Quantitative validation: Test our findings with large-scale surveys
- Follow-up interviews: Return to users with targeted questions to validate and expand on survey results
The Success Stories: AI Finding Its Place
To our surprise, the initial research revealed that although overall adoption rates were very low, quite a few pioneer users had already heavily integrated AskDEB into their daily workflows. Here are three stories that stood out during our interviews:
- The creative chef: A chef from R&D leveraged AskDEB's creativity and image generation capabilities to compress new recipe brainstorming sessions from days to hours.
- The global IT support lead: An IT manager built a multilingual chatbot that answers questions involving complex internal processes.
- The meta-AI users: Perhaps most interestingly, some users used AskDEB to improve how they use AI: writing better prompts, building custom agents, and more. AI was helping people get better at AI.
These weren't just success stories; they were proof of natural adoption when AI met real business needs.
The Trust Gap: Same API, Different Feelings
Despite these success stories, most users admitted they believed ChatGPT gave better results than AskDEB. This puzzled us: our developers confirmed we were calling the same OpenAI APIs. The only real difference was the UI layer. Small details such as sluggish loading states and unpolished markdown rendering subconsciously shaped how users felt about AskDEB.
This was one of our major lessons: craft and taste matter. Even when the underlying API is identical, the UI changes how users feel about the product.
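As one illustration of how much the UI layer matters, streaming tokens as they arrive, rather than waiting for the full completion, is a common way to fix a sluggish loading state. Below is a minimal sketch assuming the OpenAI chat completions API; it is not taken from AskDEB's codebase.

```python
# Sketch: stream the model's reply token by token so the UI never sits on a
# blank loading state. Assumes the OpenAI chat completions API; not AskDEB code.
from openai import OpenAI

client = OpenAI()

def stream_reply(prompt: str, model: str = "gpt-4o-mini") -> None:
    """Print the reply as it arrives instead of waiting for the full response."""
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        # Each chunk carries a small delta of text; skip empty keep-alive chunks.
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

stream_reply("Explain our PTO carryover policy in plain language.")
```

The model and total latency are the same either way; only the perceived responsiveness changes.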
When Features Don’t Make Sense
Another insight: users almost never changed AI models. We'd added this feature assuming people would want to switch between models or versions such as GPT-4o or Gemini Pro.
In reality, users didn't care until the results were bad. Even then, when they decided to try something else, model names meant nothing to them. They only wanted to know which model might work better for their current task.
This led to a quick design pivot: instead of displaying model names, we displayed task categories that actually made sense to users. The lesson: features should make sense to users, not just to developers. What seems obvious to developers or AI pros often means nothing to the people actually using the product.
[fig 1] Before (left): displaying model names; After (right): displaying task categories that actually make sense to users.
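To show what that pivot can look like under the hood, here is a small sketch that maps user-facing task categories to whichever model currently backs them. The category names and model choices are assumptions made up for illustration, not AskDEB's real configuration.

```python
# Sketch of routing by task category instead of exposing raw model names.
# Category names and model IDs are illustrative assumptions, not AskDEB config.
TASK_CATEGORIES = {
    "writing & summarizing": "gpt-4o-mini",
    "data analysis": "gpt-4o",
    "image generation": "dall-e-3",
    "translation": "gpt-4o-mini",
}

DEFAULT_MODEL = "gpt-4o-mini"

def resolve_model(task_category: str) -> str:
    """Return the model behind a task category; users never see the model name."""
    return TASK_CATEGORIES.get(task_category.lower(), DEFAULT_MODEL)

# Example: the UI shows "Data analysis"; the backend quietly picks the model.
assert resolve_model("Data analysis") == "gpt-4o"
```

Swapping the model behind a category then becomes a backend change that users never have to think about.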
Building for Today or Tomorrow?
But this raised a deeper question: do we really need model selection at all?
Today, different models excel at different tasks. But what happens when a single model handles everything well? Do we keep building features, or bet on AI improving and focus on simpler experiences?
The same question applies to many other features the team is currently developing: do we really need custom agents, or will AI models someday be good enough at everything on their own?
Or... maybe someday AI will be so good that we won't even need an app, or an interface, at all?
The Path Forward: Embracing the Messy Reality
Here's the uncomfortable truth: we don't know. We don't know whether the next great AI model will arrive next week, and we don't know what the relationship between humans and AI will look like in the near future.
The only thing we know is how to do research, find problems, build solutions, and iterate. It's messy and unpredictable. That's the nature of design. It's a series of experiments, failures, and maybe surprising successes.
This is why UX is valuable. As tools evolve, users evolve too. We stay grounded in research and make the best decisions we can with what we know today, even if we're proven wrong tomorrow.