AskDEB
Adapting a generative AI tool for Tyson’s internal employees.

Setting the Stage
What is AskDEB?
Under the hood
AskDEB (DEB: Digital Enablement Bot) is Tyson’s proprietary generative AI tool. It connects to the APIs of popular LLMs (Large Language Models) such as ChatGPT, Gemini, and Claude, while enabling secure access to internal data and tools. The UX team was brought in early to ensure the tool was designed to meet user needs, focusing on crafting an interface that enables Tyson employees to use AI effectively and responsibly.
On the surface
AskDEB features a conversational UI similar to most popular generative AI applications (e.g., ChatGPT, Claude) but styled with Tyson’s brand image. We also developed a new “Custom GPT” feature where team members can create, test, and share their own GPTs tailored to their daily tasks or team-specific needs.
Why AskDEB?
The main reason for developing AskDEB is cost savings. Existing products like ChatGPT, Copilot, or Gemini require paying for seats for all employees: 130,000 Tyson employees in total, or 30,000 seats if limited to corporate roles. In contrast, building an internal platform allows the company to pay based only on total usage, making it a far more cost-efficient solution for the time being.
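The seat-versus-usage trade-off above can be sketched with a quick back-of-the-envelope comparison. All prices and adoption figures below are illustrative assumptions, not Tyson’s actual numbers:

```python
# Illustrative comparison only: every number here is a hypothetical
# assumption, not Tyson's actual pricing or usage data.

SEAT_PRICE_PER_MONTH = 25      # assumed per-seat subscription cost (USD)
CORPORATE_SEATS = 30_000       # corporate roles, per the estimate above

# Seat-based licensing: pay for every seat, used or not.
seat_cost = SEAT_PRICE_PER_MONTH * CORPORATE_SEATS

# Usage-based API billing: pay only for what is actually consumed.
ACTIVE_USERS = 5_000           # assumed monthly active users
AVG_COST_PER_ACTIVE_USER = 4   # assumed API spend per active user (USD)
usage_cost = ACTIVE_USERS * AVG_COST_PER_ACTIVE_USER

print(f"Seats: ${seat_cost:,}/mo vs usage: ${usage_cost:,}/mo")
```

Under these assumptions, usage-based billing is an order of magnitude cheaper while adoption is still ramping up, which is the core of the cost argument.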
The Challenge
Frankly speaking, generative AI is a relatively new concept for all of us: users, developers, and designers. Everyone is still learning how to interact with these LLMs.
While the “chatbot” interface seems to be successful for ChatGPT, there are several challenges when it comes to enabling free-form conversations for general users:
- Lack of Affordance: A simple chat interface with a large input box labeled “Ask anything…” provides little direction on how to interact with the tool or what it is capable of doing. This lack of affordance can leave users unsure of how to get the most out of the experience.
- Misunderstanding: Many users are unfamiliar with what an LLM truly is—how it works, its limitations, and how to use it effectively. This can lead to misunderstandings and unrealistic expectations, such as the belief that “AI can do everything” or that it will behave like a human.
- Trust: Closely tied to misunderstanding is the issue of trust. When users have overly high expectations of what AI can achieve, they may lose trust in the tool if the output falls short of those expectations.
Exploration & Research
Given the novelty of the problem space, we conducted a combination of literature reviews and user research to inform our design decisions.
Literature review
We began by reviewing state-of-the-art academic articles on generative AI applications. Here are some of the high-level insights:
- System-Driven vs. User-Driven: Striking a balance between system automation and user control is key. The more system-driven and automated the experience, the more consistent and trustworthy the output tends to be. However, users also need the flexibility to tailor the model’s behavior to their specific needs.
- Abstraction Layer: Most users are not prompt experts. The system should abstract prompting away so that users can interact with the AI without the mental burden of coming up with appropriate prompts.
- Validation: The system should minimize the need for users to validate every output. The more a user must verify everything the machine produces, the less value the machine actually provides.
- Performance: Latency is critical. Interactions should feel instantaneous (under 100ms) to ensure a smooth and engaging user experience.
User Research
We conducted direct user research alongside the literature review by engaging with Tyson’s internal employees. This process revealed two key challenges that needed careful consideration during the design phase:
- Terminologies: Terminology and semantics play a critical role when designing generative AI applications, especially for new tools that may lack established norms. Since generative AI is a relatively new field, terms like "Chat," "Bot," and "Tools" can carry different meanings depending on the context, which can lead to confusion.
- Rethinking Traditional UX Processes: The conventional "UX process" of creating a structured user flow does not readily apply to conversational UI design. Unlike traditional applications, where user actions follow a predictable flow, interactions in conversational UIs are transient and dynamic, making it challenging to define a fixed "flow."
Design Highlights
Case Study 1: Exploring “tools” for general users
What is a “tool”?
A “tool” is custom code that performs the “extra work” that the LLM cannot do on its own.
For example, developers may create a tool to retrieve data from an internal application through an API. The LLM determines when to use the tool, which then provides text-based output for the LLM to process and generate a final result for the user.
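The loop described above (the LLM chooses a tool, the tool returns text, the LLM folds that text into its answer) can be sketched as follows. The function and data names are hypothetical illustrations, not AskDEB’s actual API:

```python
# Minimal sketch of LLM tool calling. The tool names and data below are
# hypothetical examples, not AskDEB's real tools or internal APIs.

def lookup_inventory(plant_id: str) -> str:
    """A hypothetical 'tool': fetches internal data the LLM cannot know."""
    inventory = {"plant-7": "12,400 cases of product X"}
    return inventory.get(plant_id, "no data")

# Registry of tools the LLM is allowed to invoke.
TOOLS = {"lookup_inventory": lookup_inventory}

def run_tool(tool_call: dict) -> str:
    """Dispatch a tool call chosen by the LLM and return its text output,
    which the LLM then uses to compose the final answer for the user."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# Simulated: the LLM decides the user's question needs internal data,
# so it emits a structured tool call instead of answering directly.
tool_call = {"name": "lookup_inventory", "arguments": {"plant_id": "plant-7"}}
tool_output = run_tool(tool_call)
print(tool_output)  # text the LLM would incorporate into its reply
```

The key design point is that all of this happens behind the scenes; the user only sees the final conversational answer, which is exactly why the UI must do the work of explaining what a tool is.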
How should we present the tools to the users?
The behind-the-scenes mechanism of tools can be too much for users to comprehend. Therefore, it falls to the UI design to help users understand what a “tool” is and how to use one.
Below are some of the explorative wireframes showcasing ways to present tools to users:
Case Study 2: Assistant creation for pro users
What is an "Assistant"?
An Assistant, or Custom GPT, is a preset configuration tailored for specific tasks, allowing users to save time by avoiding repetitive instructions.
For example, if a user frequently performs a writing task with a specific format, they can create a "preset" prompt and save it as an Assistant. This eliminates the need to re-enter the same instructions every time they want the generative AI to complete the task.
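Conceptually, an Assistant is a saved configuration (instructions plus model settings) that gets prepended to every conversation automatically. A minimal sketch, with all names and defaults as hypothetical assumptions:

```python
# Conceptual sketch of an "Assistant" preset. The class, field names, and
# defaults are illustrative assumptions, not AskDEB's actual data model.
from dataclasses import dataclass


@dataclass
class Assistant:
    """A preset: instructions saved once, reused in every chat."""
    name: str
    system_prompt: str           # the saved, task-specific instructions
    model: str = "gpt-4"         # assumed default model choice
    temperature: float = 0.3     # assumed default creativity setting

    def build_messages(self, user_input: str) -> list:
        # The saved instructions are prepended automatically, so the
        # user only types the new request each time.
        return [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": user_input},
        ]


# Created once, then reused for every weekly memo.
memo_writer = Assistant(
    name="Weekly Memo Writer",
    system_prompt="Write memos in the team's standard format and tone.",
)
messages = memo_writer.build_messages("Summarize this week's plant updates.")
```

Each new chat only needs the fresh user input; the formatting instructions ride along for free, which is the time-saving benefit described above.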
How should we enable the user to build their custom GPTs?
When designing for Custom GPT creation, the target audience shifts from general users to pro users—individuals who are more experienced with digital applications and generative AI tools. As a result, the focus moves away from simplifying concepts (as with "tools") and toward enhancing functionality and efficiency to meet the advanced needs of these "Custom GPT crafters." Below are some of the explorative wireframes showcasing the workflow of creating assistants:
Testing and comparing the Assistant with the default GPT side by side.
Hide & show the preview window while creating the Assistant.
Result
As this is still a developing area, our primary goal is to help our internal employees gradually adopt this new tool. As we move forward, we anticipate the need for additional features and continuous improvements. For now, however, our main focus remains on user adoption and education.
Number of users since the release of AskDEB in 2024.