Effectively designing for Generative AI and LLMs

As Artificial Intelligence continues to evolve, many companies are exploring how to integrate Generative AI into their software solutions. Yet they often struggle to find clear use cases because the accuracy of the output is uncertain. That accuracy depends heavily on human input, i.e. the ability to write good prompts. Projects often fail because the implementation does not fit the user and enterprise context, or because the drawbacks of generative AI are not taken into account.

At Bright Cape, our Human Data Interaction (HDI) consultants can help you create value by leveraging the full potential of AI in your enterprise software. In this blog post, we focus on overcoming the challenges of designing Large Language Model (LLM) solutions for the enterprise.

What is a Large Language Model (LLM)?

A large language model (LLM) is a type of generative AI that processes and generates human-like text. It uses algorithms to understand and predict language patterns, allowing it to produce coherent and contextually relevant responses. LLMs are trained on diverse datasets and work by analyzing the input text and generating outputs that align with the patterns learned during training. A commonly known example of an LLM-based product is ChatGPT.
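
To make this concrete, the minimal sketch below sends a prompt to a hosted LLM through the OpenAI Python SDK and prints the generated text. The model name and prompt are only examples, and other chat-style LLM APIs look very similar.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "user", "content": "Summarize the key risks in this contract clause: ..."},
    ],
)

print(response.choices[0].message.content)  # the generated, human-like text
```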

Understand the enterprise context

Many LLM implementations in enterprise software fail to be adopted because they do not fit the specific context and needs of the users. Enterprise software must cater to complex workflows and multiple user groups, each with its own requirements. Therefore, the first step in designing an LLM solution is understanding the specific context and needs of those groups. Many companies are building their own generic, ChatGPT-like assistant, but are not yet tailoring it to the context and needs of the users and their specific workflows. It is important to ensure that the LLM integration aligns with your business objectives and user expectations.

In this process, make sure to prioritize the human element in every design decision. Consider not only the users who interact directly with the LLM but also other stakeholders who might be affected by its deployment or outcomes. Focus on the desired outcomes and define what success looks like to create solutions that deliver real value.

Optimize the User Interface

LLMs often rely on text-based interactions, which can resemble command-line interfaces. It can feel like going back in time to text conversations with a machine, back when MS-DOS was the dominant user interface on personal computers. LLMs rely on users to provide the correct prompt, including the right context, to get the desired results. Helping end-users write the correct prompt is therefore crucial for retrieving the best output from an LLM. Can the LLM suggest better prompts, or understand an imperfect prompt in a better way?
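
One practical answer to that question is to let the model itself rewrite a vague request before it is executed. The sketch below is only an illustration using the OpenAI Python SDK; the instruction text, function name, and model name are assumptions for the example, not a specific product API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is configured

def suggest_better_prompt(raw_prompt: str) -> str:
    """Ask the LLM to rewrite a vague user request into a clearer, more specific prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": "Rewrite the user's request as a clear, specific prompt. Return only the improved prompt."},
            {"role": "user", "content": raw_prompt},
        ],
    )
    return response.choices[0].message.content

print(suggest_better_prompt("make a report about our sales"))
```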

Maintaining context over long interactions is difficult for LLMs. It is therefore important to understand the context of your business and the workflow of your end-users. By taking that into account during development, you can create an LLM solution that caters to specific workflows and contexts. This makes it easier for end-users to write the right prompt and get higher-quality results, as the LLM already understands the context.
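
A common way to do this is to bake the business and workflow context into a system prompt, so the end-user no longer has to type it with every request. The snippet below is a simplified sketch; the context description and model name are fictional placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Fictional example of enterprise context that the end-user no longer has to provide.
WORKFLOW_CONTEXT = (
    "You assist purchasing specialists at a manufacturing company. "
    "Answers must refer to the relevant step in the procure-to-pay workflow "
    "and use the company's order terminology."
)

def ask(question: str) -> str:
    """Answer a user question with the workflow context already included."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": WORKFLOW_CONTEXT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Which approval step comes after a purchase request?"))
```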

Another important aspect of the user interface for LLMs is the output. Currently, most LLMs produce either text-based or visual output, but rarely combine the two, even though a combination can be easier to interpret. In enterprise use cases where data analysis is the main goal, it is important to pay special attention to the output of the LLM. Understand how your users use the output. Instead of delivering text-based output, it is often better to provide visualizations of the data, as they are easier to interpret. Our HDI consultants can help you understand the user workflow to define the correct output of the LLM for your user and business needs.
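
As an illustration, instead of returning prose, the LLM can be asked for structured data that the application then renders as a chart. The sketch below uses the OpenAI Python SDK's JSON mode together with matplotlib; the prompt, field names, and model name are assumptions for the example.

```python
import json

import matplotlib.pyplot as plt
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # example model name
    response_format={"type": "json_object"},  # JSON mode: machine-readable output
    messages=[
        {"role": "user", "content": (
            "Group these support tickets by topic and return JSON shaped like "
            '{"topics": [{"name": "...", "count": 0}]}. Tickets: ...'
        )},
    ],
)

data = json.loads(response.choices[0].message.content)
names = [topic["name"] for topic in data["topics"]]
counts = [topic["count"] for topic in data["topics"]]

plt.bar(names, counts)  # a chart is often easier to scan than a paragraph of text
plt.title("Support tickets per topic")
plt.show()
```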

Understand the drawbacks of LLMs

One main challenge of using LLMs within your business is the accuracy of the output. It is often said that the output is never as good as what an employee can produce manually. Again, by understanding the workflow of your end-users, you can still design a solution that overcomes the drawbacks of the LLM.

Let’s look at the following example: you are developing a solution that helps UX researchers or product teams analyze interview data. The accuracy of the LLM’s output might be lower than what UX researchers can deliver, but the solution can still be valuable when you consider the workflow. For example, a product development team with limited UX resources relies on input from business stakeholders to decide which features to deliver. How much does speed matter in their workflow, such as the time to insight and the time to turn that insight into a product decision? The goal is to deliver features that increase value for end-users. Weigh the potential value gained from including user insights in product decisions against the risk of releasing non-valuable features due to incorrect insights. Additionally, consider how easy it is to undo a product decision if the outcome is negative. This example highlights the importance of looking at the workflow of the Product Owner and the UX Researcher after releasing the LLM solution.

We, as human beings, tend to trust AI results immediately, even though they might not always be 100% correct. Keeping the human aspect and control in the loop can create value even when the output of the LLM is not always correct.
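
A simple way to keep that control is a review step in which the LLM only drafts and a person approves or corrects the draft before it is used. The function below is a deliberately minimal, hypothetical sketch of such a gate.

```python
def review_before_use(draft: str) -> str:
    """Show an LLM draft to a human reviewer and return the approved version."""
    print("LLM draft:\n" + draft)
    if input("Accept this draft as-is? [y/N] ").strip().lower() == "y":
        return draft
    return input("Enter your corrected version: ")

# Example: only the reviewed text is stored or acted upon.
# approved_summary = review_before_use(llm_generated_summary)
```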

Supporting you in delivering valuable AI and LLM solutions

Integrating LLMs into enterprise software is challenging. Understanding the enterprise context, the end-user workflow, and the business goal is key when developing LLM solutions. With that understanding, you can design the best user interface, recognize the drawbacks of the solution, and succeed in delivering LLM solutions that create value for your company.

Our HDI consultants can help you focus on understanding the desired outcomes for both the business and the end-users. We do this by conducting thorough UX research to understand your business, your end-users, and their workflows, and by translating these insights into concrete possibilities and designs for LLM integration.

Looking for support to integrate LLMs into your enterprise software? Learn more here about our approach to Human Data Interaction and how we can help.