LLMs often rely on text-based interactions that resemble command-line interfaces. It feels like going back in time to text conversations with a machine, to the era when MS-DOS was the dominant user interface of the personal computer. LLMs depend on users supplying the correct prompt, including the right context, to produce the desired results. Helping end-users formulate that prompt is crucial for getting the best output from an LLM. Can the LLM suggest better prompts, or interpret an imperfect prompt more intelligently?
Maintaining context over long interactions is difficult for LLMs. It is therefore important to understand the context of your business and the workflow of your end-users. By taking these into account during development, you can build an LLM-based application that caters to specific workflows and contexts. This makes it easier for end-users to write effective prompts and get higher-quality results, because the application already supplies the context the model needs.
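One common way to supply that business context is to have the application prepend it to every user prompt before the request reaches the model. The sketch below illustrates the idea; the context text and the `build_prompt` helper are illustrative assumptions, not tied to any specific LLM API.

```python
# A minimal sketch of context injection: the application combines a
# fixed business context with the user's raw question, so the user
# does not have to restate the domain in every prompt.
# BUSINESS_CONTEXT and build_prompt are hypothetical names.

BUSINESS_CONTEXT = (
    "You are an assistant for a logistics company. "
    "Users ask about shipments, routes, and delivery KPIs. "
    "Prefer concise answers with concrete numbers."
)

def build_prompt(user_input: str) -> list:
    """Wrap the user's question with the standing business context."""
    return [
        {"role": "system", "content": BUSINESS_CONTEXT},
        {"role": "user", "content": user_input},
    ]

messages = build_prompt("Which routes were delayed last week?")
for message in messages:
    print(message["role"])
```

The end-user only types the short question; the surrounding context travels with it invisibly, which is what makes the interaction feel tailored to their workflow.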
Another important aspect of the user interface for LLMs is the output. Currently, most LLMs focus on either text-based or visual output; few combine the two, even though the combination is easier to interpret. In enterprise use cases where data analysis is the main goal, the output of the LLM deserves special attention. Understand how your users actually use the output. Instead of delivering text alone, it is often better to provide visualizations of the data, as they are easier to interpret. Our HDI consultants can help you understand the user workflow and define the right output of the LLM for your user and business needs.
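One practical pattern for this is to instruct the model to answer in a structured format such as JSON, which the application then renders as a chart rather than showing raw text. The sketch below uses a hypothetical JSON reply and a simple text-based bar chart to keep the example self-contained; in a real application the rendering step would use a proper charting library.

```python
import json

# Hypothetical structured reply: the application asks the model to
# respond in JSON (metric name plus label/value pairs) instead of prose.
llm_reply = (
    '{"metric": "orders per region", '
    '"data": {"North": 120, "South": 85, "West": 40}}'
)

def render_bar_chart(reply: str, width: int = 30) -> str:
    """Turn the model's JSON answer into a simple text bar chart."""
    payload = json.loads(reply)
    peak = max(payload["data"].values())
    lines = [payload["metric"]]
    for label, value in payload["data"].items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label:<6} {bar} {value}")
    return "\n".join(lines)

print(render_bar_chart(llm_reply))
```

The point of the pattern is the separation of concerns: the LLM supplies structured data, and the interface decides how to present it, so users read a chart instead of parsing numbers out of a paragraph.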
