AI Prompt - Single-shot LLM Classification/Prompting

Please note that this feature may have licensing implications. If you are unsure whether you have full access, please discuss with your account manager.

Overview

Release 1.9 introduces a structured API and model for performing simple single-shot classification/prompting using state-of-the-art LLMs.

Examples where these types of models can provide significant benefits:

  1. Zero-knowledge classification - Typically, classification engines (such as Sofi) require data and examples to provide any form of classification. LLMs are excellent at providing zero-knowledge classification based on their base training data (typically billions of documents across many languages).

  2. Summarisation - LLMs are excellent at summarising large blocks of text or structured data. Single-shot prompting could be used to:

    1. provide a summary of all the interactions of a call

    2. condense a large request into a shorter by-line or Short description

    3. provide a concise summary of a knowledge article

  3. Prioritisation/Sentiment analysis - LLMs are good at understanding the urgency or sentiment of correspondence.

  4. Knowledge generation - LLMs can take information provided in Problem records and provide a starting point for a knowledge article.

AI Prompts

AI Prompts allow you to design and test single-shot classification prompts.

 

[Image: AIprompt1.png - System AI Prompt Form]

 

Fields


Name

Display value for the AI Prompt

Key

Lookup value for the AI Prompt. Used by the AI Prompt API to initiate the Prompt from an action.

LLM Model

Link to an LLM Model configuration. Contains information on the provider, model, and authentication tokens to be used. Can be changed to quickly test the results from different providers and models.

Description

Description/documentation for the Prompt

User prompt

This is the main user/action prompt presented to the LLM; it lays out and explains the actions you are asking the LLM to take.

This field is a Template, allowing you to provide information from associated records. You can use the ‘Pre-processing Script’ to retrieve information required for the template.
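
As an illustration, a user prompt for a summarisation task might look like the following. The ${...} placeholder syntax and field names are hypothetical; use the template syntax and context values available in your instance.

    Summarise the following request in two sentences or fewer:

    Short description: ${shortDescription}
    Description: ${description}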

System prompt

The System Prompt refers to the initial input or instruction given to the model to generate a response. This prompt sets the direction and parameters for the model's output. Typically, the system prompt provides the model with instructions on:

  • Direction: The prompt guides the LLM in generating content that is relevant to the user's request. It directs the focus of the model's response.

  • Context: It provides the necessary context that influences the tone, style, and content of the model's output. For instance, a prompt asking for a technical explanation will yield a different style of response than a prompt asking for a story or a joke.

  • Limits: The prompt can also set boundaries on the scope of the response, limiting what information should be included or highlighting specific details that need emphasis.
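
For example, a system prompt for the zero-knowledge classification use case above might read (illustrative only):

    You are a service desk triage assistant. Classify each request into
    exactly one of the following categories: Hardware, Software, Access,
    Other. Respond with only the category name and no additional commentary.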

Pre-processing Script

Provides the ability to retrieve information from Servicely and include it in the request to the LLM. This could include information from related records, classifications, groups, etc.

Script Context

context: Object - Key value pair of the context passed in from the initiating script. Determined by the developer.

options: Object - Key value pair representing options that can be set on the models.

promptRec: TableRecord - The SystemAIPrompt TableRecord

evaluationPhase: string - The evaluation phase [evaluate|test]. Useful for performing different logic depending on whether we are executing the model or just testing.
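
A minimal sketch of a Pre-processing Script in JavaScript, using only the script context variables documented above; the sample field names are hypothetical:

    // Enrich the context so the User prompt template can reference these values.
    if (evaluationPhase === 'test') {
        // Supply static sample data when running from the form's Test buttons
        context.shortDescription = 'Email not syncing on mobile device';
        context.description = 'Mail stopped syncing after the latest OS update.';
    }
    // Derive a human-readable label from a raw context value
    context.priorityLabel = context.priority === 1 ? 'Critical' : 'Standard';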

Post-processing Script

Provides the ability to process the response from the LLM before returning it to the AI Prompt API Call.

Script Context

context: Object - Key value pair of the context passed in from the initiating script. Determined by the developer.

promptRec: TableRecord - The SystemAIPrompt TableRecord

evaluationPhase: string - The evaluation phase [evaluate|test]. Useful for performing different logic depending on whether we are executing the model or just testing.
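
A minimal sketch of a Post-processing Script. Note that the script context documented above does not name the variable carrying the LLM's reply, so the response variable below is an assumption for illustration:

    // Parse a JSON classification out of the raw LLM reply before it is
    // returned to the AI.processAIPrompt caller.
    // NOTE: 'response' is assumed here; check how your instance exposes the
    // raw LLM output to this script.
    var result;
    try {
        result = JSON.parse(response);
    } catch (e) {
        // Fall back gracefully if the model did not return valid JSON
        result = { category: 'Unclassified', raw: response };
    }
    // Return the processed value (adjust to your instance's script conventions)
    return result;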

Test Setup Script

Provides a mechanism to specify context for the ‘Test’ buttons on the AI Prompt form. The ‘Test prompts only’ and ‘Test Model’ buttons allow you to rapidly iterate on the authoring of the Prompt, and the ‘Test Setup Script’ allows you to set up the test environment for them.
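
A minimal sketch of a Test Setup Script, seeding the context consumed by the two Test buttons (the field names are hypothetical):

    // Seed the context used by 'Test prompts only' and 'Test Model'
    context.shortDescription = 'VPN disconnects every 30 minutes';
    context.priority = 2;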

Testing

‘Test prompts only’ button

The ‘Test prompts only’ button allows you to review the rendered System and User Prompts before sending them to the LLM. This includes running the Pre-processing Script and loading any content it provides. The text of the prompts is displayed and can be copied to the clipboard using the ‘Copy’ button.

 

[Image: aiprompt2.png - Example of the ‘Test prompts only’ button]

 

‘Test Model’ button

The ‘Test Model’ button formats the request, sends it to the LLM, and then displays the result along with information on the model, latency, token usage, and cost.

 

[Image: aiprompt3.png - Example of the ‘Test Model’ button]

System LLM Models

The SystemLLMModels table contains the available LLM models. The initial 1.9 release supports OpenAI and Anthropic and ships with the current set of available models from those providers. Support for more providers will be added in upcoming releases.

 

[Image: aiprompt4.png]

[Image: aiprompt5.png]

System LLM Usage

The System LLM Usage table tracks usage of the LLM providers, recording the model, provider, tokens consumed, and cost.

 

[Image: aiprompt6.png]

 

Initiating from the API


AI.processAIPrompt(aiPromptKey: string, context: Object): any

Executes the SystemAIPrompt and returns the value returned by the Post-processing Script.

aiPromptKey - The unique ‘Key’ field of the AI Prompt

context - A key/value object containing any context to be passed to the Pre-processing Script.
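
A hypothetical invocation from a server-side script (the Key and context values below are illustrative):

    // Execute the AI Prompt identified by its Key. The context object is
    // passed through to the Pre-processing Script and the prompt templates.
    var summary = AI.processAIPrompt('incident_summary', {
        shortDescription: 'Email not syncing on mobile device',
        description: 'Mail stopped syncing after the latest OS update.'
    });
    // 'summary' holds whatever the Post-processing Script returned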

 

[Image: aiprompt7.png - Example of the SystemAIPrompt being called from a script]

 
