Using ChatGPT in a Flow

We are exploring exciting ways to incorporate ChatGPT into TextIt, but in the meantime you can already start using it with a Webhook action. Here is a step-by-step guide to help you get started.

Step 1: Get an API Key for OpenAI


To use ChatGPT in TextIt, you will first need to obtain an API key. You can do this by visiting the OpenAI website and following the instructions provided. Once you have your API key, make sure to keep it safe, as you'll need it later.

Step 2: Create a Webhook action in TextIt


Next, create a new webhook action in your TextIt flow. You can do this by selecting the "Call a webhook" action type from the list of available actions, and then filling in the necessary details.

Select POST for the method. For the URL, you'll need to use the API endpoint provided by OpenAI for their GPT-3.5 model. This URL should look something like https://api.openai.com/v1/chat/completions.



Step 3: Set up the webhook headers


In order to authenticate your request with OpenAI, you'll need to include your API key in the headers of the webhook request. To do this, click on the "Headers" tab in the webhook action settings and add a new header with the key "Authorization" and the value "Bearer [YOUR API KEY]". Make sure to replace "[YOUR API KEY]" with your actual API key.
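As a sketch of what TextIt sends, here is the same header assembled in Python. The key value is a placeholder, not a real key:

```python
# Build the Authorization header OpenAI expects.
# "sk-..." is a placeholder; substitute your actual API key.
api_key = "sk-..."
headers = {
    "Authorization": "Bearer " + api_key,
    "Content-Type": "application/json",
}
```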

Step 4: Set up the webhook request body


Next, you'll need to set up the request body for the webhook. This will include the text that you want to send to ChatGPT for processing. To do this, click on the "POST Body" tab in the webhook action. Then, enter the following JSON code in the text box:

@(json(object(
  "model", "gpt-3.5-turbo", 
  "messages",array(object("role", "user", "content", results.prompt.value)),
  "temperature", 0
)))


This tells ChatGPT to respond to a prompt stored in the flow result variable @results.prompt.value. You can name the result whatever you like, as long as it matches a result you've already captured in the flow. This example uses the gpt-3.5-turbo model; the models available depend on what your API key has access to, and OpenAI will likely add new ones as it trains them. Temperature is a value between 0 and 1 controlling how creative you want the responses to be. The API offers other parameters you can experiment with, such as max_tokens, but there are better ways to limit the response size; see the tips below.
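For reference, the expression above produces the same JSON you would build in any language. A Python sketch, using a made-up prompt in place of the captured flow result:

```python
import json

# A hypothetical captured flow result standing in for @results.prompt.value.
prompt = "What is the capital of Senegal?"

# The same structure the @(json(object(...))) expression produces.
payload = json.dumps({
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": prompt}],
    "temperature": 0,
})
```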

Step 5: Use the results


After the webhook call succeeds, you can access ChatGPT's response using an expression. For example, to send a message containing the response from the webhook, add @webhook.choices.0.message.content to a Send Message action.
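To see why that expression works, here is a trimmed sketch of the JSON shape the chat completions endpoint returns (the reply text is made up for illustration), with the equivalent of the TextIt expression in Python:

```python
# Trimmed example of a chat completions response body;
# the content text here is invented for illustration.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Dakar."}}
    ]
}

# Equivalent of @webhook.choices.0.message.content in TextIt.
reply = response["choices"][0]["message"]["content"]
```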


Tips



And that's really all there is to it! Here are some ideas that can make using ChatGPT in your flows even easier.

Keep the responses short. Since we are building a chatbot, we want to avoid getting back a wall of text. This is where things get pretty wild: ChatGPT is, in a way, a natural language API. Instead of setting a parameter like max_tokens, try prefixing your prompt with instructions to keep its responses under a few sentences.
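A minimal sketch of that prefixing trick, with a hypothetical user prompt:

```python
# Prefix the prompt with a brevity instruction instead of relying on
# max_tokens, which can cut a reply off mid-sentence.
user_prompt = "Explain how vaccines work."
prompt = "Answer in no more than two sentences. " + user_prompt
```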

Store your OpenAI key as a global variable that you can access in many different Webhook actions using @global.openai_key, or whatever name you choose.

Create a flow that calls ChatGPT but uses @parent.results.prompt as the input. This allows you to use the Enter a flow action anywhere you'd like ChatGPT's help by just filling out a 'prompt' parameter.

Get creative. You can ask ChatGPT to categorize a message by prompting it to reply with a short category name describing the issue to help identify support issues and route people to automated flows or open tickets to escalate to the right people. The ways you can leverage it to improve your flows are boundless.
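As one hypothetical example of such a categorization prompt (the category names and message are invented):

```python
# Ask for a single short category word so the flow can route on the reply.
message = "My order arrived broken and I want a refund."
prompt = (
    "Reply with one word, either billing, shipping, or other, "
    "that best categorizes this message: " + message
)
```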

Updated on: 20/04/2023
