
OpenAI
1.0

OpenAI aims to develop and direct artificial intelligence (AI) in ways that benefit humanity as a whole. They are most famous for creating ChatGPT.

Overview

OpenAI is a research and deployment company that aims to develop and direct artificial intelligence (AI) in ways that benefit humanity as a whole. They are most famous for creating ChatGPT.

ChatGPT can generate, edit, and iterate with users on both creative and technical tasks. It can produce text- and image-based responses, such as composing code snippets, testing frameworks, songs, or screenplays, or even learning a user's writing style.

API Information

The Base URL used for the OpenAI connector is https://api.openai.com/v1. More information can be found on their main API documentation (v1.0) site.
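As a minimal sketch of how a direct call against that Base URL is assembled (the endpoint name and the `OPENAI_API_KEY` environment variable are illustrative assumptions, not part of the connector itself):

```python
# Sketch of building an authenticated request against the OpenAI REST API.
# The API key is assumed to live in the OPENAI_API_KEY environment variable;
# never hard-code it in a workflow.
import os

BASE_URL = "https://api.openai.com/v1"

def build_request(endpoint: str, api_key: str) -> dict:
    """Return the URL and headers for an authenticated OpenAI API call."""
    return {
        "url": f"{BASE_URL}/{endpoint}",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    }

req = build_request("chat/completions", os.environ.get("OPENAI_API_KEY", "sk-placeholder"))
print(req["url"])  # https://api.openai.com/v1/chat/completions
```

The connector performs this wiring for you; the sketch only shows what the Base URL and API key resolve to under the hood.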

Authentication

When using the OpenAI connector for the first time, you need to create a new authentication.

Name your authentication and specify the type ('Personal' or 'Organizational').

The next page asks you for your API Key.

To get this key, head to the OpenAI dashboard. Hover over the OpenAI icon in the top left corner and select API Keys from the left panel.

Create a new API key using the Create new secret key button. Name your key and select suitable permissions.

Please make sure to save your API key securely as you won't be able to view it again.

Once you have added this field to your Tray.io authentication pop-up window, click the Create authentication button.

Go back to your settings authentication field (within the workflow builder properties panel) and select the recently added authentication from the available dropdown options. Your connector authentication setup is now complete.

Available Operations

The examples below show one or two of the available connector operations in use.

Please see the Full Operations Reference at the end of this page for details on all available operations for this connector.

Create embeddings (Available from v2.0)

The Create embeddings operation generates an embedding vector that represents input text.

The generated embeddings can be stored in any vector database and are valuable for performing similarity searches.

For example, the following workflow shows a query coming in through a Webhook and being sent to the OpenAI 'text-embedding-ada-002' model.

The resulting vector is then passed as the 'vector' input to the next step (Retrieve docs matches) in the workflow to perform a similarity search.
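The similarity search that the downstream step performs can be sketched with cosine similarity. The vectors below are toy stand-ins, not real model output (text-embedding-ada-002 actually returns 1536-dimensional vectors):

```python
# Sketch of using embedding vectors for similarity search.
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query_vec = [0.1, 0.3, 0.5]          # embedding of the incoming webhook query
doc_vecs = {
    "doc_a": [0.1, 0.29, 0.52],      # nearly parallel -> high similarity
    "doc_b": [0.9, -0.4, 0.0],       # unrelated direction -> low similarity
}

# Pick the stored document whose embedding is closest to the query.
best = max(doc_vecs, key=lambda k: cosine_similarity(query_vec, doc_vecs[k]))
print(best)  # doc_a
```

In practice the vector database performs this comparison at scale; the sketch only illustrates what "similarity search" means for the returned vectors.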

Create chat completion (Available from v2.0)

Creates a model response for the given chat conversation.

The following example demonstrates one of the use cases for the operation.

It shows a query coming in through a Webhook and being sent to OpenAI to create an embedding vector. The resulting vector is then passed to a vector database to perform a similarity search.

The Create chat completion operation uses the similarity search result to create a model response for the received query.

The operation requires information for the following mandatory parameters:

  • Messages: A list of messages comprising the conversation so far. Each message requires:

    • Role: The role of the message's author, for example, system, user, or assistant.

  • Model: ID of the model to use. You can choose from the available options.

  • Response format - Type: The format type of the response the model must return.

For more information, refer to OpenAI's Chat completions endpoint documentation.
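The mandatory parameters above map onto a request body like the following sketch; the model name and message contents are illustrative assumptions:

```python
# Hedged sketch of a Create chat completion request body.
import json

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        # "system" sets the assistant's behaviour; "user" carries the query.
        {"role": "system", "content": "Answer using the provided context."},
        {"role": "user", "content": "What is our refund policy?"},
    ],
    # Response format - Type: here, plain text.
    "response_format": {"type": "text"},
}

body = json.dumps(payload)  # what ultimately gets POSTed to /chat/completions
```

The connector builds this body from the properties panel fields; the sketch just makes the mapping from parameter names to JSON explicit.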

Create moderation (Available from v2.0)

The moderations endpoint is designed to assist in content moderation by helping identify and filter out content that may violate guidelines, contain offensive language, or be deemed inappropriate.

The example output below illustrates the categories within the moderation object.

If any of these categories are flagged during the operation, the results.flagged parameter is set to true.

This parameter can then be used to check for, and respond appropriately to, malicious text or content that violates guidelines.
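That check can be sketched as below. The sample response mimics the shape of the moderation output with made-up category scores:

```python
# Sketch of inspecting a moderation response and branching on `flagged`.
sample_response = {
    "results": [
        {
            "flagged": True,
            "categories": {"harassment": True, "violence": False},
            "category_scores": {"harassment": 0.91, "violence": 0.02},
        }
    ]
}

def moderation_message(response: dict) -> str:
    """Return an appropriate reply depending on the flagged parameter."""
    result = response["results"][0]
    if result["flagged"]:
        offending = [name for name, hit in result["categories"].items() if hit]
        return f"Content rejected (flagged for: {', '.join(offending)})"
    return "Content accepted"

print(moderation_message(sample_response))  # Content rejected (flagged for: harassment)
```

In a workflow, the same branch would typically be a Boolean condition step on `results.flagged`.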

Notes on using OpenAI

Formatting inconsistencies

The responses generated do not always come back in the same format. This may occur even if you ask for the data to be structured the same way within your prompt.

For example: "For feature 1, write a 1 sentence summary" might return any of the following:

    Feature 1: one sentence

    Feature 1:
    * one sentence

    Feature 1:
    1. one sentence
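One defensive way to handle this drift is to strip whatever list markers the model happens to add before using the output downstream. This is a sketch, not part of the connector:

```python
# Normalise the varying output formats shown above down to the bare sentence.
import re

def extract_summary(raw: str) -> str:
    """Pull the summary sentence out of the model output, whatever the list style."""
    text = raw.split("Feature 1:", 1)[-1]             # drop the label if present
    text = re.sub(r"^[\s*\-\d.]+", "", text.strip())  # drop '*', '-', '1.' markers
    return text.strip()

variants = [
    "Feature 1: one sentence",
    "Feature 1:\n* one sentence",
    "Feature 1:\n1. one sentence",
]
print([extract_summary(v) for v in variants])  # ['one sentence', 'one sentence', 'one sentence']
```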

Model types

Each operation is based on one of OpenAI's Model types.

These 'neural network architecture' types serve different purposes depending on your specific use case.

For example, if you want OpenAI to artificially create an image based on your given prompt, then the Model you would need to select would be DALL-E.

Whereas if you need to generate code or text, you would probably use a variant of the GPT-3.5 Model instead.

Most of the operations already have their respective Model types pre-selected for you. However, some operations do still give users the ability to change the pre-selected Model type should they wish to do so.

To check or change the Model being used, go to the properties panel. The option will either already be displayed or hidden under Show advanced properties.

For more information on what Models are capable of please see OpenAI's Model Overview API documentation page.

Tokens

Tokens can be thought of as counters for 'pieces of words'. The amount of data you want OpenAI to process is calculated through the use of them.

You can think of this feature as having the following basic principles:

  • The larger the dataset (you want OpenAI to iterate through), the greater the amount of Tokens you will need to use in order to process it.

    • The amount of Tokens you use to process your data is also dependent on the Model type being used.

Let's say the result of your OpenAI calculation is two records returned from a potential list of fifty. You have to base the amount of Tokens you expect to use on the original list of fifty records. The original list is what OpenAI had to iterate through in order to get the end result, and that is where the Token usage is calculated.
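A very rough way to reason about this is the ~4-characters-per-token rule of thumb for English text. Real counts come from the model's tokenizer (e.g. the tiktoken library), so treat this sketch as an approximation only:

```python
# Rough token estimator: about one token per four characters of English text.
def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

# Fifty illustrative records; only two might be returned, but the model
# still reads all fifty, so Token usage is based on the whole list.
records = ["record %02d: some text to scan" % i for i in range(50)]
full_list = "\n".join(records)

print(estimate_tokens(records[0]))   # cost of a single record
print(estimate_tokens(full_list))    # cost of the whole list the model reads
```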

This is why we recommend you use OpenAI to auto-generate the code you need based on your natural language prompt. You can then use the generated code in a Script connector step without the Max token limitation applying.

The process mentioned above is outlined in greater detail in our Example Usage section below.

Temperature

Temperature is a parameter of OpenAI's ChatGPT, GPT-3, and GPT-4 models that governs the randomness, and thus the creativity, of the responses given.

'Temperature' is always a number between 0 and 1.

A temperature setting of around 0.5 is recommended for sentiment analysis. This helps the AI correctly interpret the sentiment of the text and deliver the desired results.
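The following sketch shows where temperature sits in a request body for the sentiment use case; the model name and prompt wording are illustrative assumptions:

```python
# Sketch of a sentiment-analysis request body with an explicit temperature.
def sentiment_request(text: str, temperature: float = 0.5) -> dict:
    """Build a chat completion body for sentiment analysis (sketch only)."""
    # Keep temperature within the 0-1 range described above.
    assert 0.0 <= temperature <= 1.0, "temperature must be between 0 and 1"
    return {
        "model": "gpt-3.5-turbo",
        "temperature": temperature,  # 0.0 = deterministic, near 1.0 = varied
        "messages": [
            {"role": "user", "content": f"Classify the sentiment of: {text}"},
        ],
    }

req_body = sentiment_request("The connector works beautifully!")
print(req_body["temperature"])  # 0.5
```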

Prompt warning

Many native social forms (such as LinkedIn Lead Gen Forms or Facebook forms) don't allow pick-lists, which means the list of return options can vary quite a lot.

Take the state of Texas, for example. Here is a sample of the potential values that could be entered:

  • Field: State?

  • Value Entered: the list of potential values includes:

    • TX

    • TEXAS

    • Texass

    • texas

    • 67007

    • Dallas

You will need to figure out the best prompt to make sure you get something more specific to your use case.

For example, something along the lines of ‘Convert the input to the known location name of a city, state or country’ would help generate fewer return values:

  • Value Entered: the list of potential values includes:

    • Dallas

    • Dallas, Texas

    • Dallas, Texas, US
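Building that kind of prompt can be sketched as a simple wrapper around the free-text form value; the exact wording is one hypothetical way to narrow the return values:

```python
# Sketch of wrapping a free-text form value in a normalising instruction.
def location_prompt(raw_value: str) -> str:
    """Wrap a form value in the location-normalising prompt suggested above."""
    return (
        "Convert the input to the known location name of a city, state or "
        f"country. Input: {raw_value}"
    )

prompt = location_prompt("TEXAS")
print(prompt)
```

The same wrapper would be applied to every incoming value ("TX", "texas", "Dallas", and so on) before it is sent to the model, so the model always receives the same instruction.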

Example Usage

Below is an example of how you could use the OpenAI connector to create a code-based filter from entirely natural language input. This workflow responds dynamically to whatever natural language prompt you put in and updates the code base as a result.

With this workflow, you will be able to create a JavaScript function without needing to understand anything about how to code. You will also be able to filter through data sets of up to 6MB in size without having to worry about Max Token limitations.

The overall logic of the workflow is as follows:

  1. Create a natural language prompt based on what records and specifications you wish returned.

  2. Get the data set you want to filter through.

  3. Portion said data set into smaller manageable chunks.

  4. Make your OpenAI connector create a ChatGPT request based on the smaller 'chunked list'.

    • Basing your request on the 'chunked list' means fewer Tokens are necessary in order to generate the 'create code request'.

    • Asking for the JavaScript function to be returned in a specific format and within the result itself means that it can be easily extracted and used later.

  5. Use the auto-generated code from the previous OpenAI request by applying it to another JavaScript code filter (as outlined in the final Script step).
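Step 3 above, portioning the data set into smaller manageable chunks, can be sketched as a simple list splitter (the chunk size of 4 is arbitrary for illustration):

```python
# Sketch of chunking a large record list so each OpenAI request stays small.
def chunk(records: list, size: int) -> list:
    """Split records into consecutive chunks of at most `size` items."""
    return [records[i : i + size] for i in range(0, len(records), size)]

data = list(range(10))
print(chunk(data, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each chunk can then be fed to step 4 in turn, keeping every individual request well under the Max Token limit.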

Step-by-step Explanation