OpenAI is an institute that aims to develop and direct artificial intelligence (AI) in ways that benefit humanity as a whole. They are most famous for creating ChatGPT.
It can generate text- and image-based responses, and can edit and iterate with users on both creative and technical tasks, such as composing code snippets, testing frameworks, songs, or screenplays, or even learning a user's writing style.
The Base URL used for the OpenAI connector is https://api.openai.com/v1. More information can be found on their main API documentation (v1.0) site.
Within the builder, click on the OpenAI connector to display the connector properties panel. Select the Auth tab and click on the New authentication button.
In the Tray.io authentication pop-up modal, name your authentication in a way that will quickly identify it within a potentially large list, for example by noting whether it is a Sandbox or Production auth.
Consider who, and how many people, will need access to this authentication when choosing where to create it ('Personal' vs 'Organisational').
The next page asks you for your API Key credentials.
To get this key, head to the OpenAI dashboard and click on the Personal button in the top right corner, next to your user name icon.
Select the View API Keys option.
In most cases you will need to create a new API key. This is because the key itself is only viewable once, when created.
Please make sure to store / copy your API key somewhere safe so you can paste it into the Tray.io authentication modal later on.
Once you have added this field to your Tray.io authentication pop-up window click the Create authentication button.
Your connector authentication setup should now be complete. Please run the simplest operation available to test and make sure you can retrieve data as expected.
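If you want to sanity-check the key itself outside of Tray.io, one lightweight option is the GET /v1/models endpoint, which only requires the key in a Bearer header. The sketch below just builds the request with Python's standard library; the key value shown is a placeholder, and actually sending the request requires a valid key.

```python
import urllib.request

BASE_URL = "https://api.openai.com/v1"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build a GET /v1/models request authenticated with a Bearer token."""
    return urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request("sk-PLACEHOLDER")  # substitute your real key
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

A 200 response listing models confirms the key is live; a 401 means the key is wrong or revoked.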
The examples below show one or two of the available connector operations in use.
Please see the Full Operations Reference at the end of this page for details on all available operations for this connector.
Notes on using OpenAI
The responses generated do not always come in the same format, even if your prompt asks for the data to be structured the same way.
For example, the prompt "For feature 1, write a 1 sentence summary" might return any of the following formats:

Feature 1: one sentence

Feature 1:
* one sentence

Feature 1:
1. one sentence
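If a downstream step needs a single consistent shape, one option is to normalise the reply before using it. A minimal sketch covering the formats above (the helper name is ours, not part of the connector):

```python
import re

def normalize_summary(raw: str) -> str:
    """Collapse the varying reply formats into 'Feature 1: one sentence'."""
    # Join multi-line replies and strip list markers such as '* ' or '1. '
    lines = [
        re.sub(r"^(\*|\d+\.)\s*", "", ln.strip())
        for ln in raw.splitlines()
        if ln.strip()
    ]
    return " ".join(lines)
```

All three sample formats then collapse to the same single-line string.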
Each operation is based on one of OpenAI's Model types.
These 'neural network architecture' types serve different purposes depending on your specific use case.
For example, if you want OpenAI to artificially create an image based on your given prompt, then the Model you would need to select is DALL-E. Whereas if you need to generate code or text, then you would probably use a variant of the GPT-3.5 Model instead.
Most of the operations already have their respective Model types pre-selected for you. However some operations do still give users the ability to change the pre-selected Model type should they wish to do so.
To check or change the Model being used, go to the properties panel. The option will either already be displayed or hidden under Show advanced properties.
For more information on what Models are capable of please see OpenAI's Model Overview API documentation page.
Tokens can be thought of as counters for 'pieces of words'. The amount of data you want OpenAI to process is calculated through the use of them.
You can think of this feature as having the following basic principles:
The larger the dataset (you want OpenAI to iterate through), the greater the amount of Tokens you will need to use in order to process it.
The amount of Tokens you use to process your data is also dependent on the Model type being used.
Let's say the result of your OpenAI calculation is 'two records' returned from a potential list of fifty. You have to base the amount of Tokens you expect to use on the original list of fifty records, because the original list is what OpenAI had to iterate through in order to get the end result, and that is where the Token usage is calculated.
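As a rough rule of thumb, OpenAI documents that one token corresponds to about four characters of English text. The sketch below uses that heuristic to estimate usage for the full input list before sending it; it is an approximation only, as the exact count depends on the model's tokeniser, and the record contents here are illustrative.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: roughly 4 characters of English text per token."""
    return max(1, len(text) // 4)

# Usage is based on the full fifty-record input, not the two records returned
records = ["record %d: some example data" % i for i in range(50)]
prompt_tokens = estimate_tokens("\n".join(records))
```

Estimating against the whole input list up front helps you pick a chunk size that stays under the Max token limit.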
This is why we recommend you use OpenAI to auto-generate the code you need based on your natural language prompt. You can then use the generated code in a Script connector step without the Max token limitation applying.
The process mentioned above is outlined in greater detail in our Example Usage section below.
Temperature is a parameter of OpenAI, ChatGPT, GPT-3 and GPT-4 models that governs the randomness - and thus the creativity - of the responses given.
'Temperature' is always a number between 0 and 1.
A temperature setting of around 0.5 is recommended for sentiment analysis. This ensures that the AI can correctly interpret the sentiment of the text and deliver the desired results.
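In API terms, temperature is simply a field on the request body. Below is a sketch of a chat-completion payload for sentiment analysis; the model name and prompt wording are illustrative choices, not connector defaults.

```python
payload = {
    "model": "gpt-3.5-turbo",  # illustrative model choice
    "temperature": 0.5,        # mid-range: consistent but not rigid
    "messages": [
        {
            "role": "user",
            "content": "Classify the sentiment of this review "
                       "as positive, negative or neutral: ...",
        },
    ],
}
```

Lower values push the model toward deterministic answers; higher values increase variety between runs.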
Lots of social native forms (such as LinkedIn Lead Gen Forms / Facebook) don't allow pick-lists, which means the list of return options can vary quite a lot.
Take the state of Texas for example. Here is a sample of the potential return values that could be made:
Value Entered: list of potential values include:
You will need to figure out the best prompt to make sure you get something more specific to your use case.
For example, a prompt along the lines of 'Convert the input to the known location name of a city, state or country' would help generate fewer return values:
Value Entered: list of potential values include:
Dallas, Texas, US
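A sketch of how such a normalising prompt could be assembled around the raw form value before it is sent to OpenAI (the helper name is ours; the wording follows the example above):

```python
def build_location_prompt(raw_value: str) -> str:
    """Wrap a free-text form value in a prompt asking for a canonical location."""
    return (
        "Convert the input to the known location name of a city, "
        "state or country.\n"
        f"Input: {raw_value}"
    )

prompt = build_location_prompt("TX")
```

Keeping the instruction fixed and only substituting the input value makes the model's answers easier to compare across records.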
Below is an example of how you could use the OpenAI connector to create a code-based filter from entirely natural language input. This workflow responds dynamically to whatever natural language prompt you put in and updates the code base as a result.
The overall logic of the workflow is as follows:
Create a natural language prompt based on what records and specifications you wish returned.
Get the data set you want to filter through.
Portion the data set into smaller, manageable chunks.
Make your OpenAI connector create a ChatGPT request based on the smaller 'chunked list'.
Basing your request on the 'chunked list' means fewer Tokens are necessary in order to generate the 'create code request'.
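The chunking step above can be sketched as a simple list-slicing helper; the chunk size is a tuning choice driven by the token limit of the Model you are using.

```python
def chunk_records(records, size):
    """Split a record list into consecutive chunks of at most `size` items."""
    return [records[i:i + size] for i in range(0, len(records), size)]

chunks = chunk_records(list(range(50)), 10)  # 50 records -> 5 chunks of 10
```

Each chunk can then be passed to a separate ChatGPT request, keeping every request comfortably inside the Token budget.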