OpenAI (ChatGPT, Whisper, DALL-E)

With OpenAI (ChatGPT, Whisper, DALL-E) modules in Make, you can create chat or prompt completions, edits, and moderations, generate images, manage files and batches, and call APIs.

To get started with OpenAI, you need an OpenAI account.

Note

Make modules support the GPT-3.5, GPT-4, GPT-4o, GPT-4o mini, and o1 models, provided your account has access to them.

Connect OpenAI to Make

To connect OpenAI to Make, you must obtain an API Key and Organization ID from your account.

Important

To use this app, you must have credit in your OpenAI account. You can buy credits on the OpenAI billing page.

  1. Log in to your OpenAI (ChatGPT, Whisper, DALL-E) account.

  2. Click on your profile icon in the top right corner > View API Keys.

  3. Click Create new secret key, give the key a name (optional), and click Create secret key.

  4. Copy the secret key, store it in a safe place as you will not be able to view it again, and click Done.

  5. Go to your account settings page and copy your Organization ID to your clipboard.

  6. Log in to your Make account, add an OpenAI (ChatGPT, Whisper, DALL-E) module to a scenario, and click Create a connection.

  7. Optional: In the Connection name field, enter a name for the connection.

  8. In the API Key field, enter the secret key copied in step 4.

  9. In the Organization ID field, enter the organization ID copied in step 5 and click Save.

You have successfully established the connection. You can now edit your scenario and add more OpenAI (ChatGPT, Whisper, DALL-E) modules. If your connection needs reauthorization at any point, follow the connection renewal steps here.

Triggers

Triggers when a batch is completed

Connection

Establish a connection to your OpenAI account.

Limit

Enter the maximum number of results to be worked with during one execution cycle.

AI

Send messages to a specified or newly created thread and execute it seamlessly. This action can send the arguments for your function calls to the specified URLs (POST HTTP method only). Works with Assistants v2.

Connection

Establish a connection to your OpenAI account.

Assistant

Select or map the assistant you would like to use.

Role

Indicate whether to send a message on behalf of the user or the assistant.

Message

Enter the message to send to the Assistant.

Image Files

Select images you want to include.

Map or select the binary data of the image. You can retrieve the binary data of an image using the HTTP: Get a file module, or another app such as Dropbox.

Images are only supported on certain models. For more information, see OpenAI Vision-compatible models.

Image URLs

Add images you want to include.

Enter the URL address to a public resource of the image. For example, https://getmyimage.com/myimage.png.

Images are only supported on certain models. For more information, see OpenAI Vision-compatible models.

Thread ID

Enter the Thread ID where the message will be stored.

To find your Thread ID, go to the OpenAI Playground, open your assistant, and the Thread ID will be visible.

If Thread ID is left empty, we will create a new thread. You can find the new thread's Thread ID value in the module's response.

Tool Choice

Select the tool that is called by a model. The options are:

  • None: the model will not call any tool and instead generates a message.

  • Auto: the model can pick between generating a message or calling one or more tools.

  • Required: the model must call one or more tools.

  • Specific tool's name.

Model

Select or map the model you want to use.

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Tools

Specify tools in order to override the tools the assistant can use when generating the response.

File Search Resources

Select the vector store that will become available to the File Search tool in this thread.

Code Interpreter Resources

Select the files that will become available to the Code Interpreter tool in this thread.

Instructions

Enter instructions in order to override the default system message of the assistant when generating the response.

Max Prompt Tokens

The maximum number of tokens to use in the prompt (input, including files). If the run exceeds the specified number of prompt tokens, the run will end with the Incomplete status. Refer to the OpenAI Platform documentation to learn more.

Max Completion Tokens

The maximum number of tokens to use in the completion (output). If the run exceeds the specified number of completion tokens, the run will end with the Incomplete status.

Note

The o1 models consume tokens for both the output and internal reasoning. If you use an o1 model, enter the sum of both token counts.

Refer to the OpenAI Platform documentation to learn more.

Temperature

Specify the sampling temperature to use.

Higher temperatures generate more diverse and creative responses. For example, 0.8. Lower temperatures generate more focused and well-defined responses. For example, 0.2. The default value is 1.

The value must be lower than or equal to 2.

Top P

Specify the Top P value for nucleus sampling, in which the model considers only the tokens comprising the top P probability mass. The default value is 1.

The value must be lower than or equal to 1.

Response Format

Select the format in which the response will be returned.

Parse JSON Response

If you selected JSON Object for the Response Format, you can choose whether or not the response will be parsed.

Truncation Strategy

Select the truncation strategy for the response.

Creates a completion for the provided prompt or chat.

See the OpenAI model endpoint compatibility section for the list of supported models.

Connection

Establish a connection to your OpenAI account.

Select Method

Select a method to create a completion.

Model

Select or map the model you want to use:

  • o1

  • GPT-4o

  • GPT-4 Turbo

  • GPT-4

  • GPT-3.5 Turbo

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Messages

Add the messages for which you want to create the completion by selecting the Role and entering the Message Content.

For some models you can manage image input type and image details as well as audio filename and audio data.

For more information about chat completion, refer to the OpenAI documentation.

Max Completion Tokens

The maximum number of tokens to generate in the completion. The default value is 2048.

Note

Low values may cause the output to be truncated. High values may use a lot of OpenAI credit.

The o1 models consume tokens for both the output and internal reasoning. If you use an o1 model, enter the sum of both token counts.

Refer to the OpenAI Platform documentation to learn more about reasoning models.

Temperature

Specify the sampling temperature to use.

A higher value means the model will take more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. Defaults to 1.

Top P

An alternative to sampling with temperature is called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Defaults to 1.
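To make the Top P behavior concrete, the following sketch filters a toy token distribution the way nucleus sampling does: keep the smallest set of highest-probability tokens whose cumulative mass reaches top_p, then renormalize. This is an illustration of the concept, not OpenAI's implementation; the token probabilities are invented.

```python
# Illustrative sketch of nucleus (Top P) sampling: keep the smallest set of
# tokens whose cumulative probability reaches top_p, then renormalize.
def nucleus_filter(token_probs, top_p):
    """token_probs: dict mapping token -> probability.
    Returns the filtered, renormalized distribution to sample from."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= top_p:
            break  # smallest set reaching the top_p mass
    total = sum(p for _, p in kept)
    return {t: p / total for t, p in kept}

probs = {"cat": 0.5, "dog": 0.3, "fox": 0.15, "emu": 0.05}
print(nucleus_filter(probs, 0.1))  # only "cat" survives at top_p = 0.1
```

With top_p of 0.1, only the single most likely token remains in consideration; with top_p of 1 (the default), the full distribution is used.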

Number

The number of completions to generate for each prompt. Defaults to 1.

Frequency Penalty

Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. This value must be a number between -2 and 2.

Presence Penalty

Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. This value must be a number between -2 and 2.

Token Probability

Add the token probability by selecting the Token and Probability. The Probability must be a number between -100 and 100.

Response Format

Choose the format for the response.

Audio Output Options

If you selected the GPT-4o: gpt-4o-audio-preview model and the Audio + text response format, you must indicate a voice and an audio format in the Voice and File Format fields, respectively.

Parse JSON Response

If you selected JSON Object for the Response Format, you can choose whether or not the response will be parsed.

Seed

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Refer to the system_fingerprint response parameter to monitor changes in the backend.

For more information, refer to the OpenAI documentation.

Stop Sequences

Add up to 4 sequences where the API will stop generating further tokens.

Other Input Parameters

Add any additional input parameters by selecting the Parameter Name and Input Type, and entering the Parameter Value.

For more information, refer to the OpenAI documentation.

Identifies specified information in a prompt's raw text and returns it as structured data.

Connection

Establish a connection to your OpenAI account.

Model

Select or map the model you want to use.

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Text to Parse

Enter the text containing the data that you want to transform.

Prompt

Enter a short description explaining what type of data should be extracted from the text entered above.

Structured Data Definition

Enter parameters or map the way in which the structured data should be returned.

Parameter Name

Enter a name for the parameter. Adhere to the following rules:

1. The first character must be either a letter or an underscore.

2. Do not include any commas or spaces.

3. Do not use any special symbols, other than an underscore.
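The naming rules above can be expressed as a short pattern check. The helper below is a hypothetical validator, not part of Make or OpenAI; it assumes that digits are acceptable after the first character, since the rules only forbid commas, spaces, and special symbols other than an underscore.

```python
import re

# Sketch: validate a parameter name against the rules above --
# first character a letter or underscore, then only letters, digits,
# or underscores (no commas, spaces, or other special symbols).
_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_parameter_name(name: str) -> bool:
    return bool(_NAME_RE.match(name))

print(is_valid_parameter_name("_latitude"))  # True
print(is_valid_parameter_name("lat,long"))   # False (contains a comma)
print(is_valid_parameter_name("2nd_point"))  # False (starts with a digit)
```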

Description

Enter a short description of the parameter.

This description is part of the prompt, so the more clear your description, the more accurate the response.

For example, if you would like OpenAI to search for coordinates in the text, you can enter Latitude and Longitude of the location.

Data Type

Select or map the data type format in which the parameter is returned.

If there will be multiple occurrences of similar data types, select an Array option. For example, if there are coordinates of multiple locations to be returned, select Array (text).

Value Examples

Enter or map examples of possible values to be returned. The more examples you provide, the more accurate the response will be.

Is Parameter Required?

Select or map whether the parameter is required.

When selecting yes, OpenAI will be forced to generate a value in the output, even if no value is detected in the text.

Object Definitions

Enter parameters or map the way in which the objects should be returned.

Object Name

Enter a name for the object. Adhere to the following rules:

1. The first character must be either a letter or an underscore.

2. Do not include any commas or spaces.

3. Do not use any special symbols, other than an underscore.

Description

Enter a short description of the object.

For example, if you would like OpenAI to search for coordinates in the text, you can enter Latitude and Longitude of the location.

Properties

Enter object property details.

Parameter Name

Enter a name for the object parameter. Adhere to the following rules:

1. The first character must be either a letter or an underscore.

2. Do not include any commas or spaces.

3. Do not use any special symbols, other than an underscore.

Description

Enter a short description of the object parameter.

For example, if you would like OpenAI to search for coordinates in the text, you can enter Latitude and Longitude of the location.

Data Type

Select or map the data type format in which the object parameter is returned.

If there will be multiple occurrences of similar data types, select an Array option. For example, if there are coordinates of multiple locations to be returned, select Array (text).

Is Parameter Required?

Select or map whether the parameter is required.

When selecting yes, OpenAI will be forced to generate a value in the output, even if no value is detected in the text.

Accepts an array of images as an input and provides an analysis result for each of the images following the instructions specified in the prompt.

Connection

Establish a connection to your OpenAI account.

Prompt

Enter instructions for how to analyze the image(s).

Images

Add or map the images you want to analyze.

You can add images by entering an Image URL or Image file data.

Image URL: Enter the URL address to a public resource of the image. For example, https://getmyimage.com/myimage.png.

Image file: Map or select the binary data of the image. You can retrieve the binary data of an image using the HTTP: Get a file module, or another app such as Dropbox.

Model

Select or map the model you want to use.

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Max Completion Tokens

Enter the maximum number of tokens to use for the completion. The default value is 2048.

Temperature

Specify the sampling temperature to use.

Higher temperatures generate more diverse and creative responses. For example, 0.8. Lower temperatures generate more focused and well-defined responses. For example, 0.2. The default value is 1.

The value must be lower than or equal to 2.

Top P

Specify the Top P value for nucleus sampling, in which the model considers only the tokens comprising the top P probability mass. The default value is 1.

The value must be lower than or equal to 1.

Number

Enter the number of responses to generate. If more than 1 response is generated, the results can be found in the module's output within Choices. The default value is 1.

Generates an image with DALL-E given a prompt.

Connection

Establish a connection to your OpenAI account.

Model

Select or map the model you want to use.

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Prompt

Enter details of the images you want to generate. The maximum number of characters allowed is 1000 if your model is DALL-E 2 and 4000 if your model is DALL-E 3.

Size

Select or map the size of the images to generate. It must be one of 256x256, 512x512, or 1024x1024.

Number

Enter the number of images to generate. Enter a number between 1 and 10.

Response Format

Select or map the format in which the generated images are returned.

If you select Base64-encoded PNG, the raw PNG image can be obtained by using the IML function, toBinary(<module_no>.data[].b64_json; base64) in the subsequent modules.
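Outside Make, the same b64_json value can be decoded with any standard Base64 routine. The sketch below shows the equivalent of the toBinary() IML function in Python; the encoded payload is a stand-in, not a real DALL-E response.

```python
import base64

# Sketch: decode a Base64-encoded PNG such as the b64_json value returned
# when Response Format is set to Base64-encoded PNG. The payload below is
# a fake stand-in for a real image response.
b64_json = base64.b64encode(b"\x89PNG\r\n\x1a\n...fake image bytes...").decode()

raw_png = base64.b64decode(b64_json)  # equivalent of Make's toBinary() IML function
print(raw_png[:8])  # a real PNG starts with the signature \x89PNG\r\n\x1a\n
```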

Quality

Select the quality of the generated image. HD generates images with finer details and greater consistency across the image.

Edits or extends an image.

Connection

Establish a connection to your OpenAI account.

Image

Enter an image to edit. The file must be in PNG format, less than 4 MB, and square. If a Mask file is not provided, the image must have transparency, which will be used as the mask.

Prompt

Enter details of the images you want to edit. The maximum number of characters allowed is 1000.

Mask

Enter an additional image with fully transparent areas (where alpha is zero). The transparent areas indicate where the Image will be edited. The file must be in PNG format, less than 4 MB, and the same dimensions as Image above.

Number

Enter the number of images to generate. Enter a number between 1 and 10.

Size

Select or map the size of the images to generate. It must be one of 256x256, 512x512, or 1024x1024.

Response Format

Select or map the format in which the generated images are returned.

If you select Base64-encoded PNG, the raw PNG image can be obtained by using the IML function, toBinary(<module_no>.data[].b64_json; base64) in the subsequent modules.

Creates a translation of an audio into English.

Connection

Establish a connection to your OpenAI account.

File Name

Enter the name of the file you want to translate.

File Data

Enter the data of the file you want to translate.

Model

Select or map the model you want to use. Refer to the OpenAI Audio API documentation for information on available models.

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Prompt

Optional: Enter text to guide the model's style or to continue a previous audio segment. The prompt should be in English.

Temperature

Enter a sampling temperature between 0 and 1. Higher values, such as 0.8, will make the output more random. Lower values, such as 0.2, will make it more focused and deterministic.

Creates a transcription of an audio to text.

Connection

Establish a connection to your OpenAI account.

File Name

Enter the name of the file you want to transcribe.

File Data

Enter the data of the file you want to transcribe.

Model

Select or map the model you want to use. Refer to the OpenAI Audio API documentation for information on available models.

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Prompt

Optional: Enter text to guide the model's style or to continue a previous audio segment. The prompt should be in the same language as the audio.

Temperature

Enter a sampling temperature between 0 and 1. Higher values, such as 0.8, will make the output more random. Lower values, such as 0.2, will make it more focused and deterministic.

Language

Enter the two letter ISO code of the input audio's language.

Determines whether the provided image or text contains violent, hateful, illicit, or adult content.

Connection

Establish a connection to your OpenAI account.

Input format

Select the format of the input content:

  • Text

  • Array of texts

  • Image

Input text

Enter the text for which you want to create the moderation. The module will classify the content against OpenAI's Content Policy.

Model

Select or map the model you want to use. If empty, the default model will change based on the selected input format.

Generates an audio file based on the text input and settings specified.

Connection

Establish a connection to your OpenAI account.

Input

Enter text to generate into audio.

The text must be between 1 and 4096 characters long.

Model

Select or map the model you want to use.

Note

Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.

Voice

Select or map the voice to use in the audio. For voice samples, see the OpenAI Voice Options guide.

Output Filename

Enter a name for the generated audio file. Do not include the file extension.

Response Format

Select or map the file format for the generated audio file.

Speed

Enter a value for the speed of the audio.

This must be a number between 0.25 and 4.

Files

Adds files to a specified vector store or, if not specified, creates a new vector store based on the configuration.

Connection

Establish a connection to your OpenAI account.

Batch Create Mode

Choose whether to create a new vector store or use an existing one.

Vector Store ID

Select the vector store.

Vector Store Name

Enter a name for a new vector store.

Days Expires After

Enter the number of days of inactivity after which the vector store expires.

File IDs

Select the files to add to a vector store. For a list of supported file formats, see OpenAI File Search supported files.

Uploads a file that will be available for further usage across the OpenAI platform.

Connection

Establish a connection to your OpenAI account.

File Name

Enter the name of the file to be uploaded, including the file extension. For example, myFile.png.

Supported file types depend on the option selected in the Purpose field below. The Fine Tune purpose supports only .jsonl files.

See the Assistants Tools guide to learn more about the supported file types.

File Data

Enter the file data to be uploaded. You can retrieve file data using the HTTP: Get a file module.

Purpose

Select or map the purpose. Select Assistants for Assistants and Messages and Fine Tune for Fine-tuning.

Batches

Retrieves a list of batches.

Connection

Establish a connection to your OpenAI account.

Limit

Enter the maximum number of returned results (bundles).

Retrieves details of the specified batch.

Connection

Establish a connection to your OpenAI account.

Batch ID

Select a batch to retrieve.

Creates and executes a batch of API calls.

Connection

Establish a connection to your OpenAI account.

Input File ID

Select a file to use as input to create a batch.

Endpoint

Select the endpoint to use for all requests in the batch. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.
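The input file referenced above is a .jsonl file with one request per line, each carrying a unique custom_id, the HTTP method, the endpoint, and the request body, following the OpenAI Batch API input format. The sketch below builds such a file; the model name and prompts are only examples.

```python
import json

# Sketch: build a .jsonl input file for the Batch API. Each line is one
# request with a unique custom_id, the HTTP method, the endpoint selected
# above, and the request body. The model name is only an example.
batch_requests = [
    {
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Say hello.", "Say goodbye."])
]

with open("batch_input.jsonl", "w") as f:
    for request in batch_requests:
        f.write(json.dumps(request) + "\n")
```

Uploading this file with the Upload a File module (Purpose set to Batch, where available) produces the Input File ID used in this module.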

Cancels an in-progress batch. The batch will be in status "cancelling" for up to 10 minutes before changing to "cancelled", where it will have partial results (if any) available in the output file.

Connection

Establish a connection to your OpenAI account.

Batch ID

Select a batch to cancel.

Other

Performs an arbitrary authorized API call.

Note

For the list of available endpoints, see OpenAI ChatGPT API Documentation.

Connection

Establish a connection to your OpenAI account.

URL

Enter a path relative to https://api.openai.com. For example, /v1/models.

Method

GET

to retrieve information for an entry.

POST

to create a new entry.

PUT

to update/replace an existing entry.

PATCH

to make a partial entry update.

DELETE

to delete an entry.

Headers

Enter the desired request headers. You don't have to add authorization headers; we already did that for you.

Query String

Enter the request query string.

Body

Enter the body content for your API call.

Examples of Use - List Models

The following API call returns all models available to your OpenAI account.

URL: /v1/models

Method: GET
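Outside Make, the same call is a GET request to https://api.openai.com/v1/models authorized with a Bearer API key. The sketch below constructs that request with Python's standard library without sending it; the key is a placeholder for the secret key obtained in the connection steps.

```python
import urllib.request

# Sketch: the List Models call outside Make -- a GET request to
# https://api.openai.com/v1/models with a Bearer API key.
# The key below is a placeholder; the request is built but not sent here.
api_key = "sk-..."  # your secret key from the connection steps above

request = urllib.request.Request(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    method="GET",
)
print(request.full_url, request.get_method())
```

Sending it with urllib.request.urlopen(request) returns a JSON body whose data array lists the models available to the account, matching the module's Output described below.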


The returned models can be found in the module's Output under Bundle > Body > data.

In our example, 69 models were returned.
