Prompt Tool
User Role Requirements
| User Role* | Tool/Feature Access |
|---|---|
| Full User | ✓ |
| Basic User | X |
*Applies to Alteryx One Professional and Enterprise Edition customers on Designer versions 2025.1+.
Use the Prompt tool to send prompts to a Large Language Model (LLM) and then receive the model’s response as an output. You can also configure LLM parameters to tune the model’s output.
Tool Components
The Prompt tool has 3 anchors (2 input and 1 output):
M input anchor: (Optional) Use the M input anchor to connect the model connection settings from the LLM Override Tool. Alternatively, set up a connection within this tool.
D input anchor: (Optional) Use the D input anchor to connect data you want to add to your prompt. The Prompt tool accepts standard data types (for example, String, Numeric, and DateTime) in addition to Blob data from the Blob Input Tool.
Output anchor: Use the output anchor to pass the model’s response downstream.
Configure the Tool
Connect to the AI Model Service in Alteryx One
If you aren’t using the LLM Override tool to provide model connection settings to the Prompt tool, you must set up a connection to the AI Model Service in Alteryx One.
For first-time setup of the Prompt tool, create an Alteryx Link to your Alteryx One workspace. If you signed in to Designer through Alteryx One, you should see a connection already set up for that workspace.
LLM Connection
Important
Before you can select an LLM, you must create an LLM connection in Alteryx One.
Use the LLM Provider dropdown to select the provider you want to use in your workflow. If you connected the LLM Override tool, this option is disabled.
Use the Select Model dropdown to select an available model from your LLM Provider. If you connected the LLM Override tool and selected a specific model, this option is disabled.
Prompt Settings
Use the Prompt Settings section to compose your prompt and configure the data columns associated with the prompt and response.
Enter your prompt in the Prompt Template field. For an estimate of the token count of your prompt, refer to the Token Count at the bottom of the Prompt Template field. The Token Count doesn’t account for additional tokens that might come from inserted data columns.
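If you want a rough, offline estimate of token usage while you draft a prompt, a minimal sketch with the open-source tiktoken library might look like the following. This is an illustrative assumption only; the Prompt tool doesn’t document which tokenizer it uses, and your LLM provider’s counts may differ.

```python
# Hypothetical sketch: estimate a prompt's token count with the open-source
# tiktoken library. This is not necessarily the tokenizer the Prompt tool
# uses; your LLM provider's counts may differ.
import tiktoken

def estimate_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Return an approximate token count for a prompt string."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

prompt_template = "Summarize the following customer review: [Review Text]"
# Counts the template only; data inserted from columns adds more tokens.
print(estimate_tokens(prompt_template))
```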
Include upstream data in your prompt for more advanced analysis. The Prompt tool creates a new prompt for each row of your incoming data. The tool then sends an LLM request for each of these rows.
To insert an input data column, you can either…
Enter an opening bracket ([) in the text field to bring up the column selection menu. You can also type out the column name within brackets ([]), as shown in the sketch after these steps.
Select a column from the Insert Field dropdown.
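As a mental model of how bracketed column references behave, here’s a minimal, hypothetical sketch of per-row prompt expansion. The column name, sample data, and fill_template helper are made up for illustration; this is not the Prompt tool’s actual implementation.

```python
# Hypothetical sketch of per-row prompt expansion: each incoming row fills
# the [Column Name] placeholders in the template, producing one prompt
# (and therefore one LLM request) per row.
import re

prompt_template = "Classify the sentiment of this review as Positive or Negative: [Review Text]"

rows = [
    {"Review Text": "The checkout process was quick and painless."},
    {"Review Text": "My order arrived two weeks late."},
]

def fill_template(template: str, row: dict) -> str:
    """Replace each [Column Name] placeholder with the row's value."""
    return re.sub(r"\[([^\]]+)\]", lambda m: str(row.get(m.group(1), "")), template)

for row in rows:
    prompt = fill_template(prompt_template, row)
    # One request per row would be sent to the selected model here.
    print(prompt)
```

Each filled template corresponds to one LLM request, which is why large inputs can consume tokens quickly.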
To attach unstructured data, like image or PDF files, select the column that contains the unstructured data from the Attach Non-Text Columns dropdown. Use a Blob Input Tool to bring your images and PDFs into your workflow.
Note
Support for unstructured data, such as images and PDFs, depends on your LLM provider and the model you select. Go to your LLM provider’s documentation for details about supported unstructured or multimodal data types.
Enter the Response Column Name. This column contains the LLM response to your prompt.
Run the workflow.
Prompt Builder
Use Prompt Builder to quickly test different prompts and model configurations. You can then compare the results with prompt history. To experiment with different models and model configurations, run the workflow with your initial prompt and then select Refine and Test in Prompt Builder to open the Prompt Builder window.
Important
Prompt Builder requires data connected to the D input anchor.
Prompt Workspace Tab
Use the Prompt Workspace tab to enter your prompts and update model configurations:
Use the Select Model dropdown to select an available model from your LLM Provider.
Enter the Number of Records to Test. If you have a large dataset, use this setting to limit the number of records tested with your prompt.
Configure the model’s parameters for Temperature, Max Output Tokens, and TopP. Refer to the Advanced Model Configuration Settings section for parameter descriptions.
Enter or update your prompt in the Prompt Template text field.
Tip
To get help creating a ready-to-use prompt, select Generate a Prompt for Me and then describe your task. To use this feature, your Alteryx One account must have access to Alteryx Copilot.
Select Test and Run to view the sample response for each row.
If you like the responses of the new prompt, select Save Prompt to Canvas to update the Prompt tool with the new prompt and model configuration.
History Tab
Use the History tab to view your past prompts, model parameters, and a sample response.
For each past prompt, you can…
Add to Canvas: Update the Prompt tool with the selected prompt and associated model parameters.
Edit Prompt: Return to the Configuration tab with the selected prompt and associated model parameters. Select the 3-dot menu next to Add to Canvas to find this option.
Download Results: Save a CSV file containing the prompt and model parameters for the current row in the History tab.
Warning
Make sure to save your prompt before you leave the Prompt Builder window. You will lose your prompt history when you select Close.
Error Handling
When an error occurs, choose your error handling option from the On Error dropdown:
Error - Stop Processing Records: Throw an error in the Results window and stop processing records.
Warning - Continue Processing Records: Throw a warning in the Results window, but continue processing records.
Ignore - Continue Processing Records: Ignore the error and continue processing records.
Advanced Model Configuration Settings
Use the Advanced Model Configuration section to configure the model’s parameters:
Temperature: Controls the randomness of the model's output as a number between 0 and 2. The default value is 1.
Lower values provide more reliable and consistent responses.
Higher values provide more creative and random responses, but can also become illogical.
Max Output Tokens: The maximum number of tokens that the LLM can include in a response. Tokens are the basic input and output units of an LLM: text chunks that can be words, character sets, or combinations of words and punctuation. Each token is about 3/4 of a word, so a limit of 400 tokens corresponds to roughly 300 words. Refer to your LLM provider and specific model for the maximum available output tokens.
TopP: Controls which output tokens the model samples from and ranges from 0 to 1. The model selects from the most to least probable tokens until the sum of their probabilities equals the TopP value. For example, if the TopP value is 0.8 and you have 3 tokens with probabilities of 0.5, 0.3, and 0.2, then the model only selects from the tokens with 0.5 and 0.3 probability (sum of 0.8). Lower values result in more consistent responses, while higher values result in more random responses.
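For intuition only, here’s a small sketch of nucleus (top-p) sampling using the same 0.5, 0.3, and 0.2 probabilities from the example above. LLM providers perform this step internally; the code and the renormalization detail are illustrative assumptions, not part of the Prompt tool.

```python
# Illustrative sketch of nucleus (top-p) sampling, only to build intuition
# about the TopP parameter; providers implement this internally.
import random

def nucleus_filter(token_probs: dict, top_p: float) -> dict:
    """Keep the most probable tokens until their cumulative probability reaches top_p."""
    kept, cumulative = {}, 0.0
    for token, prob in sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = prob
        cumulative += prob
        if cumulative >= top_p:
            break
    # Renormalize the kept probabilities so they sum to 1 before sampling.
    total = sum(kept.values())
    return {token: prob / total for token, prob in kept.items()}

candidates = nucleus_filter({"cat": 0.5, "dog": 0.3, "fish": 0.2}, top_p=0.8)
print(candidates)  # {'cat': 0.625, 'dog': 0.375} -- 'fish' is excluded
next_token = random.choices(list(candidates), weights=candidates.values())[0]
```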
Output
The tool outputs 2 string data columns.
LLM Prompt Column: Contains your prompt.
LLM Response Column: Contains the response from your LLM Provider and Model.
