OpenAI Workflow Action

OpenAI Workflow Action: Configuration, Parameters & Best Practices

This article explains in detail how an OpenAI Workflow Action is structured within the DataEngine, which input and output parameters are available, and what to consider to achieve the best possible response quality.

Input & Output Parameters (Fields of the Workflow Module)

Every OpenAI workflow requires clearly defined fields for processing a request. These fields are assigned within the workflow module:

  • Source Field: The input field whose content is sent to the OpenAI model. It serves as the prompt, i.e., the textual instruction that the model should process.
  • JSON Request Field: The field in which the complete JSON request is saved. This is particularly helpful for traceability and debugging.
  • Destination Field: This field stores the generated response from the model. It is recommended to use fields with a WYSIWYG editor, as these support HTML formatting and allow for visually appealing output.
  • JSON Response Field: The field in which the complete JSON response is saved. Ideal for support teams and developers to understand how the model responded to the request.
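The interplay of these four fields can be sketched as follows. This is a minimal illustration only: the field names (`source`, `json_request`, `destination`, `json_response`) and the simulated response are hypothetical, not actual DataEngine identifiers, and the real action sends the request to OpenAI instead of simulating it.

```python
# Hypothetical sketch of how the four workflow fields relate during one
# execution. Field names are illustrative, not DataEngine identifiers.
import json

def run_openai_action(record: dict) -> dict:
    """Populate the four workflow fields for a single record."""
    prompt = record["source"]  # Source Field: the prompt text sent to the model

    request_body = {"model": "<model-name>", "input": prompt}
    record["json_request"] = json.dumps(request_body)  # JSON Request Field (debugging)

    # The real action calls the OpenAI API here; we simulate a response
    # so the sketch stays self-contained.
    response_body = {"output_text": "<p>Generated answer</p>"}
    record["json_response"] = json.dumps(response_body)  # JSON Response Field (traceability)

    record["destination"] = response_body["output_text"]  # Destination Field (WYSIWYG/HTML)
    return record

result = run_openai_action({"source": "Summarize the attached notes."})
```

Storing both raw JSON payloads next to the rendered result is what makes the workflow auditable: support teams can compare the exact request against the exact response without re-running the action.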

Request Body Parameters – Description & Info

A Chat Completion request consists of several important parameters that influence how the model behaves. Below is an overview of the general parameters shared across models. Some models support additional, model-specific parameters – you can find these in the official OpenAI documentation.

  • Model: The model parameter defines the specific GPT model used to generate the response. Because available models change frequently, always refer to the latest OpenAI documentation to ensure you are using the correct model. Examples include legacy completion models such as “text-davinci-003”, and there are specialized models for particular applications as well. If you are unsure which model to use, OpenAI provides a web-based platform where you can experiment with different models.
    Important: Current models and pricing must always be checked directly at OpenAI; pricing is listed on the OpenAI website.
  • Prompt: The prompt field defines the actual instruction for the model – what it should do, how it should answer, and which data (e.g., via placeholders) should be included. Placeholders from other modules can be used to inject dynamic content. A placeholder begins with “/”; the system automatically replaces it with the field's content when the workflow executes.
    Example: /exp_person.name inserts the name from module exp_person
  • Tools: Under Tools, extensions such as Web Search are available. This allows the model to pull in current information from the internet and include it directly in the response. This is especially useful for research, price information, fact-checking and market intelligence.
  • Target: Here you can specify where the generated response should be saved. A WYSIWYG data type is recommended, as it supports HTML formatting and thus enables an appealing presentation of the results.
  • Playground: The OpenAI Playground allows you to try out models, test parameters extensively, observe token usage in real time and simulate example prompts. The Playground is ideal for optimizing a workflow before implementing it inside the DataEngine.
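The placeholder mechanism and the resulting request body can be sketched like this. The regex, the `FIELDS` store, and the parameter names in `request_body` are illustrative assumptions, not the DataEngine implementation – exact request parameters vary by OpenAI API version, so always check the current OpenAI documentation.

```python
# Minimal sketch of placeholder resolution, assuming placeholders have the
# form /module.field (e.g. /exp_person.name). FIELDS is hypothetical data.
import re

FIELDS = {"exp_person.name": "Jane Doe"}  # example module content

def resolve_placeholders(prompt: str) -> str:
    """Replace each /module.field token with the stored field content."""
    return re.sub(
        r"/(\w+\.\w+)",
        lambda m: FIELDS.get(m.group(1), m.group(0)),  # leave unknown tokens as-is
        prompt,
    )

prompt = resolve_placeholders("Write a short greeting for /exp_person.name.")

# Schematic request body; keys are assumptions modeled on the parameters
# described above, not a definitive OpenAI API schema.
request_body = {
    "model": "<model-name>",           # see the Model parameter
    "input": prompt,                   # the resolved Prompt
    "tools": [{"type": "web_search"}], # optional Web Search extension
}
```

Resolving placeholders before the request is built is what keeps the JSON Request Field useful for debugging: it records exactly the text the model saw, not the unresolved template.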

Summary

An OpenAI workflow essentially consists of:

  • An input field (prompt)
  • JSON request
  • Model parameters
  • Tools such as Web Search
  • Target fields for formatted output
  • Complete JSON response for traceability

With the right configuration and clearly defined prompts, you can build a powerful, transparent, and well-documented AI workflow.

Best Practices

To ensure an OpenAI workflow operates reliably, efficiently, and with consistent high quality, the following points should be considered when creating prompts.

  • Formulate clear, unambiguous instructions – always state explicitly what the model should do and what it should not do. The clearer the assignment, the better the output.
  • Provide context – the model does not know your use case unless you tell it. Adding context directly in the prompt helps the model respond more accurately and appropriately.
  • Define rules and limitations – if something should not happen, state it explicitly. These constraints significantly increase reliability.
  • Specify the output format – The model follows your formatting instructions, such as: “Output the answer in a single sentence” or “Use a three-point bullet list”. You can also provide an example – often more effective than only a verbal description.
  • Only one task per prompt – If a prompt asks for multiple actions at once, the model may mix content or produce inconsistent results. One prompt = One task.
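The practices above can be combined into a simple prompt template: one task, explicit context, explicit rules, and a defined output format. The helper below is an illustrative suggestion, not a DataEngine feature; the section names are assumptions.

```python
# Illustrative prompt builder following the best practices above:
# one task per prompt, context included, rules stated, format specified.
def build_prompt(task: str, context: str, rules: list, output_format: str) -> str:
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"Task: {task}\n\n"
        f"Context: {context}\n\n"
        f"Rules:\n{rule_lines}\n\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    task="Summarize the customer inquiry.",
    context="The inquiry comes from a B2B support ticket.",
    rules=["Do not invent facts.", "Do not include personal data."],
    output_format="A three-point bullet list.",
)
```

Keeping each section separate makes it easy to review a prompt against the checklist: if the Rules block is empty or the Task line contains two verbs, the prompt likely needs tightening.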

Marini Systems GmbH | Website | Privacy Statement | Legal