Generate Response
Generates text responses from Gemini models based on the prompt or on global instructions; the global instructions take the highest priority.
Action Settings
| Setting | Description |
|---|---|
| Instructions | String, optional. Instructions for the Model. |
| Model | Enum, required. The name of the Model to use for generating the completion. |
| Temperature | Double, optional. Controls the randomness of the output. The default value varies by Model. Valid values range from 0.0 to 2.0. |
| TopP | Double, optional. The maximum cumulative probability of tokens to consider when sampling. |
| TopK | Int, optional. The maximum number of tokens to consider when sampling. |
| Seed | Int, optional. Seed used in decoding. If not set, the request uses a randomly generated seed. |
| MaxOutputTokens | Int32, optional. The maximum number of tokens to include in a response candidate. |
| StopSequences | String, optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response. |
| SafetyCategory | Enum, optional. The safety category to configure a threshold for. Valid values are: HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_DANGEROUS_CONTENT. |
| SafetyThreshold | Enum, optional. The safety threshold to configure for the safety category. Valid values are: BLOCK_LOW_AND_ABOVE, BLOCK_MEDIUM_AND_ABOVE, BLOCK_HIGH_AND_ABOVE, BLOCK_NONE. |
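To illustrate how these settings relate to one another, the sketch below assembles them into a `generationConfig`/`safetySettings` structure in the style of the Gemini `generateContent` request body. The field names follow the public API; the helper function itself is hypothetical and only validates the constraints stated in the table above.

```python
# Illustrative sketch: map the action settings above onto a
# generateContent-style JSON body. Only settings that are provided
# are included, mirroring their "optional" status in the table.

def build_generation_settings(temperature=None, top_p=None, top_k=None,
                              seed=None, max_output_tokens=None,
                              stop_sequences=None,
                              safety_category=None, safety_threshold=None):
    config = {}
    if temperature is not None:
        if not 0.0 <= temperature <= 2.0:
            raise ValueError("Temperature must be in the range [0.0, 2.0]")
        config["temperature"] = temperature
    if top_p is not None:
        config["topP"] = top_p
    if top_k is not None:
        config["topK"] = top_k
    if seed is not None:
        config["seed"] = seed
    if max_output_tokens is not None:
        config["maxOutputTokens"] = max_output_tokens
    if stop_sequences:
        if len(stop_sequences) > 5:
            raise ValueError("At most 5 stop sequences are allowed")
        config["stopSequences"] = list(stop_sequences)

    body = {"generationConfig": config}
    # A safety setting pairs one category with one threshold.
    if safety_category and safety_threshold:
        body["safetySettings"] = [
            {"category": safety_category, "threshold": safety_threshold}
        ]
    return body

body = build_generation_settings(
    temperature=0.7,
    stop_sequences=["\n\n"],
    safety_category="HARM_CATEGORY_HARASSMENT",
    safety_threshold="BLOCK_MEDIUM_AND_ABOVE",
)
```

Leaving a setting unset defers to the model's default, which is why the helper omits absent fields rather than sending explicit nulls.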
Action Parameters
| Parameter | Description |
|---|---|
| Prompt | String, required. The user-provided text to be processed by the Model. |
| CachedContent | String, optional. The name of the content cached to use as context to serve the prediction. Format: cachedContents/{cachedContent}. |
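The two parameters above can be combined into a single request. The following sketch, again assuming a `generateContent`-style body, shows the required Prompt and the optional CachedContent (the `cachedContents/example-cache-id` name is a placeholder, not a real resource) together with the format check implied by the table.

```python
# Illustrative sketch: assemble a request from the action parameters.
# The cached-content resource name below is a made-up placeholder.

def build_request(prompt, cached_content=None):
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
    }
    if cached_content:
        # The table specifies the format cachedContents/{cachedContent}.
        if not cached_content.startswith("cachedContents/"):
            raise ValueError("Expected format: cachedContents/{cachedContent}")
        body["cachedContent"] = cached_content
    return body

req = build_request("Summarize this document.",
                    cached_content="cachedContents/example-cache-id")
```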
Result
The generated response is returned in the Response_Result field. The action also returns the following fields: Response_Id, Response_FinishReason, Response_FinishMessage, Response_PromptTokenCount, Response_CachedContentTokenCount, Response_CandidatesTokenCount, Response_TotalTokenCount, Response_PromptFeedbackBlockReason, Response_PromptFeedbackSafetyRatings, Response_ModelVersion, Response_ModelStatusModelStage, Response_ModelStatusRetirementTime, Response_ModelStatusMessage.
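To show how a subset of these output fields might be derived, the sketch below flattens a `generateContent`-style response into `Response_*` keys. The sample response and the exact mapping are illustrative assumptions, not the action's documented internals.

```python
# Illustrative sketch: flatten a generateContent-style response into a
# few of the Response_* output fields listed above. The sample response
# below is made up for demonstration.

def flatten_response(resp):
    candidate = resp.get("candidates", [{}])[0]
    usage = resp.get("usageMetadata", {})
    parts = candidate.get("content", {}).get("parts", [])
    return {
        # Concatenate all text parts into the main result string.
        "Response_Result": "".join(p.get("text", "") for p in parts),
        "Response_FinishReason": candidate.get("finishReason"),
        "Response_PromptTokenCount": usage.get("promptTokenCount"),
        "Response_CandidatesTokenCount": usage.get("candidatesTokenCount"),
        "Response_TotalTokenCount": usage.get("totalTokenCount"),
        "Response_ModelVersion": resp.get("modelVersion"),
    }

sample = {
    "candidates": [{"content": {"parts": [{"text": "Hello!"}]},
                    "finishReason": "STOP"}],
    "usageMetadata": {"promptTokenCount": 4, "candidatesTokenCount": 2,
                      "totalTokenCount": 6},
    "modelVersion": "gemini-example",
}
fields = flatten_response(sample)
```

Fields absent from a given response (for example, cached-content token counts when no cache was used) would simply be unset.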