AI Assistance settings

To use AI assistance in Team Edition, you’ll need to complete a few setup steps. This article explains how to add your service credentials and adjust the settings specific to your AI service.

You can switch between OpenAI, Azure OpenAI, Gemini, and Ollama using the Engine dropdown menu. To get started, follow this guide to add your service keys and configure the settings specific to the AI service you’re using:

OpenAI
  1. Sign up on the OpenAI platform.
  2. Navigate to the API Keys section and generate a new secret key.
  3. Insert this key into Team Edition's Engine Settings.
  4. Choose the model.

    Team Edition currently supports:

    • gpt-3.5-turbo (recommended for SQL).
    • gpt-3.5-turbo-instruct.
    • gpt-4.
    • gpt-4-turbo.
    • gpt-4o.
    • gpt-4o-mini.
  5. Save changes.

Note: OpenAI services are available only in certain countries. Consult the supported countries list to verify availability in your location.
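
If you want to confirm that a newly generated key is valid before entering it in Engine Settings, you can test it outside Team Edition. The following is a minimal sketch, assuming Python with the requests package and the key stored in an OPENAI_API_KEY environment variable (both assumptions, not Team Edition requirements); it lists the models available to your key via OpenAI's /v1/models endpoint.

```python
# Optional sanity check: list the models an OpenAI secret key can access.
# Assumes the key is exported as the OPENAI_API_KEY environment variable.
import os
import requests

api_key = os.environ["OPENAI_API_KEY"]

response = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
response.raise_for_status()  # a 401 here means the key is invalid or revoked

for model in response.json()["data"]:
    print(model["id"])       # e.g. gpt-4o, gpt-4o-mini
```

If the request succeeds and the model you plan to select appears in the output, the key is ready to paste into Engine Settings.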

Azure OpenAI
  1. Sign up on the Azure platform.
  2. Navigate to the Azure Portal and create a new AI service under the AI + Machine Learning section.
  3. Generate and copy the credentials for the newly created service.
  4. Insert these credentials into Team Edition's Engine Settings.
  5. Save changes.
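
Once the credentials are saved, you can optionally verify them outside Team Edition. The sketch below is an illustration only, assuming Python with the requests package; the resource name, deployment name, API version, and the AZURE_OPENAI_API_KEY environment variable are placeholders you would replace with the values from your own Azure service.

```python
# Optional check of Azure OpenAI credentials: send one chat request to your deployment.
# All names below are placeholders; substitute your own endpoint, deployment, and key.
import os
import requests

endpoint = "https://my-resource.openai.azure.com"   # placeholder resource endpoint
deployment = "my-gpt-4o-deployment"                 # placeholder deployment name
api_version = "2024-02-01"                          # placeholder API version
api_key = os.environ["AZURE_OPENAI_API_KEY"]        # key copied from the Azure Portal

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/chat/completions",
    params={"api-version": api_version},
    headers={"api-key": api_key},
    json={"messages": [{"role": "user", "content": "Reply with the word OK."}]},
    timeout=30,
)
response.raise_for_status()  # 401 or 404 usually means a wrong key, endpoint, or deployment name
print(response.json()["choices"][0]["message"]["content"])
```
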
Google Gemini
  1. Sign up on the Google Cloud Platform.
  2. Navigate to the Google Cloud Console and create a new project.
  3. Enable the Gemini API for your project by searching for the Gemini API in the marketplace and clicking Enable.
  4. Create credentials for your project by navigating to the Credentials page under APIs & Services. Choose Create credentials and select the appropriate type for your Gemini integration.
  5. Insert these credentials into Team Edition's Engine Settings.
  6. Save changes.

Note: Google Gemini services are subject to regional availability. Check the list of available regions to ensure access in your area.
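
To confirm that the credentials work before saving them, you can list the models your key can reach. The sketch below is a minimal example, assuming an API key for the Gemini API stored in a GEMINI_API_KEY environment variable (a placeholder name) and Python with the requests package.

```python
# Optional check: list the Gemini models available to your API key.
# GEMINI_API_KEY is a placeholder environment variable name.
import os
import requests

api_key = os.environ["GEMINI_API_KEY"]

response = requests.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    params={"key": api_key},
    timeout=30,
)
response.raise_for_status()  # 400/403 usually means the key or project setup is wrong

for model in response.json()["models"]:
    print(model["name"])     # e.g. models/gemini-1.5-pro
```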

Ollama

Ensure that Ollama is already installed and running on a server. You will need the host address where Ollama is installed to proceed.

  1. Specify the host address of your Ollama server in the Hostname field, ensuring it follows the format http://host:port.
  2. Set the Model, Context Size, and Temperature values you need for your integration.
  3. Save changes.
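
Before saving, you can confirm that the server is reachable at the address you entered. The sketch below is a minimal example, assuming Python with the requests package; the address is a placeholder, and 11434 is Ollama's default port.

```python
# Optional check: confirm the Ollama server responds and list the models it serves.
# Replace the placeholder address with your own, in the http://host:port format.
import requests

ollama_host = "http://localhost:11434"   # placeholder Ollama host address

response = requests.get(f"{ollama_host}/api/tags", timeout=10)
response.raise_for_status()

for model in response.json().get("models", []):
    print(model["name"])     # names you can enter in the Model field
```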

Preferences

To configure engine-specific settings, navigate to Settings -> Administration -> AI Settings -> Engine settings. The following settings are available:

  • API token: your secret key from the OpenAI platform.
  • Model: the AI model to use.
  • Temperature: controls the AI's creativity, from 0.0 (more precise) to 0.9 (more diverse). Note that a higher temperature can lead to less predictable results.
  • Write GPT queries to debug log: logs your AI requests. For more details on logging, see Log Viewer.
  • Endpoint: a custom endpoint URL for Azure OpenAI interactions.
  • API version: the API version you wish to use.
  • Deployment: the deployment name chosen during model deployment.
  • Context size: the context size, between 2048 and 32768. A larger value allows the AI to use more data for better answers but may slow down response time. Choose based on your balance of accuracy and speed.

Note: The availability of these settings may vary depending on the service you are using.
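
Team Edition applies these settings to its own requests internally; the sketch below is only an illustration of what Temperature and a token limit typically mean in an OpenAI-style chat completion request, so you can see why lower temperatures produce more deterministic SQL. The model name, prompt, and OPENAI_API_KEY variable are placeholders, not values the product requires.

```python
# Illustration only: the effect of temperature and a token limit in an
# OpenAI-style chat completion request. Not how Team Edition builds its requests.
import os
import requests

api_key = os.environ["OPENAI_API_KEY"]   # placeholder environment variable

payload = {
    "model": "gpt-4o-mini",              # placeholder model choice
    "temperature": 0.0,                  # 0.0 = more precise SQL, 0.9 = more diverse wording
    "max_tokens": 512,                   # caps the reply; the context size caps prompt + reply together
    "messages": [
        {"role": "user", "content": "Write a SQL query that counts rows in the Customer table."}
    ],
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```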

Disabling AI features

To hide the AI smart completion icon and disable AI commands in the SQL Editor:

  1. Navigate to Administration -> Server Configuration -> Services section.
  2. Deselect the AI option.