Team Edition Documentation

AI Assistance settings

To use AI assistance in Team Edition, you'll need to complete a few setup steps. This article explains how to add your service credentials and adjust the settings specific to your AI service.

Basic setup

AI assistance is enabled by default and uses OpenAI; you can switch to Azure OpenAI, Gemini, or Ollama via the Service dropdown menu. To get started, follow this guide to add your service keys and configure the settings specific to the AI service you're using:

OpenAI

  1. Sign up on the OpenAI platform.
  2. Navigate to the API Keys section and generate a new secret key.
  3. Insert this key into Team Edition's API token setting.
  4. Choose the model.

    Team Edition currently supports:

    • gpt-3.5-turbo (recommended for SQL).
    • gpt-3.5-turbo-instruct.
    • gpt-4.
    • gpt-4-turbo.
    • gpt-4o.
    • gpt-4o-mini.
  5. Apply the changes.

Note: OpenAI services are available in specific countries. Consult the supported countries list to verify availability in your location.
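Before pasting a key into the API token setting, it can help to catch obvious copy-paste mistakes. Below is a minimal sketch of such a sanity check; it assumes only that OpenAI secret keys begin with the `sk-` prefix, and it does not validate the key against the OpenAI API:

```python
def looks_like_openai_key(key: str) -> bool:
    """Rough local sanity check for an OpenAI secret key.

    OpenAI secret keys start with "sk-". The exact length and suffix
    format are not guaranteed, so this only catches obvious mistakes
    such as stray whitespace, an empty string, or a truncated paste.
    """
    key = key.strip()
    return key.startswith("sk-") and len(key) > 20

print(looks_like_openai_key("sk-" + "x" * 40))  # True: well-formed-looking key
print(looks_like_openai_key("my-password"))     # False: clearly not an API key
```

A check like this only filters out malformed input; the key is actually verified the first time Team Edition sends a request to OpenAI.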

Azure OpenAI

  1. Sign up on the Azure platform.
  2. Navigate to the Azure Portal and create a new AI service under the AI + Machine Learning section.
  3. Generate and copy the credentials for the newly created service.
  4. Insert these credentials into Team Edition's Engine Settings.
  5. Apply the changes.
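The Endpoint, Deployment, and API version settings (described in the Preferences table below) together determine the URL Team Edition calls. As a hedged sketch of how they combine, following the documented Azure OpenAI REST URL pattern (the resource and deployment names here are hypothetical):

```python
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build a chat-completions URL from the three Azure OpenAI settings.

    Follows the Azure OpenAI REST pattern:
    {endpoint}/openai/deployments/{deployment}/chat/completions?api-version=...
    """
    return (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

# Hypothetical values for illustration only:
print(azure_chat_url("https://my-resource.openai.azure.com",
                     "my-gpt4o-deployment", "2024-02-01"))
```

Note that the deployment name is the one you chose when deploying the model in Azure, which may differ from the underlying model name.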

Google Gemini

  1. Sign up on the Google Cloud Platform.
  2. Navigate to the Google Cloud Console and create a new project.
  3. Enable the Gemini API for your project by searching for the Gemini API in the marketplace and clicking Enable.
  4. Create credentials for your project by navigating to the Credentials page under APIs & Services. Choose Create credentials and select the appropriate type for your Gemini integration.
  5. Copy the generated credentials.
  6. Insert these credentials into Team Edition's Engine Settings.
  7. Apply the changes.

Note: Google Gemini services are subject to regional availability. Check the list of available regions to ensure access in your area.
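A quick way to confirm a Gemini API key works before pasting it into Team Edition's Engine Settings is to fetch the public models-list endpoint with it. This sketch only builds that URL (assuming the `v1beta` Generative Language API base path); fetching it, e.g. with curl, returns the models the key can access:

```python
from urllib.parse import urlencode

GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta"

def gemini_models_url(api_key: str) -> str:
    """Return the GET URL that lists models accessible to an API key."""
    return f"{GEMINI_BASE}/models?{urlencode({'key': api_key})}"

# Placeholder key for illustration; substitute your generated credential.
print(gemini_models_url("YOUR_API_KEY"))
```

If the request succeeds with an HTTP 200 and a JSON list of models, the key is valid and enabled for the Gemini API.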

Ollama

Ensure that Ollama is already installed and running on a server. You will need the host address where Ollama is installed to proceed.

  1. Specify the host address of your Ollama server in the Instance host field, ensuring it follows the format http://host:port.
  2. Click Load Models. If the host address is correct, Team Edition will display the available models from your Ollama server in the Model dropdown menu.
  3. Select the model you need for your integration.
  4. Apply the changes.
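Step 1 requires the host address to follow the http://host:port format. A minimal sketch of a local check for that format, using only the standard library (Ollama's default port, 11434, is shown for illustration):

```python
from urllib.parse import urlparse

def valid_instance_host(address: str) -> bool:
    """Check that an address matches the http://host:port format
    expected by the Instance host field: a scheme, a hostname,
    and an explicit port must all be present."""
    parsed = urlparse(address)
    return (parsed.scheme in ("http", "https")
            and bool(parsed.hostname)
            and parsed.port is not None)

print(valid_instance_host("http://localhost:11434"))  # True: Ollama's default port
print(valid_instance_host("localhost:11434"))         # False: scheme is missing
```

If Load Models fails even though the format is correct, verify that the Ollama server is running and reachable from the machine where Team Edition is installed.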

Preferences

For configuring specific settings, navigate to Window -> Preferences -> General -> AI.

| Setting | Description |
|------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| Enable smart completion | Displays the AI assistance in the SQL Editor. |
| Include source in query comment | Shows your original request above the AI-generated query in the SQL Editor. |
| Execute SQL immediately | Runs the translated SQL query immediately after generation. |
| Send attribute type information | Sends attribute type information to the AI vendor. Improves completion but consumes more tokens. |
| Send object description | Sends object descriptions to the AI vendor. Improves completion but may consume a significant number of tokens. |
| API token | Input your secret key from the OpenAI platform. |
| Model | Choose the AI model. |
| Temperature | Controls the AI's creativity, from 0.0 (more precise) to 0.9 (more diverse). Note that a higher temperature can lead to less predictable results. |
| Write GPT/Ollama queries to debug log | Logs your AI requests. For more details on logging, see Log Viewer. |
| Send foreign keys information | Helps the AI understand table relationships. |
| Send unique keys and indexes information | Assists the AI in crafting complex queries. |
| Format SQL query | Adds formatting to the generated SQL. |
| Table join rule | Choose between explicit JOIN or JOIN with sub-queries. |
| Endpoint | Configure a custom endpoint URL for Azure OpenAI interactions. |
| API version | Select the version of the API you wish to use. |
| Deployment | Specify the deployment name chosen during model deployment. |
| Context size | Choose a context size between 2048 and 32768. A larger context lets the AI use more data for better answers but may slow response time. Choose based on your balance of accuracy and speed. |

Note: The availability of these settings may vary depending on the service you are using.