Ollama
Note: This feature is available in the Lite, Enterprise, Ultimate, and Team editions only.
To use Ollama as an AI provider in DBeaver, configure it as follows:
Ensure that Ollama is already installed and running on a server.
For installation instructions, see the Ollama docs.
To proceed, you will need the host address of the server where Ollama is running.
- Specify the host address of your Ollama server in the Instance host field, ensuring it follows the format http://host:port.
- Click Load Models. If the host address is correct, DBeaver will display the available models from your Ollama server in the Model dropdown menu.
- Select the model you need for your integration.
- Apply the changes.
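To verify the host address outside of DBeaver before clicking Load Models, you can query the Ollama REST API directly. The sketch below assumes the standard Ollama endpoint `/api/tags`, which returns the models installed on the server:

```python
import json
import urllib.request


def tags_url(host: str) -> str:
    """Build the URL of Ollama's model-listing endpoint from a host address."""
    return host.rstrip("/") + "/api/tags"


def list_models(host: str) -> list[str]:
    """Return the names of the models available on the Ollama server."""
    with urllib.request.urlopen(tags_url(host), timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


if __name__ == "__main__":
    # Same default host that DBeaver pre-fills in the Instance host field.
    try:
        print(list_models("http://localhost:11434"))
    except OSError as exc:
        print(f"Ollama server not reachable: {exc}")
```

If this prints a list of model names, the same host address should work in the Instance host field.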
Engine settings
Setting | Description | Default |
---|---|---|
Instance host | Specify the address of the Ollama server in the format http://host:port. This is required for connecting and loading available models. | http://localhost:11434 |
Model | Choose the AI model. | |
Refresh models | Re-reads the list of all enabled models from the provider. | |
Context size | Choose the context size, between 2048 and 32768. A larger value allows the AI to use more data for better answers but may slow down response time. Choose based on your balance of accuracy and speed. | 3000 |
Temperature | Controls the AI's creativity, from 0.0 (more precise) to 0.9 (more diverse). Note that a higher temperature can lead to less predictable results. | 0.0 |
Write Ollama queries to debug log | Logs your AI requests. For more details on logging, see Log Viewer. | Disabled |
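Context size and Temperature correspond to standard Ollama request options (`num_ctx` and `temperature`). The following sketch is illustrative rather than DBeaver's actual implementation; it shows how such settings shape a request body for Ollama's `/api/generate` endpoint:

```python
import json


def build_generate_payload(model: str, prompt: str,
                           context_size: int = 3000,
                           temperature: float = 0.0) -> dict:
    """Shape a request body for Ollama's /api/generate endpoint.

    context_size and temperature mirror the engine settings above;
    Ollama expects them as the num_ctx and temperature options.
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single response instead of a stream
        "options": {
            "num_ctx": context_size,
            "temperature": temperature,
        },
    }


# "llama3" is a placeholder model name; use one listed by your server.
payload = build_generate_payload("llama3", "List all tables in my schema.")
print(json.dumps(payload, indent=2))
```

Raising `num_ctx` lets the model see more context per request at the cost of memory and speed, which is the same trade-off described for Context size above.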