Ollama
Ensure that Ollama is already installed and running on a server. You will need the host address where Ollama is installed to proceed.
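Before configuring Team Edition, you can confirm that the server is reachable and see which models it has installed. Below is a minimal Python sketch, assuming the default Ollama port 11434 (the host name is a placeholder; substitute your own):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # replace with your server's http://host:port

# GET /api/tags lists the models installed on the Ollama server.
with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags", timeout=10) as resp:
    models = json.load(resp)["models"]

print("Available models:", [m["name"] for m in models])
```

If this request fails, check that the Ollama service is running and that the port is open before proceeding.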
To use Ollama as an AI provider in Team Edition, configure it as follows:
- As an administrator, specify the host address of your Ollama server in the Hostname field, ensuring it follows the format http://host:port
- Insert the Model, Context Size, and Temperature you need for your integration
- Save changes
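For reference, these three settings map directly onto Ollama's generation API. Team Edition performs the equivalent call internally; the sketch below only illustrates where each setting ends up, assuming the default port and the llama2:latest model (the prompt is purely illustrative):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # the Hostname value, http://host:port

payload = {
    "model": "llama2:latest",  # Model
    "prompt": "Explain what a database index is in one sentence.",
    "stream": False,
    "options": {
        "num_ctx": 3000,        # Context size
        "temperature": 0.0,     # Temperature
    },
}

req = urllib.request.Request(
    f"{OLLAMA_HOST}/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.load(resp)["response"])
```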
Engine settings
| Setting | Description | Default |
|---|---|---|
| Hostname | Specify the address of the Ollama server in the format http://host:port. This is required for connecting and loading available models. | |
| Model | Choose the AI model. | llama2:latest |
| Context size | Choose the context size, between 2048 and 32768. A larger value allows the AI to use more data for better answers but may slow down response time. Choose based on your balance of accuracy and speed. | 3000 |
| Temperature | Controls the AI's creativity, from 0.0 (more precise) to 0.9 (more diverse). Note that a higher temperature can lead to less predictable results. | 0.0 |
| Write AI queries to debug log | Logs your AI requests. These entries go to the server log. | false |
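To see the Temperature trade-off from the table in practice, you can send the same prompt at both ends of the range: 0.0 should return near-identical answers across runs, while 0.9 varies noticeably. This is a sketch under the same assumptions as above (default port, llama2:latest, illustrative prompt):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # replace with your server's http://host:port

def generate(temperature: float) -> str:
    """Send one non-streaming generation request at the given temperature."""
    payload = {
        "model": "llama2:latest",
        "prompt": "Suggest a name for a sales report table.",
        "stream": False,
        "options": {"temperature": temperature},
    }
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]

for t in (0.0, 0.9):
    print(f"temperature={t}: {generate(t)}")
```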