Using Local AI Models with Ollama and Prompt Mixer

April 1, 2024

In the world of artificial intelligence, privacy and control over data are becoming increasingly important. Cloud-based AI services, while convenient, can raise concerns about data security and privacy. Fortunately, there are tools available that allow you to run AI models locally on your machine, giving you full control over your data while still harnessing the power of advanced language models.

One such tool is Ollama, an open-source platform designed for running large language models locally. Combined with Prompt Mixer, a powerful prompt engineering tool, you can create and run prompts using local AI models, all without relying on cloud-based services.

Getting Started with Local AI Models

To get started with local AI models, you'll need to follow a few simple steps:

1. Install Ollama

Ollama is the backbone of this setup, enabling you to run large language models on your local machine. Head over to the official Ollama website and follow the installation instructions for your operating system.

2. Download AI Models

Ollama supports a wide range of open models, including Llama 2, Mistral, and others. Browse the model library on the Ollama website and download the ones you wish to use, for example by running ollama pull llama2 from a terminal.
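A script can also ask a running Ollama server to download a model through its local REST API. The following is a minimal sketch, assuming the default endpoint on port 11434 and the /api/pull request shape from Ollama's API documentation; the helper names are my own:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def pull_request_body(model: str) -> bytes:
    """Build the JSON body for Ollama's /api/pull endpoint."""
    return json.dumps({"name": model, "stream": False}).encode("utf-8")

def pull_model(model: str) -> dict:
    """Ask a running Ollama server to download a model, e.g. 'llama2'.

    Requires the Ollama server to be running locally; blocks until the
    download finishes because streaming is disabled above.
    """
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/pull",
        data=pull_request_body(model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In practice the ollama pull command does the same job; the sketch simply shows that everything goes through a plain local HTTP interface.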

3. Install Prompt Mixer

Prompt Mixer is a desktop application that simplifies the process of creating and managing prompts. Visit the Prompt Mixer download page and follow the installation instructions for your operating system.

4. Set Up the Ollama Connector

To use local models with Prompt Mixer, you'll need to install the Ollama Connector, a plugin that facilitates communication between Prompt Mixer and Ollama. Open Prompt Mixer, navigate to the "Connectors" tab, and search for the "Ollama Connector." Install the connector and follow any additional instructions provided.

Unleashing the Power of Local AI Models

With the setup completed, you can now start creating and running prompts using local AI models. Here's how it works:

1. Create or Open a Prompt

In Prompt Mixer, create a new prompt or open an existing one from your library.

2. Select the Local Model

Choose the local model you want to use for generating responses to your prompt.

3. Run the Prompt

Execute the prompt, and Prompt Mixer will communicate with Ollama to generate the output using the selected local model.
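Under the hood, Ollama serves a local HTTP API (by default on port 11434), and a connector can run a prompt with a single request. Here is a rough Python sketch of such a call, assuming the documented /api/generate endpoint; the function names are illustrative, not Prompt Mixer's actual internals:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def generate_request_body(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """Run a prompt against a locally installed model and return the text.

    Requires a running Ollama server with the named model already pulled.
    """
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=generate_request_body(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is localhost, the prompt and the generated text never leave your machine.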

The Benefits of Using Local AI Models

Running AI models locally offers several advantages over cloud-based services:

1. Data Privacy and Security

Because your prompts and outputs stay on your local machine, they are never transmitted to third-party servers, greatly reducing the risk of data breaches or unauthorized access.

2. Control Over Model Selection

With local models, you have the freedom to choose the specific AI model you want to use, tailoring the experience to your needs and preferences.

3. Offline Capabilities

Once you have the necessary setup, you can run prompts and generate outputs without an internet connection, ensuring uninterrupted productivity.

4. Cost Savings

While there may be upfront costs for hardware and software, running local models can be more cost-effective in the long run compared to paying for cloud-based AI services.

Considerations and Limitations

It's important to note that running large language models locally can be resource-intensive, requiring a machine with sufficient computing power, memory, and storage. Additionally, the performance of local models may vary depending on the hardware and software configurations.
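As a rough rule of thumb, a model's memory footprint is at least its parameter count times the bytes per weight at the chosen quantization level. The sketch below encodes that back-of-the-envelope estimate (weights only; runtime overhead and context buffers come on top):

```python
def approx_model_memory_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Rough lower bound (in GiB) on the RAM needed to hold a model's weights.

    params_billion: parameter count in billions (e.g. 7 for a 7B model).
    bits_per_weight: 4 for common 4-bit quantization, 16 for half precision.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight / (1024 ** 3)

# A 7B model quantized to 4 bits needs on the order of 3-4 GiB for weights
# alone, while the same model at 16-bit precision needs roughly four times that.
```

This is why smaller quantized models are the usual starting point on laptops, with larger or higher-precision models reserved for machines with more RAM or a capable GPU.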


Conclusion

With tools like Ollama and Prompt Mixer, you can harness the power of advanced AI models while maintaining complete control over your data and privacy. By following the steps outlined in this article, you can set up a local AI environment and start creating and running prompts using local models. Embrace the future of AI while prioritizing data security and autonomy.