
Easy Flutter Integration with AI Using Mistral AI





AI lets developers add new features and capabilities to mobile applications. But current smartphones aren't powerful enough to run large language models (LLMs) offline efficiently. The Mistral AI API is an easy way to access cloud-based AI and build more complex AI functionality into your app. In this article, we'll show you how to use an LLM in a Flutter project through the Mistral AI API. We've created an open-source package for Flutter AI integration with Mistral AI.

What Is Mistral AI?

Mistral AI is a French company that provides AI services. It's particularly valuable for projects operating within the European Union, as the company is GDPR compliant. The team behind Mistral AI works hard to create AI that's easy to use, helpful, and trustworthy, focusing on building models that can be used in real-world contexts.

Why Use AI in a Mobile App?

Integrating AI into mobile applications significantly enhances the user experience by enabling personalized content, intuitive interfaces, and advanced features such as natural language processing, image recognition, and automated customer support. These intelligent capabilities improve engagement and help meet individual user needs and preferences. Through AI, mobile apps become more than just tools: they evolve into personalized, interactive companions that understand and anticipate user requirements.

How to Use Mistral AI in Flutter

To make the integration with the Mistral AI API easier, we've created the open-source Mistral AI Client package for Flutter. The package streamlines the integration process, letting you focus on creating custom AI functionality. Our client is easy to use, and you can get started in just a few minutes. Simply follow the README instructions in the package.

Initiating the Mistral AI Client

To start working with Mistral AI, you need to create an account on the Mistral AI platform and generate an API key. The easiest possible use of the Mistral AI Client is just saying hello to the chat:
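As a minimal sketch, this is roughly what the hello looks like. The class and parameter names (MistralAIClient, ChatParams, ChatMessage) follow the package README at the time of writing, so double-check them against the current API:

```dart
import 'package:mistralai_client_dart/mistralai_client_dart.dart';

Future<void> main() async {
  // The API key comes from the Mistral AI platform.
  final client = MistralAIClient(apiKey: 'your-api-key');

  // The simplest possible chat: a single user message saying hello.
  final response = await client.chat(
    ChatParams(
      model: 'mistral-small',
      messages: [ChatMessage(role: 'user', content: 'Hello chat!')],
    ),
  );

  // The assistant's reply lives in the first choice of the response.
  print(response.choices.first.message.content);
}
```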

Use Cases for Mistral AI in Mobile Applications

Now that you have the Mistral AI Client set up and ready to use, let’s go over possible use cases with implementation steps.

You can see how all four use cases work using our interactive demo website built in Flutter.

We're going to cover four example uses of the AI chat.

  1. The first example is a simple clone of ChatGPT by OpenAI (really simple 😉).
  2. Next, we have a text summary example where you can paste some larger text and get a summary of it.
  3. The third use case is AI as a controller. That is, we'll try to recreate Google Assistant or Alexa by writing commands and controlling an imaginary smart home.
  4. Last but not least is the book search example that will allow you to find answers to questions about the book.

Let’s dive into each example, shall we?

1. AI Chat

Let’s start with a simple chatbot that can answer your questions. It's an easy way to start using AI in your app.


The first step is creating a simple chat interface and then using the Mistral AI Client to send messages to the chat. The chatbot will respond to your messages.

The most important thing is to keep the conversation context so that the chatbot can understand what you are talking about. You can do it by sending the previous messages to the chatbot. Let's see how to do it in the code:
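As a plain-Dart sketch (ChatHistory is a hypothetical helper, not a class from the package), keeping context boils down to resending the growing message list with every request:

```dart
// Messages are sent as {role, content} pairs, like in the Mistral AI chat API.
typedef Message = Map<String, String>;

class ChatHistory {
  final List<Message> _messages = [];

  List<Message> get messages => List.unmodifiable(_messages);

  // Add the user's message before sending the whole history to the API.
  void addUserMessage(String content) =>
      _messages.add({'role': 'user', 'content': content});

  // Add the chatbot's reply so the next request carries the full context.
  void addAssistantMessage(String content) =>
      _messages.add({'role': 'assistant', 'content': content});
}

void main() {
  final history = ChatHistory();
  history.addUserMessage('My name is Ada.');
  history.addAssistantMessage('Nice to meet you, Ada!');
  history.addUserMessage('What is my name?');
  // All three messages travel with the next request, so the model still
  // remembers the user's name when answering.
  print(history.messages.length); // 3
}
```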

Keep in mind that this isn't a real chat implementation; it's just a showcase of how to keep the conversation context.

You can see that we're sending the previous messages to the chatbot, in the order they appeared in the conversation. Each time the chatbot responds, we add the response to the message list and send it again with the new user message.

Without this ping-pong of messages the chatbot wouldn’t understand what you talked about and wouldn’t be able to respond properly.

You probably noticed the “role” field. The role is used to distinguish between “user” and “assistant” (chatbot) messages.

The last part to uncover is the “model” field. It specifies which model should be used for the chat. You can find more about models in the Mistral AI documentation.

That's it! You have a simple chatbot in your app. Of course, it's not a production-ready implementation, but it's a good start to understanding how to chat with AI in your app.

2. Text Summary

You don’t always have the time to read long articles. AI helps you build a tool that will summarize long texts for you, giving you the gist in just a few seconds.

It works similarly to the chat example, but this time it’s a two-message conversation. The first message is the text you want to summarize, and the second is the text's summary. The most important part is in the first message where you have to give AI instructions on what to do, otherwise known as a prompt. 

Let's see how it works in the code:
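One way to sketch it in plain Dart (buildSummaryPrompt is a hypothetical helper, and the instruction wording is an assumption to iterate on):

```dart
// Builds the first message of the two-message conversation: the instructions
// plus the text to summarize.
String buildSummaryPrompt(String text, {int maxSentences = 3}) {
  return 'Summarize the text below in at most $maxSentences sentences. '
      'Keep the key facts and skip minor details.\n\n'
      'Text:\n$text';
}

void main() {
  final prompt = buildSummaryPrompt('Flutter is an open-source UI toolkit...');
  // Send `prompt` as the single user message through the Mistral AI Client;
  // the assistant's reply is the summary.
  print(prompt);
}
```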

Easy right? Let's break it down 🤓

First, we have to create a prompt for the AI. It's probably the most important and the hardest part of the whole process. There's no single rule on how to create a prompt, but we have some guidelines for you.

The prompt should contain the text you want to summarize and a few instructions for the AI. Specify the length of the summary and any other guidelines that help summarize the text better. At this stage, iterating on and testing the prompt is the key to success.

After creating the prompt, we can send it to AI and wait for a response. The response will contain a summary of the text. It’s that simple.

If you want to see the implementation of that example in detail, you can find it in our repository.

When exploring AI internally, we built taim, a free Flutter-based content summarizer that uses GPT-3.5 to let readers get the gist of an article or video in seconds.

3. AI As a Controller

Imagine you have a smart home and want to control it with your voice, or you want to trigger actions in your app using commands. Let's skip the voice-to-text step and build a POC of the AI controller.

Let's start with the chat construction:
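The construction can be sketched in plain Dart as a list of {role, content} maps (buildControllerMessages is a hypothetical helper, not part of the package):

```dart
// Assembles the controller conversation: two system messages for context,
// then the user's command.
List<Map<String, String>> buildControllerMessages({
  required String controllerDescription,
  required String controllerContext,
  required String command,
}) {
  return [
    // System messages give the chatbot its context before the user speaks.
    {'role': 'system', 'content': controllerDescription},
    {'role': 'system', 'content': controllerContext},
    // The user message carries the actual command to interpret.
    {'role': 'user', 'content': command},
  ];
}

void main() {
  final messages = buildControllerMessages(
    controllerDescription: 'You are a smart home controller...',
    controllerContext: 'Available functions: ...',
    command: 'Turn on the light in the living room.',
  );
  print(messages.length); // 3
}
```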

The new thing in the code is the “system” role. It's used to give the chatbot context before the conversation starts. In this case, we're sending a description of the controller and the context of the controller: a list of available functions and the controller's settings. The last message is the user message with the command. The AI interprets the command and returns the result. You're probably wondering what the controllerDescription and controllerContext are. Let's see:
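As a hypothetical illustration (the real strings in the example repository are more elaborate):

```dart
// A short description of the controller's purpose and output format.
const controllerDescription = '''
You are a smart home controller. You can only call the functions listed in
your context. Reply with exactly one function call and nothing else.
''';

// Available functions, the current state, and an example exchange.
const controllerContext = '''
Available functions:
- setLight(room, on|off)
- setTemperature(room, celsius)

Current state:
- livingRoom: light off, temperature 20

Example:
User: Turn on the light in the living room.
Response: setLight(livingRoom, on)
''';

void main() {
  print(controllerDescription);
  print(controllerContext);
}
```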

The controller description is a simple description of the controller. It defines the general purpose of the controller and says what the controller can do and what it should return.

The controller context is a collection of available functions along with example commands and expected responses. It also contains the current state of the controller, so the AI can use it to make decisions. If you want to see the implementation of this example in detail, you can find it in our repository.

That's it! You have just created a simple AI controller. You can use it to control your fictional smart home or to control your app using commands.

4. Search in a Book

The last example is a book search. It's a simple example of how to approach a more complex problem using an AI model in your app. With the book search, you'll be able to ask questions about a book and get answers.

So how do you search in a book? First, let's find a book. In our example, we'll use “Twenty Thousand Leagues Under the Sea” by Jules Verne. You can get a copy from Project Gutenberg.

When you have the book text, it's time to prepare the search data from it. We'll do it in the following way:

  • Split the book into fragments of text
  • For each fragment we will calculate embeddings
  • Save the fragment and its embeddings to a JSON file

The first question is how to split the book into fragments of text. We'll split the book into fragments of 1000 characters, where each fragment overlaps the previous one by 100 characters.

To split the book into fragments, you can use RecursiveCharacterTextSplitter from the langchain package.
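If you'd rather avoid the dependency, the 1000-character / 100-character-overlap split fits in a few lines of plain Dart. A sketch (splitIntoFragments is a hypothetical helper, not the langchain API):

```dart
import 'dart:math';

// Splits text into fixed-size fragments where each fragment overlaps the
// previous one, so sentences cut at a boundary still appear whole somewhere.
List<String> splitIntoFragments(
  String text, {
  int fragmentSize = 1000,
  int overlap = 100,
}) {
  assert(fragmentSize > overlap, 'fragmentSize must be larger than overlap');
  final fragments = <String>[];
  final step = fragmentSize - overlap;
  for (var start = 0; start < text.length; start += step) {
    final end = min(start + fragmentSize, text.length);
    fragments.add(text.substring(start, end));
    if (end == text.length) break;
  }
  return fragments;
}
```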

Now let’s calculate embeddings. What are embeddings? They're a way to represent text as a vector of numbers, which makes the text easier to compare and analyze. Here’s more about embeddings.

To calculate embeddings for each fragment, we’ll use our client and the embeddings method.
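A sketch of that step, assuming the embeddings method and parameter classes match the package README at the time of writing (EmbeddingParams and the mistral-embed model; verify against the current API):

```dart
import 'dart:convert';
import 'dart:io';

import 'package:mistralai_client_dart/mistralai_client_dart.dart';

// Calculates an embedding for every fragment and saves the pairs to a JSON
// file, so the search can reuse them without calling the API again.
Future<void> saveEmbeddings(List<String> fragments) async {
  final client = MistralAIClient(apiKey: 'your-api-key');

  final entries = <Map<String, dynamic>>[];
  for (final fragment in fragments) {
    final result = await client.embeddings(
      EmbeddingParams(model: 'mistral-embed', input: [fragment]),
    );
    entries.add({
      'text': fragment,
      'embedding': result.data.first.embedding,
    });
  }

  await File('book_embeddings.json').writeAsString(jsonEncode(entries));
}
```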

The file structure should look like this:
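Something along these lines; the field names are an assumption based on the steps above, and the vectors are truncated here (real mistral-embed vectors are much longer):

```json
[
  {
    "text": "first 1000-character fragment of the book...",
    "embedding": [0.0123, -0.0456, 0.0789]
  },
  {
    "text": "next fragment, overlapping the previous one by 100 characters...",
    "embedding": [0.0234, -0.0567, 0.0891]
  }
]
```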

Now we have a JSON file with fragments and their embeddings. We can use it to search for an answer to a question about the book. So how do we do it? Let's create a simple algorithm to get an answer to a question.

  1. Calculate the embeddings for the question (so we can compare it with the book fragments)
  2. Calculate the similarity between the question embeddings and the book fragments embeddings
  3. Find a couple of the most similar fragments
  4. Send a prompt for the chatbot with the most similar fragments and the question
  5. Get the answer from the chatbot

Calculating the embeddings for the question works the same way as for the book fragments, so we won't show it here. The next step is to calculate the similarity between the question embeddings and the book fragment embeddings. We'll use cosine similarity for this.
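A straightforward version of such a calculateCosineSimilarity function, using the standard formula dot(a, b) / (|a| · |b|):

```dart
import 'dart:math';

// Cosine similarity between two equal-length vectors: 1.0 means the same
// direction, 0.0 means orthogonal (unrelated) vectors.
double calculateCosineSimilarity(List<double> a, List<double> b) {
  assert(a.length == b.length, 'vectors must have the same length');
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (sqrt(normA) * sqrt(normB));
}

void main() {
  print(calculateCosineSimilarity([1.0, 0.0], [1.0, 0.0])); // 1.0
  print(calculateCosineSimilarity([1.0, 0.0], [0.0, 1.0])); // 0.0
}
```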

The calculateCosineSimilarity function is a simple function to calculate the cosine similarity between two vectors. You can find a ready-to-use implementation in the repository with the full example.

Now that we have the similarities between the question embeddings and the book fragment embeddings, we can find the most similar fragments and send them, together with the question, in a prompt to the chatbot to finally get the answer.
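The last steps can be sketched like this. The helper names are illustrative, the entry shape mirrors the JSON file prepared earlier, and the prompt wording is an assumption to tune for your own book:

```dart
// Ranks fragments by similarity to the question and keeps the top `count`.
List<String> mostSimilarFragments(
  List<double> questionEmbedding,
  List<Map<String, dynamic>> entries, {
  int count = 3,
  required double Function(List<double>, List<double>) similarity,
}) {
  // Score every fragment against the question, then sort highest first.
  final scored = <MapEntry<String, double>>[
    for (final e in entries)
      MapEntry(
        e['text'] as String,
        similarity(questionEmbedding, (e['embedding'] as List).cast<double>()),
      ),
  ]..sort((a, b) => b.value.compareTo(a.value));
  return scored.take(count).map((e) => e.key).toList();
}

// Builds the final prompt: the best-matching fragments plus the question.
String buildAnswerPrompt(String question, List<String> fragments) {
  return 'Answer the question using only the book fragments below.\n\n'
      'Fragments:\n${fragments.join('\n---\n')}\n\n'
      'Question: $question';
}
```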

And that's it! You have a simple search in a book. For a full example, check our repository on GitHub.

Experiment with AI Implementation in Flutter

In this article, you've seen four examples of how to use AI in your Flutter app.

You’ve learned:

  • how to create a simple chatbot
  • how to create a text summary
  • how to use AI as a controller
  • how to search in a book

You've also learned how to use the Mistral AI Client to send messages to the chatbot, and how to calculate embeddings and the similarity between a question and book fragments.

Integrating AI into your Flutter app is easier than you think. With cloud AI, you can create more complex and smarter AI functionality in your app. LLMs are powerful tools that can be used in many different ways, as we've shown in the examples. A big part of the work is integrating the AI API, and that's where our package comes in handy.

All of the examples are available on our GitHub.

Enjoy your AI journey! 🚀

