Description
Local Language Practice (LLP) is a desktop application to practice languages through a chat roleplay with local Large Language Models (LLMs).
I wrote a sketch version of it around May 2024, and released an official version on April 16th, 2025.
The application assumes you have an LLM server running on your machine (a deliberate choice), by default on port 8080, though the port is configurable. The main view contains the application's two core features: a chat between two characters in a particular language (the human user playing one, the LLM playing the other), and widgets for translating between that language and English. The default scene is a conversation between two robots in a futuristic world, but custom scenes can also be imported and played out. As of the first release, the supported languages are French, German, Portuguese and Spanish.
I developed LLP as a tool for personal usage, following closely my particular flow for practicing languages. Although it is not intended for use by a general audience, I expected it to be functional and useful from the start, and not merely a proof of concept.
Context
I developed LLP as I was exploring how to build systems using generative AI technologies for the first time. I blogged about the set of four projects that came out of this exercise in my post entitled "First Steps Into AI Engineering". LLP was the third of the four, and the first to work with more than one chat context.
As described in the post just mentioned, for LLP I wanted to write most of the code myself, without relying too much on frameworks, so as to get a better feeling for working with these models. The focus on open, local AI models is also intentional: I want to find out how far we can go working only with models that can be fully personal and owned by their users.
Highlights
LLP was the first of the First Steps Into AI Engineering projects that I intended from the outset as a useful application. While JenAI eventually became my main way of interacting with LLMs, at first it was meant to be just a proof of concept for learning how to handle conversation state; LLP, by contrast, was planned from the start to automate one of my personal flows for learning languages. This made the entire development process very satisfying, as with each new part implemented I could really see the benefits it added. It also made working with user stories and a project board to track development very intuitive and fruitful. While I have previously used LLMs to generate user stories (for SnakeJS in particular), this time I found it more efficient to just write them myself, as I had a very clear idea of everything I wanted in the application and how to communicate it.
The flow I wanted to automate is the one I use, when learning languages, to bridge the gap between self-contained lessons and real-world usage. While I appreciate and enjoy doing lessons, both in applications such as Duolingo and in more traditional textbook format, I find that by themselves they do not really prepare one to use the language being studied in real scenarios. So I tend to complement them with other forms of practice, such as listening to music with lyrics in that language, or reading books written in it. More recently, with LLMs becoming really powerful and popular, I started using frontier models (as they tend to be much more reliable for multilingual usage) to simulate conversations in the languages I study, as a way to get more immersive practice - it has proved quite effective. While the conversation is the biggest part of this practice, I find it useful to also have separate tabs open with translation tools, into which I can paste any part of the conversation I do not feel confident about (either in understanding or in producing) and immediately get a translation. I designed LLP so that I could have both of these in the same view, and never need to leave it for the entire practice session.
A delightful challenge in this project was crafting the prompts to get the LLM to behave exactly as expected for each process. The conversation prompt was the closest to what I had done in the past, especially with JenAI, but it had the added nuances of needing to stay in the specific language being practiced, and of staying coherent as the AI character while also understanding that it should act as a tool for language practice. It took some iterations to get the conversation prompts right, but a combination of a system prompt setting up the scene and the task, plus an extra system prompt (injected into and removed from the conversation history as needed) right before each response, did the trick reasonably well. The prompt for the translation process had to be more specific and less open-ended than the conversation one, and I was able to leverage my learnings from working on Chargen when crafting it. One trick I have found very useful for getting LLMs to generate outputs in a specific format, even when they were not specifically trained for it, is simply to tell them in the prompt that their response will be piped directly to an automated system that will process it, with no human involved - in my experience, this alone prevents most models from adding unnecessary explanations or deviating from the specified format.
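The per-turn system prompt mechanic can be sketched roughly as follows. This is a minimal illustration, not LLP's actual code: the class, record and prompt texts are all hypothetical names I am using for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of injecting a reminder system prompt right before each model
// response, without ever storing the reminder in the conversation history.
public class PromptInjectionSketch {
    record Message(String role, String content) {}

    static final String SCENE_PROMPT =
        "You are Unit-7, a robot in a futuristic city. Reply only in French.";
    static final String REMINDER_PROMPT =
        "Stay in character and keep replying in French, at a learner-friendly level.";

    // Builds the message list sent to the model: the scene-setting system
    // prompt, the whole conversation so far, and the reminder injected last.
    static List<Message> buildRequest(List<Message> history) {
        List<Message> request = new ArrayList<>();
        request.add(new Message("system", SCENE_PROMPT));
        request.addAll(history);
        request.add(new Message("system", REMINDER_PROMPT));
        return request;
    }

    public static void main(String[] args) {
        List<Message> history = new ArrayList<>();
        history.add(new Message("user", "Bonjour! Comment vas-tu?"));

        List<Message> request = buildRequest(history);
        System.out.println(request.size());        // 3: scene + user turn + reminder
        System.out.println(history.size());        // 1: the history itself is untouched
    }
}
```

Because the request list is rebuilt for every turn, the reminder effectively appears "injected and removed" from the model's point of view while the stored history stays clean.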
I am very fond of the feature for loading custom scenes, as it provides a way to explore unique settings without needing to change the source code and generate a new release. Since I consider myself the only audience for the application for now, I am happy to keep it based on manually edited JSON files, instead of creating a separate scene creator app or an in-app feature for this - that might change in the future, if I decide to make a more widely appealing version of the application.
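To give an idea of what such a hand-edited scene might contain, here is an illustrative JSON sketch - the field names and structure are assumptions for the sake of example, not LLP's actual schema:

```json
{
  "language": "French",
  "setting": "A small bakery in Paris on a rainy morning",
  "userCharacter": {
    "name": "Alex",
    "description": "A tourist practicing their French"
  },
  "aiCharacter": {
    "name": "Marie",
    "description": "The friendly baker behind the counter"
  },
  "openingMessage": "Bonjour ! Qu'est-ce que je vous sers aujourd'hui ?"
}
```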
Future Expansions
Saving session state
The main feature I am considering adding is the ability to save practice sessions, conversation history included, to be loaded later. The application can already load custom scenes, but they always start the conversation from scratch. Being able to save a conversation and continue it at a later time should be very useful (as I have already seen with JenAI). I am not sure when I will have more time to work on this, though.
Using an industry-standard request framework
Another change I would like to make is replacing the custom logic for interacting with the LLM server, which I have been reusing and slightly tweaking between each project of this series, with an industry-standard alternative. While the classes I have been using have been a great learning tool, and have not caused any major issues so far, for LLP specifically they are problematic: they constantly break on any slightly more complex character (such as accented vowels), which led me to create helper sanitizing functions that simply replace any problematic character with a safer version whenever possible. For English-only conversations this approach works - but for practicing other languages it mangles the specifics of the language quite badly. Most non-English languages make heavy use of non-ASCII characters, and learning them with an application that simply ignores those characters is obviously a bad idea. I am evaluating alternatives, Spring AI in particular, as a good replacement.
Adding new languages
Adding new languages is also something I will quite probably do in the future, mostly whenever I decide to pick up another language to learn. I have decided to hard-code the supported languages in the application, and I do not intend to change this. Given that I consider myself the only audience for the app, adding a new language in code whenever I need it is much less hassle than making the list dynamic in the first place.
Generally usable version with another framework
Of all the projects in the First Steps Into AI Engineering series, I see LLP as the one with the most potential to be useful to a large audience. However, I do not think an application built on top of Swing is viable for a modern audience in 2025. So one of the things I might do in the future (although I do not currently see it as a priority) is to create a new application with the same features as LLP, but built on a more modern framework (and probably in a different, more frontend-friendly language as well). This would give me an opportunity to recreate several parts of the application in a more abstract and dynamic manner.
Custom scene creator
Finally, the last expansion I imagine for LLP is a system (either in-app or as a separate project) for creating custom scenes. Currently this depends on manually editing JSON files - which is perfectly fine for me, but would be irritating for anyone else. If I ever decide to create a version for general use, this will definitely be in scope, but for now I have no plans to implement it.
Setup
LLP was the first of the First Steps Into AI Engineering projects that I wrote from the start with the intention of using as a portfolio project. Although I started this series as a space for exploration and learning, over time the projects acquired a level of scope and complexity that made me elevate them into more serious projects (although still not intended for a general audience). Due to this, I set up all of the foundations I use for portfolio projects.
I use GitHub Actions to generate a new release for LLP whenever new code that alters relevant core files of the project is pushed to the main branch.
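A workflow like that can be triggered with a path filter on the push event. The sketch below is illustrative only - the paths, Java version and release step are assumptions, not LLP's actual configuration:

```yaml
name: Release
on:
  push:
    branches: [main]
    paths:                 # only fire when core files change (illustrative paths)
      - 'src/main/**'
      - 'pom.xml'
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '21'
      - run: mvn --batch-mode package
      - uses: softprops/action-gh-release@v2   # publish the built jar as a release
        with:
          tag_name: v${{ github.run_number }}  # hypothetical versioning scheme
          files: target/*.jar
```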
Beyond the usual readme, the documentation includes a changelog file, a file with contributing guidelines, and an architecture.md file (an idea I adapted from this great article).
I also included automated tests for everything except the UI code. As of version 1.1.0, all non-UI packages had more than 75% method test coverage. I made extensive use of Test-Driven Development (TDD) throughout the project.
Links
Source code: GitHub
Executable: Releases