Sunday, August 31, 2025

Monthly Recap - 2025-07 July

July was entirely a vacation month. I took a month off work before switching areas inside the company, and used this opportunity to also disconnect from software development as a whole for a while. After a very exhausting period working on legacy code, methodology and mindset, I really needed this time off.

This month's recap will be a short one, but nevertheless here's what went on.


Achievements

Vacations

As mentioned in the introduction, I was on vacation for all of July. I used this time mostly to pursue other interests, such as music (I found a new favorite YouTube genre: music reaction videos) and gaming (though I was not able to finish my Baldur's Gate 1 run). It was the first time in over 2 years that I spent a significant amount of time not actively developing anything, which felt both weird and necessary.


Personal studies

Despite being on vacation, I did not stop my personal studies habit. I actually started three new books this month:

For career studies, I picked up Fundamentals Of Software Architecture. I have been working on several personal projects for a while now, in addition to everything I have done professionally, so I thought it was well past time to start getting some grounding in software architecture. I am mostly interested in evolutionary architecture (for a plethora of reasons that there is no need to get into here), but I felt like I could use a more basic foundation first.

For hobby studies, I picked up Rich Dad, Poor Dad. I should have started digging into financial education ages ago; for well over 4 years I have been postponing it until I felt financially stable enough to benefit from it. No more procrastinating here!

For technical studies, I picked up AI Engineering. It has been a little over a year since I started focusing on AI Engineering in my side projects, and I am also very interested in the field as a whole. I felt so inclined to get a stronger grounding in it that I added a third type of studies to the usual two (career and hobby). I am just hoping that it does not get too overwhelming, but it really feels like the right time to get more serious about this stuff.


Downpoints

Not much coding

I do not have many downpoints to mention about July. Only that, since I disconnected from the software development world for this period, it did feel particularly unproductive. Not writing much code gave me time to do other things I am passionate about, and I did enjoy it, but the itch to go back is very strong.


Plans for next month

In August I will be starting in a new area at the company. I am very excited about its goal! I expect to be fully invested in getting acclimated to it for a few months.

Because of that, I will probably spend most of August, if not all of it, learning the base stack of this new area and adjusting to this new period. It should be an incredible ride!


Thursday, July 31, 2025

Monthly Recap - 2025-06 June

June was a very atypical month, mostly driven by transitions. In my personal work I moved on from the previous set of AI engineering projects to exploring other ideas, and in my career I went through the process of joining another team inside my company, one much more aligned with my current vision and goals. Here's a quick summary.


Achievements


LLP version 1.1.1 Release 

Local Language Practice (LLP) is a desktop application to practice languages through contextual chat roleplay with local AI models. I first released this project back in April. Part of my First Steps Into AI Engineering series, I consider it my main software project of the first half of 2025. In June, I released version 1.1.1, which fixed a bug that prevented the application from being launched from the JAR file and added more sanitization cases for the messages. I also took the opportunity to update the Readme, making gemma3 the default model in the examples instead of llama3.1.

With regards to software development, I have been taking some time off from major public releases, as I am focusing more on exploring some ideas on my own. I expect this to last a few months, and I will probably only return to major public development in the last quarter of 2025. I am very happy to have this pause, as I need some time to just test out several approaches and see what works and what does not, without any commitment to sticking with the code. I am mostly still staying in the space of generative-AI-enabled applications, though.


Personal studies

I was able to continue and make great progress in my personal studies habit. For career studies, I finished Building Successful Communities Of Practice, which offered great insight into an area that I have been more and more involved with in the recent past. And for hobby studies, I finished The Prompt Report, a nice overview and catalogue of techniques for prompting generative AI models - I have been following the AI field with great interest since 2023, and this was great material for shedding some light on the topic and learning a few new tricks.


Blog post about LicLacMoe

Although I released LicLacMoe (a simple tic-tac-toe game in which you play matches against LLMs without any conversational interface) back in early May, I only got around to writing the blog post about it in June, which shows how busy and chaotic my routine has been. Better late than never, I guess!


Internal move

As mentioned at the start of the post, during June I went through the process of applying to and being accepted at a different team inside my current company. I am very, very happy about this change, as it allows me to work more closely aligned with the company's global vision, and on internal developer experience tools. I expect to gain a lot of knowledge and insights about how to build great tools for developers out of this experience, but we will see how it goes.


Downpoints


TDC with little participation

The second edition of The Developers Conference (TDC) took place in June. Although I was very optimistic about it, mainly because it was a full-fledged edition (instead of the "AI Summit" format, which I have already complained about in a previous post), I was so involved in my daily activities at work (the entire interviewing process also taking its toll, of course) that I was not able to attend many sessions. I still plan to go over the recordings in the near future.

The session that I felt was most productive for me was one about the product development process of a Brazilian company that develops video games. I am very intrigued by the prevalence of waterfall-like processes and cycles in that industry (AAA games often take half a decade or more before ever making it to players' hands), and this has been on my mind for quite a while now. It might be something I will look into further in the future, as someone who both loves games and believes deeply in Agile for any kind of creation.


Internal move in effect only in August

Another negative point was that, although my move to the new team at my company was approved, it only takes effect from August onwards. That meant waiting about a month and a half between receiving the approval and actually moving. Definitely a longer wait than I would have liked, but, once again, better late than never!


Plans for next month


Vacations

I will spend pretty much all of July on vacation. Since I had a lot of available days piled up (which HR would soon kindly ask me to use), and it would not make much sense to start new things considering I would move teams soon, I decided to take several weeks off and start fully charged at the new position. I intend to dedicate most of this time to gaming (I am horribly late on my backlog of games, to be honest) and music, though I never really stop studying and coding in my free time.


Next portfolio project

Speaking of coding, I plan to start working on my next portfolio project as well. I expect to go several months purely exploring ideas and building little sketches for independent parts of it before starting the "official" project, so it will definitely not be anything public for now. This new project should be roughly the same size as my last one (LLP), and will also be focused on applying LLMs to a specific problem (I already have the scope and initial set of features in mind). I expect it to be quite fun, and very useful to myself - while hopefully being insightful for anyone who is also interested in generative AI as an enabler for our future activities. I will share more about it in the near future.


Monday, June 30, 2025

Monthly Recap - 2025-05 May

May was a very intense and productive, if not always positive, month. I finished a series of projects that took me almost a year, was able to restart some of my habits and had some vacation (with mixed results). Here's a summary of what happened.


Achievements

LicLacMoe

LicLacMoe is a desktop application to play tic-tac-toe against a local LLM model. It is the fourth and final project in my First Steps Into AI Engineering series. I have written a blog post about it, Project: LicLacMoe. This was a much more laid back and casual project than the one before it (Local Language Practice, LLP), and I both had fun and achieved some cool insights while developing it. These were its releases in May:

  • 1.0.0: Initial version, containing the basics to play matches.
  • 1.0.1: AI player chooses move asynchronously.
  • 1.1.0: Support for reasoning models and verbose mode to log full response from model.


Finished First Steps Into AI Engineering series

With the release of LicLacMoe, I finished the scope I had in mind for my First Steps Into AI Engineering series. I started planning and implementing this series way back in August of 2024, so it took me almost a year to finish. That was somewhat surprising; I initially thought it would take only a few months, around 4 or so. Despite taking longer than expected, it was a thoroughly enlightening project, and I enjoyed each part of it. I feel like it gave me a much stronger grounding in working with generative AI models, and paved the way for much more ambitious projects in the future.

In this series, I focused on writing most of the code myself, while avoiding popular frameworks focused on interacting with generative AI models. Now that this goal has been accomplished, I feel comfortable starting to use frameworks, without becoming too dependent on them or treating them as a magic black box.


Started studying Spring AI

As a result of what I just mentioned, I started exploring one of the popular frameworks for generative AI development: Spring AI. It is part of an ecosystem that I am very familiar with, so it seems like a logical next step. I will experiment and try to create a few projects with it, before also trying out other Java frameworks and the ecosystems of other programming languages.
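
To register the starting point of these experiments, here is a minimal sketch of the kind of call I have in mind, using Spring AI's ChatClient fluent API (assuming a Spring Boot application where one of the Spring AI starters auto-configures a chat model against a local server; the class name and prompts are only illustrative):

  import org.springframework.ai.chat.client.ChatClient;
  import org.springframework.stereotype.Service;

  // Minimal sketch: Spring Boot injects a ChatClient.Builder when a Spring AI
  // starter (e.g. the Ollama or OpenAI-compatible one) is on the classpath.
  @Service
  public class MoveSuggester {

      private final ChatClient chatClient;

      public MoveSuggester(ChatClient.Builder builder) {
          this.chatClient = builder
                  .defaultSystem("You are a tic-tac-toe player. Answer only with your move.")
                  .build();
      }

      public String suggestMove(String boardState) {
          // Sends the prompt to the configured model and returns the raw text answer.
          return chatClient.prompt()
                  .user("Current board:\n" + boardState + "\nYour move?")
                  .call()
                  .content();
      }
  }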

Going forward, I expect to shift my time balance to once again invest more time in courses and less in building projects. In the past, I leaned completely towards courses and almost never built anything myself; since around the first months of 2024, I shifted to investing my time exclusively in building things. Now it is time to balance both. I know for sure I will not stop building new projects, as I have a huge backlog of ideas (even if I get no new inspiration for the next 5 years, I think I have enough to keep working on them).


Restarted personal studies

Another nice point of the month was getting back on track with my personal studies habit. I had paused my career studies in April, and had not done any hobby studies since December of last year. In May, I was able to pick both back up.

I picked some very short books and papers, which allowed me to move fast and achieve some accomplishments quickly. This was a very good morale boost!

For hobby studies, I started and finished the paper Future Brains (purely for intellectual curiosity), while for career studies I started the book Building Successful Communities Of Practice, which was useful for my current job.


Downpoints

I had almost no downpoints in my personal space. Everything went smoothly, and I was able both to finish long-standing projects and to restart some of the things I enjoy.

However, professionally it was a challenging time. Even though I took some days off, I ended up having to work on a few of them, and in general several changes were made in a way that I disagree with, which caused me some frustration. Just a reminder that not every day is a perfect day.


Plans for next month

TDC

June will have the second edition of The Developers Conference in 2025. While I have been less motivated about the event since its change to the "AI Summit" format, this one should be good, as it will be the first of the year in the full, 3-day-long, multi-track format. It will happen in Florianópolis, and so I plan to attend it remotely.


Deeper exploration of AI Engineering development

With the First Steps Into AI Engineering series complete, in June I plan to start going deeper into some AI Engineering projects and ideas. I don't have anything I can share right now, but I have plenty that I expect to get done along the year, and I will write about it as I finish each project.


Project: LicLacMoe

Description

LicLacMoe is a desktop application that allows you to play tic-tac-toe against local Large Language Models (LLMs).

I released the first official version on May 4th, 2025. The name, of course, is a play on tic-tac-toe, replacing the first letter of each word with the initials of "large language model".

The application assumes you have an LLM server running on your machine (this is a deliberate choice), by default on port 8080, though that is configurable. It presents the player with a visual tic-tac-toe grid that can be used for playing - once the player makes their move, a call to the local LLM server is made with the current state of the match, so that the LLM can pick the next move of the AI opponent. The entire interaction with the LLM is through playing, with no conversational interface.
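
To illustrate that flow, here is a rough sketch of such a request in Java, assuming the local server exposes an OpenAI-compatible /v1/chat/completions endpoint (llama.cpp's llama-server, for instance, serves one on port 8080 by default); the endpoint path, JSON shape and class name are assumptions for this sketch, not necessarily what LicLacMoe actually uses:

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  public class LocalLlmMoveClient {

      private final HttpClient http = HttpClient.newHttpClient();
      private final String endpoint;

      public LocalLlmMoveClient(int port) {
          // Assumes an OpenAI-compatible server running locally (e.g. llama-server).
          this.endpoint = "http://localhost:" + port + "/v1/chat/completions";
      }

      public String requestMove(String boardState) throws Exception {
          // NOTE: in real code the strings must be JSON-escaped; unescaped quotes
          // or line breaks in the board text would break the request body.
          String body = "{\"messages\":["
                  + "{\"role\":\"system\",\"content\":\"You play tic-tac-toe as O. Reply only with the cell to play, e.g. B2.\"},"
                  + "{\"role\":\"user\",\"content\":\"Current board: " + boardState + " Your move?\"}]}";
          HttpRequest request = HttpRequest.newBuilder()
                  .uri(URI.create(endpoint))
                  .header("Content-Type", "application/json")
                  .POST(HttpRequest.BodyPublishers.ofString(body))
                  .build();
          HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
          return response.body(); // JSON containing the model's reply; parsing is omitted here
      }
  }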

I developed LicLacMoe as a way to explore using LLMs in a way that does not involve any conversation between the user and the AI system. Chatbots have become almost synonymous with LLMs, in large part due to how they were popularized, so it was an interesting experiment to use them in a completely different manner.


Context

I developed LicLacMoe as I was exploring how to build systems using generative AI technologies for the first time. I blogged about the set of 4 projects that came out of this exercise in my post entitled "First Steps Into AI Engineering". LicLacMoe was the fourth and last of these, and the most purely exploratory one.

As described in the post just mentioned, for LicLacMoe I wanted to write most of the code myself, without relying too much on frameworks, so as to get a better feel for working with these models. It is also intentional that I explore only open and local AI models, as I want to find out how far we can go working only with models that can be fully personal and owned by their users.


Highlights

Interacting with LLMs without chatting

LLMs have caught our full attention due to their uncanny ability to behave like a human being in a conversation. However, the big question on everyone's minds has been whether the models actually have some degree of reasoning intelligence, or whether they are just really good at reproducing our patterns of communication (the obvious philosophical question must be mentioned, of course: "could it be that there is no difference?"). To a certain degree, the appearance of reasoning models, and the current trend of agentic AI, have shown that LLMs can definitely be leveraged for some amount of reasoning intelligence, but at a larger scale the question still remains. My first intent in creating an application that uses LLMs without any chat interface was to see how it would feel to use LLMs purely as a source of thinking, without any verbal communication. While the long time it takes to generate an answer can be a bit frustrating, overall it was a positive experience - it is a really, really weird way of interacting with a computer system.

My second intent was simply to get used to incorporating LLM answers into a bigger system, as part of the user interface. I honestly think chatting (especially when you have to type long messages) is not the best interface for any complex computer system, far from it. In order to make full use of the potential that generative AI systems have, we must learn to incorporate them seamlessly into our flows, and that includes into our computerized applications. This was just a first step in that direction; I have several other ideas I want to explore further in this regard.


Not needing to code game rules and strategy

As mentioned previously, using LLMs for pure intelligence is a really weird experience. One of the weirdest parts was that, in order to implement LicLacMoe, I did not have to implement a strategy that knew the rules of tic-tac-toe at all. I still implemented the logic of the game in order to verify the end result of matches, but I think with a little more development time I could have even replaced that with well-crafted prompts.

I am sure that this was in large part due to tic-tac-toe being an extremely simple and popular game. It is reasonable to assume that most (if not all) models will have seen enough examples of matches and descriptions of the game to have memorized a pretty good understanding of how to play it. The same would most likely not be the case for more complex games - I find it very interesting to think about how complex a game can be and still be taught to an LLM simply by feeding it enough examples.

Regardless, it felt very odd to rely on a system that "just knew" the rules, and to which I could just feed the current state of the match and it would produce a next move. Of course, it would not always be a valid move (error handling and retry policies were more essential here than in any other LLM-based system I have implemented so far), nor a particularly brilliant one. But even small models would consistently give something workable in a reasonable amount of time (and retries).
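
To give an idea of what that looks like in practice, here is a minimal retry sketch - not LicLacMoe's actual code, just an illustration of validating the model's answer against the board and asking again when it is invalid (the Board interface and helper names are hypothetical, and LocalLlmMoveClient is the sketch from the description above):

  import java.util.Optional;
  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  public class MoveRetryPolicy {

      // Hypothetical view of the game board, standing in for the real game logic.
      public interface Board {
          String asText();
          boolean isFree(String cell);
      }

      private static final int MAX_ATTEMPTS = 3; // illustrative limit
      private static final Pattern CELL = Pattern.compile("[A-C][1-3]");

      public Optional<String> nextAiMove(LocalLlmMoveClient client, Board board) throws Exception {
          for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
              String answer = client.requestMove(board.asText());
              Optional<String> move = parseCell(answer);
              // Only accept the move if it names an empty cell on the current board.
              if (move.isPresent() && board.isFree(move.get())) {
                  return move;
              }
          }
          return Optional.empty(); // caller decides what to do after repeated failures
      }

      private Optional<String> parseCell(String answer) {
          // Looks for a coordinate such as "B2" anywhere in the model's reply.
          Matcher matcher = CELL.matcher(answer.toUpperCase());
          return matcher.find() ? Optional.of(matcher.group()) : Optional.empty();
      }
  }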


Reasoning vs non-reasoning models

This leads into the final interesting note. While testing the application, I found that non-reasoning models would mostly generate moves that looked a bit random, and could very easily be defeated. I had to make some changes to the logic that parses the answer from the LLM in order to support reasoning models - however, switching to these models drastically improved the performance of the AI player. I tested it with Qwen 3, 8B parameters, 8-bit quantization - a rather small model as far as LLMs go. In comparison, the non-reasoning model I used was Gemma 3, 27B parameters, 8-bit quantization, a model more than 3 times the size. While I have never been a huge fan of reasoning models (for my common use cases they usually don't offer much improvement, and are considerably slower), in this particular case it was easy to see the value that such models bring.
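
For reference, the parsing change amounts to something like the sketch below, assuming the reasoning model wraps its thinking in <think>...</think> tags (as Qwen 3 does when served through common local runtimes); this is an illustration, not the exact code in LicLacMoe:

  public final class ReasoningAnswerParser {

      private ReasoningAnswerParser() {
      }

      // Removes any <think>...</think> block so that only the final answer
      // (the chosen move) is passed on to the move parser.
      public static String stripReasoning(String rawAnswer) {
          return rawAnswer
                  .replaceAll("(?s)<think>.*?</think>", "")
                  .trim();
      }
  }

For example, stripReasoning("<think>the center is taken, a corner is best</think> B1") would return just "B1".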


Future Expansions

Benchmark of performances

As mentioned before, while testing the application I used a non-reasoning 27B-parameter model (which had bad performance) and an 8B reasoning model (with significantly better performance). One thing I would like to do, if I ever have the time, is compile a more comprehensive list of the performance of several models of different families and sizes. I would be especially interested in seeing how small we could go with a reasoning model and still have it able to avoid defeat in most matches. I would be pleasantly surprised if this is possible with a model smaller than 4B.


Induce reasoning for non-reasoning models

Another interesting exploration would be to craft the base prompt so that even non-reasoning models think about the current state first before choosing a move. This is easily done with very popular techniques to force step-by-step thinking. It would involve changing the base prompt and possibly the parsing of the response as well. This could then be compared with the performance improvement gained when switching to a reasoning model, to see whether the reasoning training these models receive actually gives them an advantage or not.
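
A minimal version of that prompt change could look like the snippet below - a plain chain-of-thought-style instruction, with the final move prefixed by a marker so the parser can still find it; the exact wording and marker are assumptions for illustration only:

  public final class Prompts {

      private Prompts() {
      }

      // System prompt nudging a non-reasoning model to think step by step,
      // while keeping the final answer easy to extract programmatically.
      public static final String STEP_BY_STEP_SYSTEM_PROMPT = """
              You play tic-tac-toe as O.
              First, think step by step about the current board: list the free cells,
              check whether you can win this turn, and check whether X must be blocked.
              Then, on the last line, write only: MOVE: <cell>, for example MOVE: B2.
              """;
  }

The response parser would then ignore everything except the final "MOVE:" line.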


Model vs model

Finally, the last extension I might make is to change the game to support an AI vs AI mode, with two LLMs playing against each other. This could then allow tournaments to be played, and metrics to be gathered as to which models perform better against each other. It would be a nice and fun addition, but it probably won't be a priority for me any time soon.


Setup

Although LicLacMoe is not one of my portfolio projects (which have a fixed set of quality standards I expect to maintain through their entire lifecycle), I did configure most of the foundations I use for those.

I use GitHub Actions to generate a new release for LicLacMoe whenever new code is pushed into the main branch and alters relevant core files of the project.

I have a changelog file, a file with guidelines about contributing and an architecture.md file (an idea I adapted from this great article), beyond the usual readme file as documentation.

As this project was done with an exploratory, proof-of-concept approach, I did not include automated tests. This is the main departure from the quality standards I expect of my portfolio projects.


Links

Source code: Github

Executable: Releases


Saturday, May 31, 2025

Monthly Recap - 2025-04 April

April was a very busy month with a lot of big things going on, both in my professional life and in personal projects. Especially in my daily job, I was finally able to achieve significant milestones of which I am very proud. Although I did not have much time for studies, I also accomplished some important things in my coding projects. Here is a short summary.


Achievements

Open Source Champion

Since late last year, I have been a local Open Source Champion in my company. Although I had a lot of onboarding to do, and it took a long time to gain traction in the position (in great part because it is only a side activity in my job, expected to take about 10% of my working time), this month it was finally announced through official channels that another colleague and I would act as champions of this topic in our location.

Open Source is a topic that I have always been very passionate about - from the very first time I joined my current company (back in 2018), I fought to stay in the "open" part of it, and to incentivize the usage of open software whenever possible. When I left the company (December of 2020), this was still a difficult position, but upon returning (June 2024), I found a much more mature and progressive mindset in the company. So it was a huge pleasure, and a validation of so many years of hard work, to be able to step into this role. As an Open Source champion, my goal is to foster the usage of open source technology, clarify questions about the company's guidelines and policies on the topic, and help build a strong community dedicated to it in my location.

The official announcement could not have been better: we announced it while delivering a talk about the topic at the local instance of the company's largest internal conference focused on development topics!


Innoweeks

April saw the start of the yearly edition of an innovation event that I am very fond of: the Innoweeks. I had participated in this event twice before, in 2019 and in 2024. On both occasions, I was able to meet and work with brilliant colleagues from different areas of the company, which was an amazing boost to my own self-development. I always participated in the developer role.

In this year's edition, I will be participating as a dev lead, a new role which carries much more responsibility than the one I am used to. This will be an exciting opportunity to be more involved with the high-level decisions of the project, and to help steer the development team towards its best version.


LLP releases

Local Language Practice (LLP) is a desktop application to practice languages through a roleplay conversation with a local LLM model. It is the third project in my First Steps Into AI Engineering series. I have written a blog post about it, Project: Local Language Practice (LLP). I had been working on it since January, and in April I was finally able to make its first release. These were the releases of LLP in April:

  • 1.0.0: Initial version, containing roleplay chat with a default scene and translation feature.
  • 1.1.0: Added feature to load custom practice scenes.


Next First Steps Into AI Engineering project

After releasing LLP, I was able to start working on the fourth and final project of the First Steps Into AI Engineering series. I have been working on this series since May of last year, so it is very exciting to be so close to finishing all I had planned for it!

I plan to release this final project in May, and will give more details soon.


Downpoints

No progress on personal studies

I have long been unable to make much progress on my hobby personal studies. In April, unfortunately, I was also unable to make any progress on my career personal studies.

Although it is sad not to be able to progress on anything in my personal studies habit, this time it really is for a good cause: the Open Source presentation I mentioned earlier, together with the Innoweeks work and personal project development, has filled all of my (pseudo-)free time. I expect to be able to get back to this in May.


Plans for next month

Finish First Steps Into AI Engineering series

In May, I expect to finish the final project I have for the First Steps Into AI Engineering series, which I started in April. This will be a truly major milestone for me, as I have been working on this series for about a year now and have learned an immense amount from it!

With the series finished, I will switch to investing more time in studying what other people have been putting out about generative AI and developing with it. Now that a few years have gone by since the field's initial boom, there has been enough time for us as a community to get a grip on it and start organizing our thoughts and practices a little, so I am excited to see how other people have been approaching this.


Innoweeks

May will be a very busy month (as usual), especially due to the Innoweeks final stretch. I fully expect to spend several nights working feverishly on new features and bug fixes for the project, in order to have a good and fully functional live demo for the final presentation.


Restart personal habits

Finally, I intend to get back to full steam on my personal habits in May. After a very, very tough, busy and exhausting stretch since the start of this year (mainly from my day job activities, with the added factor of Innoweeks), I have a couple of weeks of vacations scheduled. I expect to invest the first few days in full recovery mode, resting and sleeping a whole damn lot! But then I expect to have at least one full week to get back to my normal routine, returning to habits (such as book studies, gym, etc.) that I have had to leave in the background for a while this year.

While I have learned and achieved a lot since January, I really feel it is time to get back to a healthy and sustainable routine. The second part of May will be fully dedicated to reconnecting with myself and the life I chose to live.

Project: LLP (Local Language Practice)

Description

Local Language Practice (LLP) is a desktop application to practice languages through a chat roleplay with local Large Language Models (LLMs).

I wrote a sketch version of it around May of 2024, and released an official version on April 16th, 2025.

The application assumes you have an LLM server running on your machine (this is a deliberate choice), by default on port 8080, though that is configurable. There are two main features in the application, both present in the main view: a chat between two characters in a particular language (the human user acting as one, the LLM acting as the other), and widgets for translating back and forth between that language and English. The default scene is a conversation between two robots in a futuristic world, but custom scenes can also be imported to be played out. As of the first released version, the supported languages are French, German, Portuguese and Spanish.

I developed LLP as a tool for personal usage, following closely my particular flow for practicing languages. Although it is not intended for use by a general audience, I expected it to be functional and useful from the start, and not merely a proof of concept.


Context

I developed LLP as I was exploring how to build systems using generative AI technologies for the first time. I blogged about the set of 4 projects that came out of this exercise in my post entitled "First Steps Into AI Engineering". LLP was the third of these 4, and the first to work with more than one chat context.

As described in the post just mentioned, for LLP I wanted to write most of the code myself, without relying too much on frameworks, so as to get a better feel for working with these models. It is also intentional that I explore only open and local AI models, as I want to find out how far we can go working only with models that can be fully personal and owned by their users.


Highlights

LLP was the first of the First Steps Into AI Engineering projects that I originally intended as a useful application. While JenAI eventually became my main way of interacting with LLMs, at first it was going to be just a proof of concept to learn how to handle conversation state - LLP, on the contrary, was planned from the start to automate one of my personal flows for learning languages. This made the entire development process very satisfying, as with each new part implemented I was able to really see the benefits it added. It also made working with user stories and a project board to track development very intuitive and fruitful. While I had previously used LLMs to generate user stories (for SnakeJS in particular), this time I found it more efficient to just write them myself, as I had a very good idea of everything I wanted in the application and how to communicate it.

The flow I wanted to automate with the application is the one I use to bridge the gap between self-contained lessons and real-world usage, when learning languages. While I appreciate and enjoy doing lessons, both in applications such as Duolingo and in more traditional textbook format, I find that by themselves they do not really prepare one to actually use the language being studied in real scenarios. So I tend to complement them with other forms of practicing the language, such as listening to music with lyrics in that language, or reading books written in it. More recently, with LLMs becoming really powerful and popular, I started using the frontier models (as they tend to be much more reliable for multilanguage usage) to simulate conversation in the languages I study, as a way to get a more immersive practice - it has proved to be quite effective. While the conversation is the biggest part of this practice, I find it useful to also have separate tabs open with translation tools, in which I can paste any part of the conversation which I do not feel confident about (either in understanding or in creating) and immediately get a translation. I designed LLP so that I could have both these things in the same view, so that I would never need to leave it for the entire practice session.

A delightful challenge in this project was crafting the prompts to get the LLM to behave exactly as expected for each process. While the conversation prompt was the closest to what I had done in the past, especially with JenAI, it also had the nuance of needing to stay in the specific language being practiced, and of staying coherent with the AI character while also understanding it should act as a tool for language practice. It took some iterations to get the conversation prompts right, but a combination of a system prompt setting up the scene and the task, and a new system prompt (injected into and removed from the conversation history as needed) right before each response did the trick reasonably well. The prompt for the translation process had to be more specific and less open-ended than the conversation one, and I was able to leverage my learnings from working on Chargen when crafting it. One of the tricks I have found very useful to get LLMs to generate outputs in a specific format, even when they were not specifically trained for that, is to just tell them in the prompt that their response will be piped directly to an automated system that will process it, without any human being involved - in my experience, this alone seems to prevent most models from adding unnecessary explanations or deviating from the specified format.
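
As a rough illustration of that approach (the wording and structure below are assumptions, not LLP's actual prompts), the message list for a translation request might be assembled like this:

  import java.util.List;

  public final class TranslationPromptBuilder {

      // Simple role/content pair, standing in for whatever message type the real code uses.
      public record Message(String role, String content) {
      }

      private TranslationPromptBuilder() {
      }

      // Builds the messages for a translation request. The "piped to an automated
      // system" instruction is the trick described above: it discourages the model
      // from wrapping the translation in explanations or extra formatting.
      public static List<Message> forTranslation(String text, String sourceLanguage, String targetLanguage) {
          String system = "You are a translation component inside an application. "
                  + "Your response is piped directly to an automated system, with no human reading it. "
                  + "Reply with the translated text only, nothing else.";
          String user = "Translate the following text from " + sourceLanguage
                  + " to " + targetLanguage + ":\n" + text;
          return List.of(new Message("system", system), new Message("user", user));
      }
  }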

I am very fond of the feature for loading custom scenes, as it provides a way to explore unique settings without needing to change the source code and generate a new release. Since I consider myself the only audience for the application for now, I am happy to keep it based on manually customizing JSON files, instead of creating a separate scene-creator app or an in-app feature for this - that might change in the future, if I decide to make a more widely appealing version of the application.
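
Purely as an illustration of that manual-JSON approach (the field names here are hypothetical, not LLP's real schema), loading a scene file can be as simple as mapping the hand-written JSON onto a small class:

  import com.google.gson.Gson;
  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;

  public final class SceneLoader {

      // Hypothetical scene structure: the real LLP format may differ.
      public static class Scene {
          String title;
          String language;
          String setting;
          String userCharacter;
          String aiCharacter;
          String openingLine;
      }

      private static final Gson GSON = new Gson();

      private SceneLoader() {
      }

      public static Scene load(Path sceneFile) throws IOException {
          // Reads the JSON file written by hand and maps it onto the Scene class above.
          return GSON.fromJson(Files.readString(sceneFile), Scene.class);
      }
  }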


Future Expansions

Saving session state

The main feature I am considering adding is the ability to save practice sessions with the conversation history, to be loaded later. The application already has the capability to load custom scenes, but they always start the conversation from scratch. Being able to save a conversation to continue at a later time should be very useful (as I have already seen with JenAI). I am not sure when I will have more time to work on this, though.

Using industry standard request framework

Another change I would like to make is to replace the custom logic for interacting with the LLM server, which I have been reusing and slightly tweaking and improving between each project of this series, with something more industry standard. While the classes I have been using have been great for me as a learning tool, and have not so far caused any major issues, for LLP in particular they are problematic: they constantly break when using any slightly more complex character (such as accented versions of vowels), which led me to create helper sanitizing functions that just remove any problematic character and replace it with a safer version whenever possible. For English-only conversations, this approach works - but for practicing other languages it messes up the specifics of the language quite badly. Most non-English languages make heavy use of non-ASCII characters, and learning them with an application that just ignores these characters is obviously a bad idea. I am looking at alternatives, Spring AI in particular, as a good replacement to improve this.
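
For what it is worth, one way to keep hand-rolled request code working with accented characters, rather than stripping them, is to escape every non-ASCII character as a \uXXXX sequence when building the JSON body. A minimal sketch of that idea (not LLP's current code):

  public final class JsonText {

      private JsonText() {
      }

      // Escapes a string for safe inclusion in a JSON body: quotes, backslashes,
      // control characters and any non-ASCII character (e.g. "é", "ã", "ü")
      // become escape sequences, so accented text survives the request intact.
      public static String escape(String text) {
          StringBuilder out = new StringBuilder(text.length());
          for (int i = 0; i < text.length(); i++) {
              char c = text.charAt(i);
              switch (c) {
                  case '"' -> out.append("\\\"");
                  case '\\' -> out.append("\\\\");
                  case '\n' -> out.append("\\n");
                  case '\r' -> out.append("\\r");
                  case '\t' -> out.append("\\t");
                  default -> {
                      if (c < 0x20 || c > 0x7E) {
                          out.append(String.format("\\u%04x", (int) c));
                      } else {
                          out.append(c);
                      }
                  }
              }
          }
          return out.toString();
      }
  }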

Adding new languages

Adding new languages is also something I will quite probably do in the future, mostly whenever I decide to pick up another language to learn. I have decided to hard-code the supported languages in the application, and I do not intend to change this in the future. Given that I consider myself the only audience for the app, adding a new one in code whenever I need to is a much smaller hassle than making the list dynamic in the first place.

Generally usable version with another framework

Of all projects in the First Steps Into AI Engineering series, I see LLP as the most potentially useful for a large audience. However, I do not think that an application built on top of Swing is viable for a modern public, in 2025. So, one of the things I might do in the future (although I am not currently seeing this as a priority) is to create a new application with the same features as LLP, but coded in a more modern framework (probably with a different, more frontend-friendly language as well). This would give me an opportunity to recreate several parts of the application in a more abstract and dynamic manner.

Custom scene creator

Finally, the last expansion that I imagine for LLP would be a system (either in-app or as a separate project) to create custom scenes. Currently, this depends on manually editing JSON files - which is perfectly fine for me, but would be irritating for anyone else. If I decide to create a version for general use, this will definitely be in scope, but for now I do not have plans to implement it.


Setup

LLP was the first of the First Steps Into AI Engineering projects that I wrote from the start with the intention of using as a portfolio project. Although I started this series as a space for exploration and learning, over time the projects acquired a level of scope and complexity that made me elevate them into more serious projects (although still not intended for a general audience). Due to this, I configured all of the foundations I use for portfolio projects.

I use GitHub Actions to generate a new release for LLP whenever new code is pushed into the main branch and alters relevant core files of the project.

I have a changelog file, a file with guidelines about contributing and an architecture.md file (an idea I adapted from this great article), beyond the usual readme file as documentation.

I also included automated tests for everything except the UI code. As of version 1.1.0, all non-UI packages had more than 75% method test coverage. I made extensive use of Test-Driven Development (TDD) while building the project.


Links

Source code: Github

Executable: Releases


Thursday, May 1, 2025

Monthly Recap - 2025-03 March

March came with a slight improvement in my work on personal projects. Although I still had to work overtime on several days, and had to skip working on personal projects because of that, I was able to get more done this month. Here is a short summary.


Achievements

JenAI

JenAI is a CLI frontend for chatting with local LLMs that I have been developing (and using) since last year (I wrote about it in a previous blog entry). Every now and then a new model comes up that requires changes, usually because it makes heavy use of a problematic character or special token that needs special treatment to avoid breaking requests. That was the case with Google's gemma3, which is actually one of the best open, local models of the current generation.

In order to support gemma3, I released two new versions of JenAI in March: 1.7.2 and 1.7.3, both with some minor adjustments to the sanitization logic to prevent gemma3's quirks from breaking everything.
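
To give a flavor of what such an adjustment looks like (an illustration, not JenAI's actual code), sanitization here mostly means neutralizing the model's special chat-template tokens before its text is reused in a new request - gemma3, for instance, delimits turns with <start_of_turn> and <end_of_turn> markers:

  public final class ModelOutputSanitizer {

      private ModelOutputSanitizer() {
      }

      // Removes chat-template control tokens that some models (e.g. gemma3's
      // <start_of_turn>/<end_of_turn>) may echo back, so they do not end up
      // inside the next request and confuse the server-side template.
      public static String sanitize(String modelOutput) {
          return modelOutput
                  .replace("<start_of_turn>", "")
                  .replace("<end_of_turn>", "")
                  .trim();
      }
  }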

Despite being a very simple application, and not being targeted at anyone else but myself, JenAI has become my go-to choice for daily interaction with LLMs. The beauty of building your own tools!


Finished The Captain Class

Since the start of this year, I had been working through the book The Captain Class as part of my personal studies habit. I was able to finish the book in March.

The Captain Class turned out to be great food for thought on leadership. Although I do not agree one hundred percent with everything the book says, I found even the parts that I disagreed with, or that did not seem totally accurate, to be very intriguing and to lead to some deep thoughts about the topic. I saw a strong correlation between the Captain ideal the book describes (mostly derived from the analysis of sports teams) and the professional role of the Scrum Master.

I would like to write a full review of the book in the near future, though I am unsure how soon I will have time to work on this.


Downpoints

No progress on hobby studies

March was yet another month in which I wasn't able to put any time into a hobby book for my personal studies habit. Since I finished the last one, in December, I have not yet even chosen the next. Hopefully around the middle of the year I will be able to organize my time, have fewer extra hours of work, and then pick this back up, but for now it remains an unfortunately paused part of my routine.


Plans for next month

Innoweeks

April will have the start of the Innoweeks event. This is a hackathon-style event that my company holds every year, in which employees get together with customers to come up with an innovative solution to a real problem in only a few weeks.

I participated in the 2019 and 2024 editions, and I will participate again this year - now as a dev lead, which will be a new experience for me. I am very excited for this!


Finish third project on First Steps Into AI Engineering series

I have been working for a while now on the third project of my First Steps Into AI Engineering series, and during March I was able to make good progress, but did not reach a state in which I felt comfortable releasing an initial version. This should be feasible in April.


Start last project of First Steps Into AI Engineering series

After releasing the third project of the previously mentioned series, I intend to immediately start working on the fourth and last project. It should be a simple one, but it is one that I have not yet sketched out (the other 3 already had a functional sketch I could use as a basis while developing). I will share more details once I have released it, but I don't expect to spend more than 3-4 weeks on the development of this one.

