2025 in Review

In 2024 and early 2025, we faced the departure of four well-regarded colleagues. It was a lot to absorb, and we started out 2025 with a good amount of uncertainty. There were fewer engineers per project and less review bandwidth to go around. We also lost a lot of accumulated expertise. How would we establish ourselves moving forward?

We did not want to compromise on the quality or consistency of our output, so we focused on leverage. For us, that meant using standardization and automated tooling to amplify our work. We also started an exciting open source initiative, which acted as a big morale boost.

a smooth and consistent path is easier to walk than a rough and chaotic one

Standardization was a big focus of 2025. Each project did project management differently, and there was no clear guidance on what to use for new projects. We also had a generally low bar for documentation across most of our projects.

In 2025, we made great progress in standardizing our project management practices. We had developed a set of best practices through trial and error across different projects over the past few years, but they had only been shared through ad-hoc communication between engineers. It was time to document those best practices and apply them consistently, so every active project could benefit from a more efficient workflow. This improved our workflow on existing projects and made it much easier to get started with new ones. Before, there was uncertainty about how to configure each setting for a new project, leading to decision fatigue and wasted time. Now we have a pre-set package of settings we have discussed, tested, and know work well for us, allowing us to configure a new repo within minutes.

Of course, every project has its own unique constraints, requirements, and team. We don’t have a rigid system that we treat as law. Instead, we have developed a set of general project management best practices including things like: how to label issues, how to merge changes, how to handle branching, and how to handle deployments.

To make getting a new repository off the ground easy, we created internal checklists for both GitLab and GitHub. These cover things like: branch protection rules, repository settings that we change from the defaults, and setting up integrations like the code review bot. We also created a couple of helpful repository templates for our GitHub projects: docs-template for docs repositories, and basic-template for any new open source repository.

We open sourced the issue-bot, our internal tool for helping us manage issues by reminding us of things like missing labels or incorrectly formatted issue titles. Currently it is only available for GitLab, but we would like to add GitHub support in the future. We are also interested in adding support for GitLab Statuses (replacing our current practice of using scoped status labels) and finding interesting use cases for GitLab Custom Fields.

Going forward, I’m excited to experiment with using the Specture System for some of our projects. Specture is based on some basic strategies I’ve found to work well with AI agents after some experimentation in the past few months. The basic idea is a spec-driven approach, where designs are documented in the git repo in simple markdown files, rather than scattered across GitLab or GitHub issue descriptions and comments. It seems like it will work best for projects with very small teams and a lot of AI agent usage, which is a good match for many of our projects.

Another area of focus for us in standardization was in raising the bar with our documentation practices. We embraced the Astro framework early on. We found it to be a great way to write documentation in Markdown and quickly deploy a static site.

Our current strategy is to have a docs directory in each new software project’s repository for storing the Astro docs for that project. We’ve found that it works really well to have the documentation for a project in the same repository as the code. The goal is that every time a pull request includes an important update or new feature, that same pull request would include the corresponding documentation update.
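As a rough sketch of what that layout looks like (the directory names here are illustrative; the point is simply that the Astro docs live next to the code):

```
my-project/
├── src/                    # application code
├── docs/                   # Astro docs site for this project
│   ├── astro.config.mjs
│   └── src/content/docs/   # Markdown pages live here
└── README.md
```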

We also created GitHub repositories dedicated to documentation for several of our hardware products (for example, neuraplex.dev and mconn.mrs-electronics.dev). Our docs-template repo serves as the starting point for these repositories.

We deployed mrs-electronics.dev as the home for our public-facing developer content. We use subdomains for the docs of our different projects, for example: qt.mrs-electronics.dev and mconn.mrs-electronics.dev.

In 2025, we made a lot of good progress in standardizing and establishing our documentation practices within the software development team. However, we still have a lot of work to do. Establishing documentation is one thing, but keeping it up to date is another. We also hope to share some of the things we’ve learned about writing and deploying good documentation with others at MRS outside our team.

shortens feedback loops, so developers move faster with confidence

From the start, we saw the need to scale our team’s capabilities. Automated tooling is a powerful source of leverage: we can automate the tedious and time-consuming tasks so we can focus on creative and high-impact work.

The following three sections cover three different feedback loops commonly encountered in software development: integration, local development, and implementation. We’ve found interesting ways to apply automated tooling in all three loops to shorten the cycle for each.

It’s important to shorten the feedback loop for the integration cycle. You can have developers producing all kinds of great code, but without a good system for reviewing and merging it quickly and efficiently, things will get backed up fast.

Early in the year, we did some team training on Docker and containerization. This discussion led to much more widespread usage of Docker containers and CI/CD pipelines across our projects. Our CI/CD pipelines protect us from all kinds of mistakes. We run linters, formatting checks, and automated test suites on every commit to most of our active projects.
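As a minimal sketch of what one of these pipelines looks like (job names and commands are placeholders; each project substitutes its own linter, formatter, and test runner):

```yaml
# .gitlab-ci.yml -- minimal sketch; commands are illustrative placeholders
stages:
  - check
  - test

lint:
  stage: check
  script:
    - ./scripts/lint.sh          # run the project's linter

format-check:
  stage: check
  script:
    - ./scripts/format-check.sh  # fail if any files are not formatted

tests:
  stage: test
  script:
    - ./scripts/test.sh          # run the automated test suite
```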

Another place CI/CD can have a big impact is in deployment processes. Our web projects and docs sites have automated pipelines for every commit to main, and we have pipelines that automate the time-consuming process of building APK and AAB files for each new release in our Android projects.

We also started work on our Code Review Bot. This runs on each new pull request for most of our projects. It allows us to shorten the code review feedback loop - a human review might not be available for several hours or days, but the code review bot can give basic feedback within a few minutes. It’s not perfect, and we have a lot of ideas for how to improve it in 2026, but it caught many silly mistakes for us in 2025.

High quality tooling for local development is essential for rapid iteration. We don’t want to have to rely on manual human checks for all our work. It is much quicker to have automated tooling that can check our work before each commit and push.

One essential piece that we’ve begun introducing to all our projects is just. It allows us to have a self-documenting place for configuring all the common commands for a project. This is very useful for enabling new developers to get started quickly with a project - they list the just recipes and find what they need.

Most of our projects have a lint recipe and format recipe in their justfile. These basic tools are essential for developing consistent code as a team. There is no reason to argue about code formatting - just use what your formatter produces.
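A minimal sketch of what this looks like for one of our Go projects (the specific tools are just examples; other projects substitute their own):

```
# justfile -- run `just --list` to see every recipe and its description

# Lint the codebase
lint:
    golangci-lint run ./...

# Format all source files in place
format:
    gofmt -w .

# Run the test suite
test:
    go test ./...
```

Because just prints the comment above each recipe when you run `just --list`, the file doubles as living documentation of the project’s common commands.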

It is also very convenient if developers don’t have to remember to run the linter and formatter themselves. We have found the pre-commit framework invaluable for configuring Git hooks. It can run the linter and formatter on staged files for every new commit, and also check for things like trailing whitespace and unwanted large files.
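A sketch of the kind of .pre-commit-config.yaml we end up with (the pinned revision and the local format hook are illustrative):

```yaml
# .pre-commit-config.yaml -- sketch; pin rev to the release you actually use
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-added-large-files
  - repo: local
    hooks:
      - id: format
        name: format staged files
        entry: just format      # hypothetical recipe from the justfile above
        language: system
        pass_filenames: false
```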

As a learning experiment, we started using Go for a few projects (time-tracker and mrs-sdk-manager being two notable examples). We found the superb built-in tooling to be a breath of fresh air compared to what we are used to in older languages like C++ or Python. Go has a built-in formatter, testing framework, and package manager, which makes it very easy for an inexperienced developer to get started with new projects without getting bogged down in a complicated ecosystem.
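As a toy illustration of how little scaffolding Go needs, the table-driven test below runs with nothing but the built-in toolchain (`gofmt` for formatting, `go test` for testing, `go mod` for dependencies); the function and names are made up for the example:

```go
// mathutil.go
package mathutil

// Add returns the sum of two integers.
func Add(a, b int) int { return a + b }
```

```go
// mathutil_test.go -- run with `go test ./...`, no third-party framework needed
package mathutil

import "testing"

func TestAdd(t *testing.T) {
	cases := []struct {
		name string
		a, b int
		want int
	}{
		{"positives", 2, 3, 5},
		{"negatives", -2, -3, -5},
	}
	for _, c := range cases {
		if got := Add(c.a, c.b); got != c.want {
			t.Errorf("%s: Add(%d, %d) = %d, want %d", c.name, c.a, c.b, got, c.want)
		}
	}
}
```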

And now we get to the inner loop of software development. How do you take an idea from your brain to code? The past year or two has seen the rise of a brand-new way to convert ideas to code much faster - LLM-powered coding agents.

We’ve found coding agents to be helpful in many ways. They allow us to quickly prototype new ideas and explore new possibilities. Tedious refactors and writing boilerplate or glue code take much less time. They can also be a great help in debugging tricky errors.

A great side benefit of embracing coding agents is that they thrive off of many of the same things as human developers - high-quality documentation, standardized development tooling, and good test coverage. When we invest in these things to help reduce the number of mistakes our agents make, we also make life better for ourselves. There is no excuse to have poor test coverage when a coding agent can quickly write you a bunch of test cases.

We tried out several different coding agents, adapting as new and better tools hit the market. Our first experiments were with Aider. It was a great introduction to having an agent with direct access to your local filesystem, but it was a bit tedious to have to manually introduce each new file to the agent. OpenCode was our next tool of choice. It is a great open source TUI for coding with LLMs. Built-in tools like grep and bash commands really streamline the experience compared to Aider. Amp is our current favorite. It likes to go through tokens quickly, but their ad-supported free mode allows a generous $10 of access per day. The main drawback is that it is proprietary and relies on their cloud servers, but it provides nice extras like shareable threads and workspaces. The main reason we like it is that it just seems to work: Amp takes the least amount of trial and error to get decent results.

OpenRouter was invaluable throughout the year. It provides an easy and effective way to access any model we want, based on the needs at hand. We used it for Aider, OpenCode, and our Code Review Bot. I like to think of the overall integration of LLMs through OpenRouter and coding agents like Aider or OpenCode as similar to a brain and a body. We can switch out the brain (the model requested from OpenRouter) based on what we need for the current task - a more expensive model like Claude Sonnet for a more challenging problem, and a cheaper model like Gemini Flash for simpler tasks. We can also switch out the body (the coding agent/LLM interface) as required - maybe OpenCode for implementing a new feature, and our Code Review Bot for reviewing the code.
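In practice, swapping the “brain” is often just a flag. A rough sketch with Aider (the model identifiers follow OpenRouter’s provider/model naming and change over time, so treat these as examples):

```sh
# Point the tool at OpenRouter once
export OPENROUTER_API_KEY=sk-or-...

# A stronger, pricier model for a tricky problem
aider --model openrouter/anthropic/claude-sonnet-4

# A cheaper, faster model for routine edits
aider --model openrouter/google/gemini-2.5-flash
```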

It has been interesting to see how our coding habits adjust as we acclimate to using coding agents regularly, and I’m sure we will continue to see lots of big improvements in the space in 2026, which will require further adjustments. One thing we have found very useful is to have a good AGENTS.md file in each repository. This is a good place to store LLM “memories”: after the coding agent makes a mistake, have it record the correct way of doing things in AGENTS.md. Most tools, including OpenCode and Amp, will automatically load AGENTS.md, which helps tune your agents to operate correctly on your projects.
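There is no fixed schema for AGENTS.md; ours tend to read like a short brief for a new teammate. A sketch of the kind of entries we collect (the specifics below are made up for illustration):

```markdown
# AGENTS.md

## Project conventions
- Run `just lint` and `just test` before considering a task done.
- New features need documentation updates under `docs/` in the same pull request.

## Lessons learned (recorded after agent mistakes)
- Never edit generated files under `gen/`; change the source templates instead.
- Database migrations in `migrations/` must not be rewritten once merged.
```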

a shared place for fixes and features

In 2025, we made our first steps into publishing open source software. Our open source projects provide us with a variety of benefits, including a better way to distribute common shared code and a good creative outlet for our developers.

Our biggest and most ambitious open source project so far is the MRS Qt SDK. We envision this to be the first of many SDKs, each focused on a different language and/or framework. The Qt SDK is targeted at the immediate need of our developers and our customers to have a solid foundation for new Qt applications for our embedded Linux hardware.

Our previous solution for shared code across different Qt projects was copy-and-paste between various repositories. This was far from ideal. Bug fixes and features would get introduced in one repository and never make it to other repositories. A centralized SDK should give us a single source of truth for Qt code optimized for working with our hardware. We can have a central place for applying bug fixes and new features, and then developers both internal and external can pull those improvements into their applications. It is a great way to reduce duplicate code across projects, and our improvements can have a larger impact as they multiply across projects.

We also started work on several helpful open source tools for improving our efficiency in our day-to-day work.

time-tracker is a simple app written in Go that provides both a CLI and TUI for quickly recording time entries. This is very useful for tracking the time we spend across all our different projects. We hope to introduce a web interface soon, which should make the app accessible from even more places.

bots is a collection of CI/CD tooling that we use across many of our projects. It currently consists of an Issue Management Bot and a Code Review Bot, and we plan to add more bots in the future to automate other parts of our software development process. Like the Qt SDK, the bots codebase is based on code that we had developed and copied-and-pasted across several projects. Having it put together in a central place with automatically built Docker images makes it much easier to maintain and distribute across projects. The bots are a great way to reduce time spent on tedious or time-consuming project management tasks, allowing our team to focus on high-impact work.

2025 was a year of big adjustments. We bore the grief of departing team members and faced uncertain prospects. We had to find creative ways to leverage the time and effort of our team to make an outsized impact. Standardization, automation, and shared open source codebases all helped to improve the effectiveness of our team, reduce inconsistencies between our projects, and shorten feedback loops. It was an exciting year of growth, and we look forward to finding more ways to continually improve our work in 2026!

Should I Really Use a Gitlab Wiki?

Recently, as we’ve been taking on new software projects and continuing to develop existing ones, the role of documentation has come into focus.

In fact, the development of this very site is a direct result of an increased focus by our whole software team to better document the things we learn and already know.

But…WHERE should all this information go? In the past, we always defaulted to Gitlab wikis…but is that the best option? Would a dedicated documentation repository for each project make more sense?

I will note that if your project uses a monorepo, “dedicated repo” here means “dedicated folder inside the repo”; something like apps/docs will work just fine and function the same as a separate repo in a multi-repo project structure.

The Gitlab wiki feature is really a very powerful one. Every group and repository is automatically given a wiki (unless it’s disabled in the project settings), so there’s no additional setup required.

Wikis are designed to be very quick and easy to contribute to. Internally, they are based on a Git repo, but you don’t have to clone the repo or worry about local copies or anything; Gitlab allows you to make and commit all your edits right from the UI and then does the Git operations behind the scenes.

Wikis support easy file uploads for images, PDFs, and whatever else in much the same way that other features do (like uploading files to an issue or MR description). You can attach the file and Gitlab automatically stores it for you with no extra configuration needed.

Gitlab wikis also come with some negatives.

First, these automatic wikis are nice, but they can quickly result in splintered, spread out documentation. It’s very common for one “project” to consist of a group of multiple projects. If each one has its own wiki, including the top-level group, then which one should the project’s documentation go in?

This quickly becomes a problem when you have multiple developers adding things to different wikis. Not only is it likely that some of the more general information will be duplicated, which means unnecessary work, but it also becomes hard to know which wiki is the central source of truth. If two wikis have conflicting information, which version of the information should you choose as being true? You’ll have to ask someone else to confirm, which only serves to take more time for both of you.

Having multiple wikis requires you to remember which one has which pieces of information, which can quickly become a headache when trying to point other team members to something.

The only real way to avoid this problem is to establish from the start which wiki will be the central source of truth and disable the other wikis entirely.

Another disadvantage is a lack of peer review. Gitlab wikis are easy to contribute to…but that can also be a negative, because no outside approval is required to make changes. Anyone on the team can write down whatever they want and the only way for erroneous documentation to be discovered is if another team member stumbles across it and recognizes the error.

Gitlab wikis are also just that—Gitlab wikis. They’re tied to Gitlab. If your project is ever migrated to another code-hosting or issue-tracking site, such as Jira/Bitbucket or Github, then you’ll have to do some work to migrate the wiki with it. It’s fairly easy to clone the wiki’s internal repo and push that to the new site, but you’ll have to go through and update all the Gitlab-specific parts: for example, those lovely file uploads that I mentioned earlier? You’ll have to download each file, change the link that points to it, and figure out a simple way of storing those files in the repo.

There are a lot of advantages to using a dedicated separate repository for documentation. One of the biggest ones is peer review. By creating a separate repository (and setting up the typical workflow, with MRs, protected branches, required approvals, and the like), you force all new documentation to be reviewed by another member of the team.

There are a multitude of reasons why this is a good thing that boil down to the reasons why any peer review on code is a good thing. Reviewers can test any deployment/setup steps that are documented to make sure they work correctly; they can filter information that isn’t really worth documenting; they can suggest spots where more clarity or explanation might be needed; and so on. Code review is core to the functioning of any well-managed project, and documentation must be held to the same rigorous standard.

Another big, closely related advantage is real version control. The Gitlab wiki does use an internal repo, but when editing through the UI, it’s as if every change you make is a push straight to main (which, as any good developer knows, is NOT a good practice). I’ve seen multiple occasions where someone was trying to edit a wiki page at the same time as someone else and their changes conflicted.

With a dedicated repo, this abstraction of easy changes is removed, and that’s a good thing. Team members can check out their own branches to modify content and make sure all their edits are complete and concise. If someone is working on a big new feature, they can make a separate branch of the documentation where they document their new feature as they implement it. While this does require maintaining multiple MRs, we find that to be a small tradeoff for the benefits it brings.

Using a separate repo also simplifies the task of deployment to an external location. For example, if you want to use Starlight (hint: that’s what we used for this site!) to create an actual website for the project documentation, then it’s easy to do so. You can set up the Starlight project in the repository, configure the CI/CD deployments, and so on.
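As an illustration, a GitLab Pages deployment for a Starlight site can be as small as the sketch below (the image and build command are examples; GitHub Pages or another static host works just as well):

```yaml
# .gitlab-ci.yml -- sketch of deploying a Starlight/Astro docs site to GitLab Pages
pages:
  image: node:22
  script:
    - npm ci
    - npm run build        # Astro builds the static site into dist/ by default
    - mv dist public       # GitLab Pages serves the `public` directory
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```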

This becomes very important when you want customers to be able to see some or all of the documentation. Gitlab wikis have the same visibility as the project they are a part of; if the project is private (as the majority of ours are), then the wiki is also private. You can’t link to the wiki when talking to customers unless they are made a member of the project, which is its own can of worms…suffice to say it is a can of worms that SHOULD NOT be opened. So, for customers to get access to useful information, you’ll have to have some way of deploying it outside of the project.

Having a separate repo also gives you much more structural freedom. You can define the hierarchy of files however you want, use whatever file formats/types you want, and so forth. For example, in one of my projects, the documentation repo contains both an external site and an internal “wiki”. The external site is for things that we want the customer to be able to access, and the internal wiki is for more development-centric things: running tests, working with hardware, meeting notes, and the like. Having “docs” for external documentation and a “wiki” for internal documentation is a pattern that has served our team well across multiple projects.

One of the only real “cons” to a dedicated repo is that it takes slightly longer to make changes compared to a Gitlab wiki, but this is only a con if you don’t care about review. We care about making sure documentation is correct and complete, and we consider the slightly slower turnaround time for pushing new information to customers a small price to pay.

It can also be tedious to have to manage MRs in multiple repos; if you implement a new feature in the actual project repo, and then provide documentation for said feature in the docs repo, you now have two MRs to complete one feature.

However, this isn’t a big issue…wikis effectively require the same thing because you are still updating the documentation in a spot that is not the project repo. Plus, if your project uses a monorepo, then the problem is avoided entirely and in fact becomes even more streamlined because the documentation changes and feature changes can be reviewed in a single MR.

We have found using a dedicated separate repository for documentation to be far preferable to using a Gitlab wiki in the projects here at MRS. While this doesn’t mean that it’s the unquestioned best solution for every project, the majority of projects will likely benefit from this structure.

A dedicated repo is much more flexible in terms of deployment and structure, allows for real peer review, and avoids problems with splintered docs in more than one place and over-reliance on Gitlab. If you’re going to be creating a new project anytime soon, we recommend creating your documentation repo right from the start.

Lessons from 2024

2024 was quite the year for learning new things! It wasn’t very comfortable, but I definitely learned a lot.

but can produce great value when done well

Learning this lesson took up the first eight months of the year for me. I was given most of the responsibility for merging the Spoke.Zone and Lenticul dev teams into one combined Spoke.Zone dev team. It was something I dreaded quite a bit at first, but with time I began to see the benefits. I’m really glad I went through this process. I think it was a good opportunity to grow as a leader in a way that really stretched me.

The Lenticul and Spoke.Zone teams had very different processes in place before the merge, especially in regard to the development life cycle and release management.

A few examples:

  • The Lenticul team used Jira for issue management, while the Spoke.Zone team used GitLab.
  • The Lenticul team had a dedicated QA tester, while the Spoke.Zone team had very underdeveloped testing practices.
  • The Lenticul team had a large number of staging environments, while the Spoke.Zone team only had one.

In situations like these, it is easy to think that your status quo is the best, and that the incoming team members should adjust to your processes. However, this is not the best strategy. It is much better (though it takes much more work) to carefully consider all parts of your processes and develop new team processes that build on the strengths of both teams.

Solid communication is vital to any team. For a while, the Lenticul and Spoke.Zone teams worked together but apart. We hadn’t defined the communication channels very well, so we continued to operate as two independent teams. This created a lot of unnecessary delay and misunderstandings. Having shared communication channels is vital for integrating two teams into one team. Without good communication, a team will struggle to make forward progress.

My goal for the Spoke.Zone team has always been to develop an autonomous team, one that doesn’t require a lot of direct supervision. This requires a lot of trust among the team members, and building that trust takes quite a while. It is easy to get frustrated with colleagues who don’t seem to be working towards the team’s goals. In situations like these, it is important to do a self-evaluation. Do you have unreasonable expectations? Have the goals been communicated well? Have you done your best to communicate one-on-one with your colleague to resolve the misunderstandings?

A merger of two teams takes a lot of effort, and you don’t usually see the benefits right away.

It took me some time to recognize some of these benefits, but once I did, it made all the work we put into the merger worth it.

This one is pretty simple. The more team members you have, the more combined experience you will have on the team. This means it is more likely that one of the people on the team will have some experience with a new technology you are introducing, allowing them to take the lead in implementing the improvements.

None of us are a complete and perfect package. We all have gaps in our skill set and knowledge base. The more team members you have, the easier it becomes to fill in each other’s gaps. This does require good communication to discover and resolve those gaps.

We all come from different backgrounds and have experiences with different things. It is possible for different perspectives to create conflicts. However, on a healthy team these diverse perspectives can be used to refine ideas, allowing you to develop much better designs.

We make our living by meeting our customers’ needs

not by producing engineering perfection

I am a serial perfectionist. It is quite easy for me to fall down the dangerous rabbit hole of perfecting and refining a feature that nobody ends up using. In 2024, I got the blessing of several deadlines that prevented me from falling down this rabbit hole. I also got the chance to interact more directly with some of our customers. This gave me the opportunity to hear things from their perspective, to learn about the problems they need solved. One of my goals going forward is to be much more intentional about seeking out the customers’ needs and prioritizing accordingly.

A perfect solution for the wrong problem is useless to the customer.

We should never take our colleagues for granted

we might not work together next week

This was the most painful lesson to learn. Within the space of about four months, we lost three engineers from our software team. In the middle of that time, the Spoke.Zone team also lost two engineers who were employed by our contractor, Supportronics. It really made me realize that job satisfaction often has less to do with what you are working on than with who you are working with.

It’s very easy to take your colleagues for granted, to get this mental image that they will just always be there. Of course, we know this isn’t true, but I realized that I often went about with this kind of mindset.

It is important to take time to appreciate your colleagues. Let them know that you value their work and the impact they have had on your life and career. They won’t be around forever.

Growth doesn’t really come without some pain, but I do hope 2025 is a year of a bit less pain. Losing close colleagues can be a pretty painful thing, but it is also a chance to learn and re-evaluate. One thing is for certain: we can’t read the future! Hopefully our software team can grow stronger in 2025 with the lessons we learned in 2024!