
Addison Emig

4 posts by Addison Emig

Guiding Your Agents

AGENTS.md files are a great way to persist long-term “memory” for your coding agents. However, it’s easy for them to become bloated and/or outdated. A recent study suggests that a bloated AGENTS.md can actually be worse than having no AGENTS.md at all.

There is no need to list your tech stack, file structure, or just recipes in your AGENTS.md. This is redundant and can harm your agents’ effectiveness when the information becomes outdated. Modern coding agents can quickly figure out the essential details of your codebase for themselves. If you put these details in your AGENTS.md file, you are wasting context space and putting yourself at high risk of documentation rot - when you update your tech stack or file structure, will you remember to also update AGENTS.md?

In our experience, the best usage of AGENTS.md is for recording guidelines that help your coding agents stay on the right path. Think of it as building some guardrails for your agents. When your agents make mistakes, be sure to update the guidelines to help point them in the right direction for future work.

We’ve found that the following three-tier system works well for organizing agent guidelines:

  • Always Do (no asking)
    • A list of things the agent is allowed to do (and must do) without asking.
  • Ask First (pause for approval)
    • A list of things the agent should pause and ask for permission for.
  • Never Do (hard stop)
    • A list of things the agent should never do.

Sometimes you need to record facts that don’t fit well into the three-tier guidelines system. For these, it works well to add a Long Term Memory section to your AGENTS.md. Be careful to prune this list frequently. Often your agent will want to add items here that provide no value for future situations.
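As a concrete illustration, here is a minimal sketch of an AGENTS.md organized with the three-tier system plus a Long Term Memory section. The specific rules are hypothetical examples, not taken from any real project:

```markdown
# Agent Guidelines

## Always Do (no asking)
- Run the formatter and linter before committing.
- Write or update tests for any code you change.

## Ask First (pause for approval)
- Adding a new third-party dependency.
- Changing the database schema.

## Never Do (hard stop)
- Force-push to the main branch.
- Commit secrets, tokens, or credentials.

## Long Term Memory
<!-- Prune this list frequently. -->
- The staging API rejects requests without an explicit Accept header.
```

The point is that every line earns its context space: each one is a guardrail or a hard-won fact, not documentation the agent could discover on its own.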

It’s easy for your AGENTS.md to become polluted with information related to specific workflows or tasks (for example, running automated tests or accessing GitLab issues). Agent Skills are a much better fit for this. Whenever possible, move non-essential information from AGENTS.md to your skill files.
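For example, a workflow like running the automated test suite can live in its own skill file rather than in AGENTS.md. Here is a sketch, assuming the common convention of a SKILL.md with short YAML frontmatter followed by instructions (the contents are hypothetical):

```markdown
---
name: run-tests
description: Run the automated test suite and interpret failures
---

# Running the tests

1. Run `just test` from the repository root.
2. If a test fails, read the failing assertion before changing any code.
3. Never skip or delete a test just to make the suite pass.
```

Because skills are loaded only when the agent needs them, this workflow detail no longer costs context space in every session.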

In general, be very strict about what is in your AGENTS.md file. It takes up valuable context space and biases every coding agent session. Be sure that your guidelines provide positive benefits with useful guardrails rather than redundant or outdated documentation.

Just Is Just Great

We work on software projects across a variety of programming languages and build systems. Some of them provide an easy way to run development commands (for example, npm run), and others do not.

You need some way to answer the questions “How do I set this up?” and “How do I run this?”. You need some form of documentation of common commands.

It’s easy to write this documentation once and then forget to update it as your project grows.

We needed a system that we could use on any project, no matter the language or build system. Something that would keep the development commands and related documentation in sync.

just is just great because it solves several problems at once while keeping things simple. You only need one simple file and one simple command. just is a command runner that reads all your common project commands (called recipes) from a single file. You can also add explanation comments above each recipe and just will automatically include them in the help output.

For any of our active projects, a new developer can run just to get a list of the available recipes. Our core set of recipes includes:

  • default - to list recipes
  • deps - to install dependencies
  • setup - to run all required commands to set up a local development environment

just automatically runs the first recipe in your file if you run just without any arguments. We take advantage of this by always putting the default recipe at the top and having it list the available recipes with just --list.

Many of our projects also have a test recipe for running the automated test suite. You’ll also often see at least one of dev, up, run-mobile, or run-docs for running the given project in the local development environment. We use lint and format recipes to trigger the correct linter and formatter for the project’s programming language.

Here is a minimal example that demonstrates all of our core recipes, along with a dev recipe. This is actually the current justfile from our docs-template repository.

```just
# List available recipes
default:
    @just --list

# Install dependencies
deps:
    npm i

# Set up development environment
setup: deps
    pre-commit install

# Run in development environment
dev:
    npm run dev
```

The comments above each recipe are automatically included in the output of just --list:

```
Available recipes:
    default # List available recipes
    deps    # Install dependencies
    dev     # Run in development environment
    setup   # Set up development environment
```

Why we’re adding a justfile to each of our active projects


Having a justfile in every active project means that new developers can get started quickly in a project they’ve never worked in before. The “muscle memory” of just to list available recipes and just setup to set up the local environment makes things very convenient.

We can onboard developers to projects quicker, and we can spend more time focused on writing code rather than internal developer documentation.

2025 in Review

In 2024 and early 2025, we faced the departure of four well-regarded colleagues. It was a lot to absorb, and we started out 2025 with a good amount of uncertainty. There were fewer engineers per project and less review bandwidth to go around. We also lost a lot of accumulated expertise. How would we establish ourselves moving forward?

We did not want to compromise on the quality or consistency of our output, so we focused on leverage. For us, that meant using standardization and automated tooling to amplify our work. We also started an exciting open source initiative, which acted as a big morale boost.

a smooth and consistent path is easier to walk than a rough and chaotic one

Standardization was a big focus of 2025. Each project did project management differently, and there was no clear standard for new projects. We also had a generally low bar for documentation across most of our projects.

In 2025, we made great progress in standardizing our project management practices. Over the past few years, we had developed a set of best practices through trial and error across different projects, but they had only been shared through ad-hoc communication between engineers. It was time to document those best practices and apply them consistently, so every active project could benefit from a more efficient workflow. This improved our workflow on existing projects and made it much easier to get started with new ones. Before, there was uncertainty about how to configure each setting for a new project, leading to decision fatigue and wasted time. Now we have a pre-set package of settings we have discussed, tested, and know work well for us, allowing us to configure a new repo within minutes.

Of course, every project has its own unique constraints, requirements, and team. We don’t have a rigid system that we treat as law. Instead, we have developed a set of general project management best practices including things like: how to label issues, how to merge changes, how to handle branching, and how to handle deployments.

To make getting a new repository off the ground very easy, we created internal checklists for both GitLab and GitHub. These cover things like: branch protection rules, repository settings that we change from the defaults, and setting up integrations like the code review bot. We also created a couple of helpful repository templates for our GitHub projects: one for docs repositories (docs-template) and one for any new open source repository (basic-template).

We open sourced the issue-bot, our internal tool for helping us manage issues by reminding us of things like missing labels or incorrectly formatted issue titles. Currently, it is only available for GitLab, but in the future we would like to add support for GitHub. We are also interested in adding support for GitLab Statuses (replacing our current practice of using scoped status labels) and finding interesting use cases for GitLab Custom Fields.

Going forward, I’m excited to experiment with using the Specture System for some of our projects. Specture is based on some basic strategies I’ve found to work well with AI agents after some experimentation in the past few months. The basic idea is a spec-driven approach, where designs are documented in the git repo in simple markdown files, rather than scattered across GitLab or GitHub issue descriptions and comments. It seems like it will work best for projects with very small teams and a lot of AI agent usage, which is a good match for many of our projects.

Another area of focus for us in standardization was in raising the bar with our documentation practices. We embraced the Astro framework early on. We found it to be a great way to write documentation in Markdown and quickly deploy a static site.

Our current strategy is to have a docs directory in each new software project’s repository for storing the Astro docs for that project. We’ve found that it works really well to have the documentation for a project in the same repository as the code. The goal is that every time a pull request includes an important update or new feature, that same pull request would include the corresponding documentation update.

We also created GitHub repositories dedicated to documentation for several of our hardware products (for example, neuraplex.dev and mconn.mrs-electronics.dev). Our docs-template repo serves as the starting point for these repositories.

We deployed mrs-electronics.dev as the home for our public-facing developer content. We use subdomains for the docs of our different projects, for example: qt.mrs-electronics.dev and mconn.mrs-electronics.dev.

In 2025, we made a lot of good progress in standardizing and establishing our documentation practices within the software development team. However, we still have a lot of work to do. Establishing documentation is one thing, but keeping it up to date is another. We also hope to share some of the things we’ve learned about writing and deploying good documentation with others at MRS outside our team.

shortens feedback loops, so developers move faster with confidence

From the start, we saw the need to scale our team’s capabilities. Automated tooling is a powerful source of leverage: we can automate the tedious and time-consuming tasks so we can focus on creative and high-impact work.

The following three sections cover three different feedback loops commonly encountered in software development: integration, local development, and implementation. We’ve found interesting ways to apply automated tooling in all three loops to shorten the cycle for each.

It’s important to shorten the feedback loop of the integration cycle. You can have developers producing all kinds of great code, but if you don’t have a good system for reviewing and merging new code quickly and efficiently, things will soon get backed up.

Early in the year, we did some team training on Docker and containerization. This discussion led to much more widespread usage of Docker containers and CI/CD pipelines across our projects. Our CI/CD pipelines protect us from all kinds of mistakes. We run linters, formatting checks, and automated test suites on every commit to most of our active projects.
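A pipeline of the kind described above can be sketched in a few lines of GitLab CI configuration. This is an illustrative assumption of what such a file might look like for a Node project, not our actual configuration; the job names and image are hypothetical:

```yaml
# .gitlab-ci.yml - runs linting, format checks, and tests on every commit.
stages:
  - check
  - test

lint:
  stage: check
  image: node:22
  script:
    - npm ci
    - npx eslint .

format:
  stage: check
  image: node:22
  script:
    - npm ci
    - npx prettier --check .

test:
  stage: test
  image: node:22
  script:
    - npm ci
    - npm test
```

Each job fails the pipeline independently, so a formatting slip or a broken test is caught before a human reviewer ever sees the change.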

Another place CI/CD can have a big impact is in deployment processes. Our web projects and docs sites have automated pipelines for every commit to main, and we have pipelines that automate the time-consuming process of building APK and AAB files for each new release in our Android projects.

We also started work on our Code Review Bot. This runs on each new pull request for most of our projects. It allows us to shorten the code review feedback loop - a human review might not be available for several hours or days, but the code review bot can give basic feedback within a few minutes. It’s not perfect, and we have a lot of ideas for how to improve it in 2026, but it caught many silly mistakes for us in 2025.

High-quality tooling for local development is essential for rapid iteration. We don’t want to rely on manual human checks for all our work. It is much quicker to have automated tooling that can check our work before each commit and push.

One essential piece that we’ve begun introducing to all our projects is just. It allows us to have a self-documenting place for configuring all the common commands for a project. This is very useful for enabling new developers to get started quickly with a project - they list the just recipes and find what they need.

Most of our projects have a lint recipe and format recipe in their justfile. These basic tools are essential for developing consistent code as a team. There is no reason to argue about code formatting - just use what your formatter produces.

It is also very convenient if developers don’t have to remember to run the linter and formatter themselves. We have found the pre-commit framework invaluable for configuring Git hooks. It can run the linter and formatter on staged files for every new commit, and also check for things like trailing whitespace and unwanted large files.
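A setup like the one described above can be sketched in a short .pre-commit-config.yaml. The hook versions are placeholders, and the local lint hook assumes a just lint recipe exists; treat this as an illustrative sketch rather than our exact configuration:

```yaml
# .pre-commit-config.yaml - run checks automatically on every commit.
repos:
  # Generic hygiene checks from the pre-commit project itself.
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0  # placeholder; pin via `pre-commit autoupdate`
    hooks:
      - id: trailing-whitespace
      - id: check-added-large-files

  # Project-specific linting, delegated to the justfile.
  - repo: local
    hooks:
      - id: lint
        name: lint
        entry: just lint
        language: system
        pass_filenames: false
```

After a one-time `pre-commit install` (which our setup recipe runs), the hooks fire on every `git commit`, so nobody has to remember to run the linter by hand.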

As a learning experiment, we started using Go for a few projects (time-tracker and mrs-sdk-manager being two notable examples). We found the superb built-in tooling to be a breath of fresh air compared to what we are used to in older languages like C++ or Python. Go has a built-in formatter, testing framework, and package manager, which makes it very easy for an inexperienced developer to get started with new projects without getting bogged down in a complicated ecosystem.

And now we get to the inner loop of software development. How do you take an idea from your brain to code? The past year or two has seen the rise of a brand-new way to convert ideas to code much faster - LLM-powered coding agents.

We’ve found coding agents to be helpful in many ways. They allow us to quickly prototype new ideas and explore new possibilities. Tedious refactors and writing boilerplate or glue code takes much less time. They also can be a great help in debugging tricky errors.

A great side benefit of embracing coding agents is that they thrive on many of the same things as human developers - high-quality documentation, standardized development tooling, and good test coverage. When we invest in these things to help reduce the number of mistakes our agents make, we also make life better for ourselves. There is no excuse to have poor test coverage when a coding agent can quickly write you a bunch of test cases.

We tried out several different coding agents, adapting as new and better tools hit the market. Our first experiments were with Aider. It was a great introduction to having an agent with direct access to your local filesystem, but it was a bit tedious to have to manually introduce each new file to the agent. OpenCode was our next tool of choice. It is a great open source TUI for coding with LLMs. Tools like grep and bash commands really streamline the experience compared to Aider. Amp is our current favorite. It likes to go through tokens quickly, but its ad-supported free mode allows a generous $10 of access per day. The main drawback is that it is proprietary and relies on its cloud servers, but it provides nice extras like shareable threads and workspaces. The main reason we like it is that it just seems to work. Amp seems to take the least amount of trial and error to get decent results.

OpenRouter was invaluable throughout the year. It provides an easy and effective way to access any model we want, based on the needs at hand. We used it for Aider, OpenCode, and our Code Review Bot. I like to think of the overall integration of LLMs through OpenRouter and coding agents like Aider or OpenCode as similar to a brain and a body. We can switch out the brain (the model requested from OpenRouter) based on what we need for the current task - a more expensive model like Claude Sonnet for a more challenging problem, and a cheaper model like Gemini Flash for simpler tasks. We can also switch out the body (the coding agent/LLM interface) as required - maybe OpenCode for implementing a new feature, and our Code Review Bot for reviewing the code.

It has been interesting to see how our coding habits adjust as we acclimate to using coding agents regularly, and I’m sure we will continue to see lots of big improvements in the space in 2026, which will require further adjustments. One thing we have found very useful is to have a good AGENTS.md file in each repository. This is a good place to store LLM “memories”. After the coding agent makes a mistake, have it record the correct method of doing things in AGENTS.md. Most tools, including OpenCode and Amp, will automatically load AGENTS.md, which helps steer your agents toward the correct way to operate on your projects.

a shared place for fixes and features

In 2025, we made our first steps into publishing open source software. Our open source projects provide us with a variety of benefits, including being a better way to distribute common shared code, and a good creative outlet for our developers.

Our biggest and most ambitious open source project so far is the MRS Qt SDK. We envision this to be the first of many SDKs, each focused on a different language and/or framework. The Qt SDK is targeted at the immediate need of our developers and our customers to have a solid foundation for new Qt applications for our embedded Linux hardware.

Our previous solution for shared code across different Qt projects was copy-and-paste between various repositories. This was far from ideal. Bug fixes and features would get introduced in one repository and never make it to other repositories. A centralized SDK should give us a single source of truth for Qt code optimized for working with our hardware. We can have a central place for applying bug fixes and new features, and then developers both internal and external can pull those improvements into their applications. It is a great way to reduce duplicate code across projects, and our improvements can have a larger impact as they multiply across projects.

We also started work on several helpful open source tools for improving our efficiency in our day-to-day work.

time-tracker is a simple app written in Go that provides both a CLI and TUI for quickly recording time entries. This is very useful for tracking the time we spend across all our different projects. We hope to introduce a web interface soon, which should make the app accessible from even more places.

bots is a collection of CI/CD tooling that we use across many of our projects. It currently consists of an Issue Management Bot and a Code Review Bot, and we plan to add more bots in the future to automate other parts of our software development process. Like the Qt SDK, the bots codebase is based on code that we had developed and copied-and-pasted across several projects. Having it put together in a central place with automatically built Docker images makes it much easier to maintain and distribute across projects. The bots are a great way to reduce time spent on tedious or time-consuming project management tasks, allowing our team to focus on high-impact work.

2025 was a year of big adjustments. We bore the grief of departing team members and faced uncertain prospects. We had to find creative ways to leverage the time and effort of our team to make an outsized impact. Standardization, automation, and shared open source codebases all helped to improve the effectiveness of our team, reduce inconsistencies between our projects, and shorten feedback loops. It was an exciting year of growth, and we look forward to finding more ways to continually improve our work in 2026!

Lessons from 2024

2024 was quite the year for learning new things! It wasn’t very comfortable, but I definitely learned a lot.

but can produce great value when done well

The learning of this lesson took up the first eight months of the year for me. I was given most of the responsibility for merging the Spoke.Zone and Lenticul dev teams into one combined Spoke.Zone dev team. It was something I dreaded quite a bit at first, but with time I began to see the benefits. I’m really glad I went through this process. I think it was a good opportunity to grow as a leader in a way that really stretched me.

The Lenticul and Spoke.Zone teams had very different processes in place before the merge, especially in regard to the development life cycle and release management.

A few examples:

  • The Lenticul team used Jira for issue management, while the Spoke.Zone team used GitLab.
  • The Lenticul team had a dedicated QA tester, while the Spoke.Zone team had very underdeveloped testing practices.
  • The Lenticul team had a large number of staging environments, while the Spoke.Zone team only had one.

In situations like these, it is easy to think that your status quo is the best, and that the incoming team members should adjust to your processes. However, this is not the best strategy. It is much better (though it takes much more work) to carefully consider all parts of your processes and develop new team processes that build on the strengths of both teams’ existing processes.

Solid communication is vital to any team. For a while, the Lenticul and Spoke.Zone teams worked together but apart. We hadn’t defined the communication channels very well, so we continued to operate as two independent teams. This created a lot of unnecessary delay and misunderstandings. Having shared communication channels is vital for integrating two teams into one team. Without good communication, a team will struggle to make forward progress.

My goal for the Spoke.Zone team has always been to develop an autonomous team, one that doesn’t require a lot of direct supervision. This requires a lot of trust among the team members. Building this trust takes quite a while. It is easy to get frustrated with colleagues that don’t seem to be working towards the team’s goals. In situations like these, it is important to do a self-evaluation. Do you have unreasonable expectations? Have the goals been communicated well? Have you done your best to communicate one-on-one with your colleague to resolve the misunderstandings?

A merger of two teams takes a lot of effort, and you don’t usually see the benefits right away.

It took me some time to recognize these benefits, but once I did, it made all the work we put into the merger worth it.

This one is pretty simple. The more team members you have, the more combined experience you will have on the team. This means it is more likely that one of the people on the team will have some experience with a new technology you are introducing, allowing them to take the lead in implementing the improvements.

None of us are a complete and perfect package. We all have gaps in our skill set and knowledge base. The more team members you have, the easier it becomes to fill in each other’s gaps. This does require good communication to discover and resolve those gaps.

We all come from different backgrounds and have experiences with different things. It is possible for different perspectives to create conflicts. However, on a healthy team these diverse perspectives can be used to refine ideas, allowing you to develop much better designs.

We make our living by meeting our customers’ needs


not by producing engineering perfection

I am a serial perfectionist. It is quite easy for me to fall down the dangerous rabbit hole of perfecting and refining a feature that nobody ends up using. In 2024, I got the blessing of several deadlines that prevented me from falling down this rabbit hole. I also got the chance to interact more directly with some of our customers. This gave me the opportunity to hear things from their perspective, to learn about the problems they need solved. One of my goals going forward is to be much more intentional about seeking out the customers’ needs and prioritizing accordingly.

A perfect solution for the wrong problem is useless to the customer.

We should never take our colleagues for granted


we might not work together next week

This was the most painful lesson to learn. Within the space of about four months, we lost three engineers from our software team. In the middle of that same period, the Spoke.Zone team lost two engineers employed by our contractor, Supportronics. It really made me realize that job satisfaction often has less to do with what you are working on and more to do with who you are working with.

It’s very easy to take your colleagues for granted, to get this mental image that they will just always be there. Of course, we know this isn’t true, but I realized that I often went about with this kind of mindset.

It is important to take time to appreciate your colleagues. Let them know that you value their work and the impact they have had on your life and career. They won’t be around forever.

Growth doesn’t really come without some pain, but I do hope 2025 is a year of a bit less pain. Losing close colleagues can be a pretty painful thing, but it is also a chance to learn and re-evaluate. One thing is for certain: we can’t read the future! Hopefully our software team can grow stronger in 2025 with the lessons we learned in 2024!