Coding with Copilot on Top of Application Infrastructure
AI coding works best on top of strong Application Infrastructure. With clear structure, strict boundaries, and consistent design rules, Copilot and AI Agents generate cleaner, more predictable code. Architecture guides the AI, not the other way around.
AI support in software development becomes truly effective when it operates inside a well-designed structure. When an AI Agent works on top of a solid Application Infrastructure, it benefits from the same clarity, consistency, and predictability that help developers move faster and make fewer mistakes.
A structured architecture removes ambiguity. It gives both humans and AI a clear model of what good code looks like. In my approach to Code Design, the Application Infrastructure defines strict boundaries and provides the building blocks that guide the implementation. These constraints make it easy to follow the intended Software Design and harder to introduce patterns that would break Clean Architecture principles.
Originally, this setup was meant to support developers. The goal was simple: let them focus on implementing features in the problem domain and avoid accidental complexity. But the same idea now applies to Copilot and Agentic AI as well. A strong architectural foundation doesn’t just help teams—it also helps AI generate better, more consistent code.
Application Infrastructure
A good Application Infrastructure isn’t about business features. It’s about the technical substrate that keeps the system healthy through clear abstractions, predictable behavior, and enforced boundaries.
By hiding external frameworks behind project-specific interfaces, the infrastructure shapes how feature code is written. This creates a high level of Structure and Consistency, which directly supports maintainability. It also establishes conventions that show “how things are done here.”
These conventions translate well into AI instructions. Copilot can follow rules, recognize patterns, and replicate examples with surprising accuracy—especially when the architectural boundaries are explicit.
Two main goals drive the design of the infrastructure:
- Establish a consistent structure that guides how features are implemented
- Hide complexity behind stable abstractions
Both goals help Copilot produce cleaner and more predictable code. A well-defined API gives it one correct way of completing a task. And a coding environment with fewer choices tends to reduce mistakes.
One rule remains critical: AI should not modify the infrastructure itself.
That foundation defines the architecture. It sets the boundaries, maintains Clean Architecture principles, and keeps the system stable. Copilot should operate on top of it, not change it.
Copilot for Feature Implementation
Feature code sits above the Application Infrastructure and depends on it. This is where Copilot shines. Most systems rely on a limited number of core use cases, and many features are just variations of those patterns. With clear constraints in place, an AI Agent can generate feature logic effectively and safely.
Why offload feature code to Copilot?
- Feature code changes more often than the infrastructure. Using AI here reduces the cost of change.
- Rich context improves accuracy. Copilot can use user stories, acceptance criteria, and examples to guide generation.
- The layering enforces separation. Even if Copilot writes imperfect code, other modules remain unaffected.
- Feature code is less critical. Reliability and cross-cutting concerns are already handled elsewhere.
- Each feature has a bounded context. Rules, abstractions, and local conventions prevent architectural drift.
This combination—strong boundaries and flexible feature logic—creates a productive environment where AI can accelerate development without compromising design quality.
Experiment on Workshop Labs
In my Application Infrastructure for Clean Architecture workshop, participants work through eight labs designed to show how the structure comes together. The same environment is a great testbed for AI.
The workshop repo includes foundational components such as AppBoot and DataAccess. They are production-ready building blocks that anyone can adopt, adapt, and test in their own projects.
For this experiment, I asked Copilot to complete the labs—or heavily assist in them.
The setup:
- Visual Studio 2026
- .NET 10
- Agent Mode enabled
- Models used: GPT-5 mini and Claude Sonnet 4.5
- One shared `copilot-instructions.md` file
Experiment Results
What Copilot handled well
- Understood and respected the dependency rules defined in the infrastructure
- Followed the structure and boundaries described in the instruction file
- Performed well on repetitive or verbose tasks, such as user interaction or logging
- Created new modules correctly when examples were available
- Applied Dependency Inversion and layering principles correctly
Where Copilot struggled
- Data access code required corrections—extra round trips or misuse of abstractions
- DTO generation occasionally hallucinated
- Did not improve or extend the infrastructure (as expected)
- Sometimes produced overly verbose code, easy to clean up but not ideal
Overall, Copilot performed best when the task lived inside clear boundaries. Predictability came from the architecture—not the model.
Details
Instruction File
The experiment started by generating a copilot-instructions.md file.
There are more advanced ways to approach this—such as model-specific variants with an index—but I used a simpler method to move fast.
I generated a first draft with ChatGPT-5.1, providing it with my training materials and pointing it to the workshop repo. It produced a solid outline that captured the core ideas of the infrastructure. I refined it further to match the Labs repo and the demo Application Infrastructure.
This file became the backbone of the experiment, guiding both models through the architectural rules and conventions.
Some sections to highlight:
1) Folder & Layering Rules (must follow)
repo-root/
├─ Infra/ # Application Infrastructure (Application Framework, DataAccess, Logging, Messaging etc)
│ ├─ AppBoot/ # dependency injection, modules composition, app startup, plugins dynamic load
│ ├─ AppBoot.UnitTests/ # Unit tests for AppBoot
│ ├─ DataAccess/ # Hides EF Core, IRepository and IUnitOfWork implementations
├─ Modules/ # Functionalities grouped by domain (Sales, Notifications, Export etc).
│ ├─ Contracts/ # Contracts shared between modules (e.g., Events, Messages, DTOs). No logic here!
│ ├─ Sales/ # Sales module (example)
│ │ ├─ Sales.Services/ # Use-cases implementations, domain services.
│ │ ├─ Sales.DataModel/ # [Example] Entities, DTOs mapped to DB tables. No logic here! (no if, while, logical expressions etc.). NO reference to EF Core!
│ │ ├─ Sales.DbContext/ # [Example] EF DbContext for Sales module
│ │ └─ Sales.Console/ # [Optional] Console UI commands specific to sales module.
│ └─ Notifications/ # Notifications module (example)
│ └─ Notifications.Services/ # Use-cases implementations, domain services.
└─ UI/ # User Interface layer / Clients
└─ ConsoleUi/ # Console application (CLI)
I think this helped a lot, because I didn’t get any misplaced files.
2) Dependency boundaries
- `Infra/*` → implements ports for DB, messaging, HTTP, files, dynamic load of plugins; registers via DI; no domain logic.
- `Modules/Contracts` → **no** references to anything. Only pure DTOs and interfaces. No logic
- `Modules/*` → **no** references to other modules. Only references to **Contracts** and **Infra**.
- `Modules/*/*.DataModel` → **no** logic; only entities/DTOs; no references to EF Core or other frameworks.
- `Modules/*/*.Services` → references **Contracts** and **DataModel**; NO references to EF Core or other frameworks. Contains domain logic and use-cases.
- `UI/*` → references **Modules/Contracts** and **Infra**; NO references to **Modules/*/Services**. No domain logic.
> **Copilot:** If a change violates these rules, raise an error instead of making the change.
The cheaper models like “GPT-5 mini” did not raise any errors, but they still followed the rules.
The more advanced models like “Claude Sonnet 4.5” refused to make changes that would violate them.
3) Registering in DI
- Use `ServiceAttribute` from `Infra/AppBoot` to register services in DI.
- The `ServiceAttribute` decorates the implementation class, specifying the service lifetime and the interface to register.
- Register only interfaces, not concrete classes.
- Example of the `PriceCalculator` class registered as the implementation of the `IPriceCalculator` interface:
```csharp
[Service(typeof(IPriceCalculator), ServiceLifetime.Transient)]
class PriceCalculator : IPriceCalculator
{
    public decimal CalculateTaxes(OrderRequest o, Customer c)
    {
        // ... tax calculation ...
        return 0m; // placeholder so the example compiles
    }
}
```
This made a big difference. Copilot, even with the cheaper models, consistently registered services correctly.
5) AppBoot Plugin Model
- AppBoot supports dynamic loading of modules as plugins at runtime.
- In `Program.cs` where AppBoot is configured, use `.AddPlugin()` to specify the modules that should be loaded as plugins.
- Load as plugins all assemblies that are not referenced by any other assembly at compile time (i.e., have no incoming `ProjectReference` in the solution).
- Each call to `.AddPlugin()` creates a LoadContext isolated for that module assembly; dependent assemblies passed in the dependency array are loaded into the same LoadContext.
- `AddPlugin()` accepts module names, which are built by convention as `{ModuleName}.{AssemblySuffix}`.
- The `ModuleName` corresponds to the folder name under `Modules/` (e.g., `Sales`, `Notifications`).
- The `AssemblySuffix` is the assembly name without the module name prefix (e.g., `Services`, `DbContext`).
- Assemblies are named by convention as `{ModuleName}.{AssemblySuffix}` (e.g., `Sales.Services`, `Notifications.Services`, `Sales.DbContext`).
- When a module has dependent assemblies that are not referenced by the assembly that gives the plugin name, specify their names in the `.AddPlugin()` dependency parameter.
- Example: `.AddPlugin("Sales.Services", new[] { "Sales.DbContext" })` — each string is a simple module name (not a file path or DLL filename).
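To make the convention concrete, here is a minimal `Program.cs` sketch. Only `AddPlugin(name, dependencies)` and its arguments come from the rules above; the `AppBootBuilder` type name and the `Build()`/`Run()` calls are assumptions for illustration.

```csharp
// Sketch only — AppBootBuilder, Build() and Run() are assumed names;
// only AddPlugin(name, dependencies) is taken from the AppBoot rules above.
var app = new AppBootBuilder()
    // "Sales.Services" gives the plugin its name; Sales.DbContext has no
    // incoming ProjectReference, so it is listed as a dependency and loaded
    // into the same isolated LoadContext.
    .AddPlugin("Sales.Services", new[] { "Sales.DbContext" })
    // Notifications.Services has no extra dependent assemblies.
    .AddPlugin("Notifications.Services")
    .Build();

app.Run();
```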
This was handled well by the advanced models, but not always by the cheaper ones. Still, this is part of setting up new projects, which wasn’t the focus of my experiment. And it’s easy enough to do manually.
6) Build for Dev/Debug
- Some plugin assemblies are not referenced by the host or by other projects. These assemblies are loaded dynamically at runtime and must be built for Dev/Debug.
- Ensure those assemblies are included in the Visual Studio build by adding them as build dependencies of the host or plugin root project using the __Project Build Dependencies__ feature in the solution.
- Steps: right‑click the solution → choose __Project Build Dependencies__ → select the dependent projects (for example, add plugin projects as dependencies of `UI/ConsoleUi` or the plugin root).
- The selection is saved in the `.sln` file and is not part of individual project files.
- If a plugin has additional assemblies that are not directly referenced, add those dependent projects as build dependencies of the plugin root project as well.
- Example: `Sales.DbContext` is a dependency of the `Sales.Services` plugin; add `Sales.DbContext` as a build dependency of the `Sales.Services` project so both are built in Debug.
Copilot did not handle this. It couldn’t edit the solution file to set up project build dependencies, nor could it add new projects.
Again, not a real issue, since this is about setting up the structure, which wasn’t the focus of the experiment anyway.
7) Data & Persistence
- `Infra/DataAccess` abstractions only, like `IRepository` or `IUnitOfWork`. Do not use EF Core directly. Do not take hard dependencies on EF Core.
- Use `IRepository` for read-only cases; get the `IRepository` via DI.
- Use `IUnitOfWork` for transactional operations; get the `IUnitOfWork` via a factory function (`IRepository.CreateUnitOfWork`).
In general, Copilot didn’t do a great job with the DataAccess layer. It needed simple but important fixes.
The instructions did help the more advanced models get it right more often.
I might refine this section further and include a few code examples. Those samples are already part of the course, so adding them here would be easy and helpful.
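As a minimal sketch of what such a sample could look like: it assumes the workshop's DataAccess shapes. Apart from `IRepository`, `IUnitOfWork`, and `CreateUnitOfWork` (named in the rules above), the member names (`GetEntities`, `SaveChanges`) and the `Order` type are assumptions for illustration.

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch only — IRepository/IUnitOfWork come from Infra/DataAccess;
// GetEntities and SaveChanges are assumed member names.
public class OrderService
{
    private readonly IRepository repository; // resolved via DI

    public OrderService(IRepository repository) => this.repository = repository;

    // Read-only: query through IRepository directly, no unit of work needed.
    public List<Order> GetOpenOrders()
        => repository.GetEntities<Order>().Where(o => o.IsOpen).ToList();

    // Transactional: one IUnitOfWork from the factory, one SaveChanges call.
    public void CloseOrder(int orderId)
    {
        using IUnitOfWork uow = repository.CreateUnitOfWork();
        Order order = uow.GetEntities<Order>().Single(o => o.Id == orderId);
        order.IsOpen = false;
        uow.SaveChanges(); // a single round trip — extra ones were Copilot's typical mistake
    }
}
```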
8) Console UI
- Host project is `UI/ConsoleUi/`.
- Each module has its own subfolder under `Modules/` for console commands (e.g., `Modules/Sales/Console/`).
- The modules do not depend directly on `UI/ConsoleUi/`; instead, commands implement interfaces defined in `Modules/Contracts/Console/`.
This helped a lot. All features accessed through the console (we’re building a CLI) were placed correctly in the corresponding Console project, which only depends on the Contracts.
The structure was followed consistently.
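As a sketch, the convention looks roughly like this. `IConsoleCommand` is the interface named in the labs; its members and the command class below are assumptions for illustration.

```csharp
// In Modules/Contracts/Console — shared contract, no logic.
// The interface members shown here are assumed, not the course's exact shape.
public interface IConsoleCommand
{
    string Name { get; }
    void Execute(string[] args);
}

// In Modules/Sales/Sales.Console — references Contracts only, never UI/ConsoleUi.
[Service(typeof(IConsoleCommand), ServiceLifetime.Transient)]
public class ListOrdersCommand : IConsoleCommand
{
    public string Name => "sales:list-orders";

    public void Execute(string[] args)
    {
        // delegate to Sales.Services use-cases here
    }
}
```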
9) Files Copilot Must Not Modify
- Any file under `Infra/**`
- Any file under `*/DbContext`
- Any `*.csproj` file
> **Copilot:** If a change is requested in these paths, reply with an alternative that keeps generated/third-party code intact (e.g., partial class, extension method, adapter).
The advanced models took this very seriously. Claude refused to touch these files and explicitly pointed back to rule nine as the reason. It generated the code in the chat window instead and told me to apply it manually.
I appreciated that behavior.
The Labs
You can find the lab instructions in the repo at this path: .Practice/readme.md
The instructions are structured and clear. They’re written for developers attending the workshop, and they work well for an AI agent too.
My approach was simple. I prompted Copilot with the description of each lab, one at a time. At the second prompt, I added more details and clarified what needed to be done. Then I asked for a detailed plan. Only after that did I ask it to implement the code.
Afterwards, I reviewed the generated code and applied fixes where they were quick. In most cases, the fixes were straightforward.
Lab 1 - Notify IsAlive for the Sales Module
The first lab is straightforward. It asks you to replicate an existing mechanism in a new module.
Besides the written instructions, I also provided Copilot with the existing implementation to give it more context. I treated this as a small upfront investment to help it understand the pattern.
I used the cheap model for this one, and it handled it well.
Lab 2 - Refactor the Console Application
This lab required some restructuring. The goal was to use the infrastructure mechanisms to achieve low coupling between the UI and each feature implementation.
The cheap model struggled here. It didn’t respect the dependency rules and introduced incorrect references.
I had to refine the prompt several times and revert some of its changes. It didn’t feel like an efficient workflow.
When I switched to the advanced model, things improved a lot.
Copilot was very useful for building a clean CLI. This part involved some verbosity and user interaction code, and the model handled it well.
Lab 3 - Create a Composite Console Application
This lab focused on using the Composite pattern support built into the infrastructure.
The goal was to discover all implementations of the IConsoleCommand interface across all modules and build a CLI from them.
Copilot made this lab very efficient.
I only needed to break the lab into smaller steps, validate each step, and guide it in the right direction.
The only downside was the speed. It often took between thirty and seventy seconds to complete a task, which broke my workflow and focus.
One thing I started to appreciate about Agent Mode is that it builds the solution after each step. If the build fails, it tries to fix the issue on its own. In most cases, it solved it after one or two attempts. A solid plus for autonomy.
Still, the result was worth it. It generated a nicer CLI than the one I would have had the patience to write by hand.
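A minimal sketch of the Composite idea, assuming commands are registered in DI (via `ServiceAttribute`, as above) and the host resolves them as a collection; the dispatcher class and the `IConsoleCommand` members it relies on are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: the host asks DI for every IConsoleCommand implementation
// across all loaded modules and dispatches by command name.
public class CompositeConsole
{
    private readonly Dictionary<string, IConsoleCommand> commands;

    // DI injects all registered IConsoleCommand implementations here.
    public CompositeConsole(IEnumerable<IConsoleCommand> commands)
        => this.commands = commands.ToDictionary(c => c.Name);

    public void Run(string name, string[] args)
    {
        if (commands.TryGetValue(name, out var command))
            command.Execute(args);
        else
            Console.WriteLine($"Unknown command: {name}");
    }
}
```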
Lab 4 - DataAccess in Sales Module
This lab focused on using the DataAccess component from the infrastructure.
Copilot handled the read-only part well, but it made several mistakes when adding or changing data. The code worked, but it was far from optimal. There were unnecessary round trips to the database and incorrect use of the IRepository and IUnitOfWork abstractions.
The fixes were easy, but they required a careful review.
Labs 5 and 6 - DataAccess Interceptors
These labs focused on using the interceptors provided by the DataAccess component.
Copilot had a good grasp of the concept and the structure. It created the right files, placed them in the correct locations, and set up the dependencies properly.
It was also very helpful with the repetitive work of making all the DTOs implement the IAuditable interface.
Overall, a great help.
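The repetitive part looks roughly like this; the `IAuditable` members shown are typical audit fields and an assumption, not necessarily the course's exact definition.

```csharp
using System;

// Assumed shape of IAuditable — typical audit fields for illustration.
public interface IAuditable
{
    DateTime CreatedOn { get; set; }
    string CreatedBy { get; set; }
    DateTime? ModifiedOn { get; set; }
    string ModifiedBy { get; set; }
}

// Each DTO in *.DataModel just gains the properties — still no logic,
// as the layering rules require. The interceptor fills them in on save.
public class Order : IAuditable
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
    public string CreatedBy { get; set; }
    public DateTime? ModifiedOn { get; set; }
    public string ModifiedBy { get; set; }
}
```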
Labs 7 and 8 - Adding new Modules
These last two labs test and recap the understanding of the structure created by the App Infra.
They ask you to add new modules that do DataAccess and have a UI. They also make you use services from one module in another, to prove the loose coupling between modules.
Again, Copilot was a great help. By this point I was using only the advanced model, and I had also developed a good sense of how to build prompts more efficiently.
I got into a workflow of reviewing files while others were being generated. This helped not only with speed, but also with staying in the flow.
Copilot proved it had a good understanding of the structure, the low-coupling principles, and the dependency rules.
I appreciated summaries like:
Verify Solution Structure
After adding, your solution structure should look like:
AppInfraDemo
├── UI
│ └── ConsoleUi
├── Modules
│ ├── Contracts
│ ├── Sales
│ │ ├── Sales.DataModel
│ │ ├── Sales.DbContext
│ │ ├── Sales.Services
│ │ └── Sales.ConsoleCommands
│ ├── Notifications
│ │ └── Notifications.Services
│ ├── Export
│ │ ├── Export.DataModel
│ │ └── Export.Services
│ └── ProductsManagement ← NEW
│ ├── Products.DataModel ← NEW
│ ├── Products.DbContext ← NEW
│ ├── ProductsManagement.Services ← NEW
│ └── ProductsManagement.ConsoleCommands ← NEW
└── Infra
├── AppBoot
├── AppBoot.UnitTests
└── DataAccess
Or
Architecture Highlights:
Cross-Module Communication:
```
Sales.Services
↓ (depends on interface)
IPersonService (in Contracts)
↑ (implemented by)
PersonsManagement.Services
```
These summaries show that it has a good understanding of the structure, and they give confidence that the changes will be right.
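In code, that dependency direction looks roughly like this; the `IPersonService` member and the consuming class are assumptions for illustration.

```csharp
// In Modules/Contracts — the only thing both modules see.
// The member shown is an assumed example.
public interface IPersonService
{
    string GetPersonName(int personId);
}

// In Sales.Services — references Contracts, never PersonsManagement.Services.
public class OrderReportService
{
    private readonly IPersonService persons; // implementation injected via DI

    public OrderReportService(IPersonService persons) => this.persons = persons;

    public string DescribeOwner(int personId)
        => $"Owned by {persons.GetPersonName(personId)}";
}
```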
Conclusion
This experiment reinforced a key principle in Code Design: an AI Agent becomes effective when it operates inside a clear structure. When the architecture has well-defined boundaries, stable abstractions, and predictable patterns, Copilot can generate code that aligns with the design instead of fighting it. The Application Infrastructure provides the guardrails, and the AI simply follows them.
Strong conventions reduce ambiguity. Clean Architecture reduces decision noise. Together they create an environment where AI coding becomes reliable rather than accidental. The agent does not need to “understand” the whole system — it only needs to work within the rules. And when those rules are sharp and consistent, both the developer and the AI move faster with far fewer mistakes.
In short, structure amplifies the strengths of AI. A disciplined foundation makes Copilot more accurate, more predictable, and more useful. And that combination points toward a future where Agentic AI and good Software Design reinforce each other, rather than compete.
Drawing from our extensive project experience, we develop training programs that enhance predictability and reduce the cost of change in software projects.
We focus on building the habits that help developers adopt industry best practices, resulting in a flexible code design.