Have I Finally Moved Away From Visual Studio?
Dev Leader Weekly 134
TL;DR:
– Copilot CLI finally pulled me from Visual Studio
– Cross-repo AI customization still feels like an unsolved problem
– Roslyn analyzers are surprisingly effective AI guardrails
– Join me for the live stream (or watch the recording) on Monday, April 7 at 7:00 PM Pacific!
Have I Finally Moved Away From Visual Studio?
I have been a Visual Studio person my entire career. Not VS Code -- the “real” Visual Studio. The classic one. The big, beautiful, opinionated IDE that .NET developers have lived in for decades. I’m a UI guy. I like a visual git tree. I like clicking things. That’s just how my brain works, and I’ve never apologized for it.
So it is genuinely strange for me to say: I’m barely opening Visual Studio anymore.
You can check out my full thoughts on this in the video below:
Why Copilot CLI Clicked For Me
Over the past year I’ve tried a lot of things -- VS Code, Cursor, Claude Desktop, GitHub Copilot with cloud agents. And I want to be really clear here: I’m not saying Copilot CLI is better than the others, or that what worked for me will work for you. What I can say is that Copilot CLI is the tool I’m glued to right now, and I think I’ve finally figured out why.
The cloud agent workflow I was using heavily before let me queue up work and check on it whenever. Truly async. That was a game-changer for parallel productivity. But sitting and watching a local agent chew through things on my computer while I wait? That pattern never really clicked for me. Copilot CLI hit a different sweet spot -- it brought the productivity I wanted back to the terminal context in a way that finally matched how I think.
Honestly, it might just be timing. The tool has matured, the integrations around it have improved, and it caught me at the right moment. That’s a real thing. Don’t dismiss it.
If you’re curious how Copilot as a platform compares to other frameworks like Semantic Kernel for building your own AI applications in .NET, I went deep on that comparison recently -- check out GitHub Copilot SDK vs Semantic Kernel: When to Use Each in C#.
The Cross-Repo Problem Doesn’t Feel Fully Solved Yet
Here’s where things get philosophically tricky for me. As I’ve leaned harder into Copilot CLI, I’ve been building up a decent library of skills, MCP servers, and custom agent definitions. And they’re useful -- genuinely useful. A skill that remembers how I like to structure tests, an MCP server that talks to a specific API, an AGENTS.md file with project conventions. All good things.
But I work across multiple repos. I have a social media scheduler, my blog, a dependency injection library, some MCP helpers, and a vibe-coded game framework I’m building purely for the learning experiment. These projects are completely different. And yet my guiding principles are almost entirely consistent across all of them -- I want composition over inheritance, I want certain naming conventions, I want a specific approach to async code.
Right now those cross-cutting principles live in project-specific files. That’s not right. I’m duplicating intent in every repo.
GitHub Copilot does support repository-wide custom instructions via .github/copilot-instructions.md, path-specific instruction files in .github/instructions/, and AGENTS.md files anywhere in the repo tree -- and VS Code is moving toward an agent plugin marketplace that can bundle customizations into installable plugins. That’s directionally right.
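As a concrete illustration, here’s one way those supported locations can fit together in a single repo -- the `.github` paths are the ones Copilot documents, while the specific file names for the path-specific rules are my own example:

```
my-repo/
├── AGENTS.md                        # project conventions, readable by any agent
└── .github/
    ├── copilot-instructions.md      # repo-wide custom instructions
    └── instructions/
        └── tests.instructions.md    # path-specific rules (e.g. applyTo: "**/*Tests.cs")
```

The path-specific files use frontmatter globs to scope themselves, so test conventions only load when the agent is touching test files.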
But right now it still feels clunky. There’s no clean answer for “here are my global developer principles that apply everywhere I work, regardless of repo.” I’m starting to move my reusable skills into a shared repository and pull them in as a plugin -- but we’re still early, and the tooling around this is evolving fast.
Actionable Tip: Start separating your Copilot customizations into two buckets right now: repo-specific (your architecture decisions, your test patterns, your naming conventions for this project) and cross-cutting (your general coding principles, your preferred patterns, your communication style with the AI). Even if the tooling to share the cross-cutting ones isn’t perfect yet, having the separation in your head will make it much easier to migrate when the ecosystem matures.
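One hedged sketch of what that split can look like on disk -- the shared repo name and file names here are hypothetical, not a prescribed layout:

```
# Repo-specific: lives inside each project
.github/copilot-instructions.md    # this project's architecture, test patterns, naming

# Cross-cutting: lives in one shared repo, pulled in later as tooling matures
dev-principles/
├── coding-principles.md           # composition over inheritance, async conventions
└── communication.md               # how I want the AI to report and ask questions
```

Even a rough split like this makes the eventual migration a copy, not a rewrite.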
Build a Habit of Turning Wins Into Reusable Skills
One thing I’ve been deliberately trying to do: every time I finish a session that worked well with Copilot, I stop and ask myself -- “Am I going to want to do this again?”
If the answer is even maybe, I ask Copilot to turn the approach into a skill.
It’s not about capturing everything. Most things aren’t worth it. But when I grind through something painful and finally get to a clean result -- a refactoring approach that worked, an analysis workflow that surfaced useful insights, a code generation pattern that saved me hours -- that’s exactly the kind of thing I want to be able to invoke next time without rediscovering the process.
The pattern I ran into recently is a good example. I had Copilot attempt a sweeping test refactor across a codebase with some concurrency problems. It didn’t solve the problem -- I honestly didn’t expect it to. But by the end I had a clear list of what didn’t work, which let me identify the real root cause and design a more targeted incremental approach.
That incremental approach is something I want to formalize. When I go to do a sweeping refactor again, I want a skill that structures the approach: small changes, consistent reporting, flag outliers, don’t try to boil the ocean in one shot. I haven’t nailed that skill yet, but that’s the kind of meta-workflow capture I’m working toward.
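To make that concrete, here’s a hypothetical skill definition for that incremental-refactor workflow. The frontmatter follows the common SKILL.md convention; the name and content are a sketch of where I’m heading, not a finished skill:

```
---
name: incremental-refactor
description: Apply sweeping refactors as small, verifiable steps
---

When asked to perform a large refactor:
1. Propose the smallest self-contained change first and apply only that.
2. Build and run the tests after every step; report results before continuing.
3. Flag files that don't fit the pattern as outliers instead of forcing them.
4. Never attempt the whole refactor in a single pass.
```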
Roslyn Analyzers Are Surprisingly Good AI Guardrails
Here’s the most genuinely useful thing I’ve discovered in this vibe coding experiment: when AI keeps making the same mistake, the right fix is a Roslyn analyzer, not more prompting.
The game framework project is intentionally hands-off for me -- I’m letting AI build the whole thing to learn about architectural drift and where guardrails matter. And it ran into two serial offenders that it could not stop doing on its own.
Mistake 1: DTOs with multiple constructors. The AI was building data transfer objects with three or four constructors. The JSON serializer would try to guess which one to use and fail -- every single time. Hours of spinning on this. Once I saw the pattern, I did two things: created a Roslyn analyzer that makes it a compile error to have a DTO with multiple constructors, then fixed all the existing violations. Problem did not come back.
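To illustrate the failure mode with a minimal sketch -- the DTO here is hypothetical, not from the project -- System.Text.Json refuses to deserialize a type that has multiple parameterized constructors unless exactly one is explicitly marked:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical DTO with two constructors -- the shape that kept tripping the AI.
public class PlayerDto
{
    public string Name { get; }
    public int Score { get; }

    public PlayerDto(string name) : this(name, 0) { }

    // Without this attribute (and with no parameterless constructor),
    // JsonSerializer.Deserialize throws NotSupportedException because it
    // cannot pick between the two constructors.
    [JsonConstructor]
    public PlayerDto(string name, int score)
    {
        Name = name;
        Score = score;
    }
}

public static class Demo
{
    public static PlayerDto RoundTrip()
    {
        var json = JsonSerializer.Serialize(new PlayerDto("Ada", 42));
        return JsonSerializer.Deserialize<PlayerDto>(json)!;
    }
}
```

The analyzer approach goes one step further: instead of patching each DTO with an attribute, it makes the ambiguous shape impossible to write in the first place.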
Mistake 2: Wrong collection injection type. The AI kept debating with itself -- in the chat window, while thinking -- about whether it could use IEnumerable<T> for dependency injection or not. It can. But it would sometimes try IList<T>, sometimes IReadOnlyCollection<T>, and sometimes just get confused and do something broken. So I put a Roslyn analyzer in place: always use IReadOnlyCollection<T> for injected collections. Anything else is a compile error.
The pattern is: identify the class of mistake, make it a compile-time impossibility, then let Copilot continue knowing that failure mode is structurally ruled out. It’s not just about AI -- this is good software practice in general. The Microsoft Roslyn SDK makes writing these analyzers more approachable than you might think -- you can author a diagnostic that fires in the editor and a code fix that resolves it, all in C#.
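As a rough sketch of what the multiple-constructor rule can look like as an analyzer -- this assumes the Microsoft.CodeAnalysis NuGet packages and a hypothetical "Dto" name-suffix convention for spotting DTOs; the real detection logic in a given project may differ:

```csharp
using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class SingleDtoConstructorAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "GF0001",
        title: "DTOs must have a single constructor",
        messageFormat: "DTO '{0}' declares {1} constructors; keep exactly one so JSON deserialization is unambiguous",
        category: "Design",
        defaultSeverity: DiagnosticSeverity.Error,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSymbolAction(AnalyzeType, SymbolKind.NamedType);
    }

    private static void AnalyzeType(SymbolAnalysisContext context)
    {
        var type = (INamedTypeSymbol)context.Symbol;

        // Hypothetical convention: DTOs are identified by a "Dto" suffix.
        if (!type.Name.EndsWith("Dto")) return;

        var ctors = type.InstanceConstructors
            .Where(c => !c.IsImplicitlyDeclared)
            .ToList();
        if (ctors.Count <= 1) return;

        // Report on the second constructor -- the first one is allowed.
        context.ReportDiagnostic(Diagnostic.Create(
            Rule, ctors[1].Locations[0], type.Name, ctors.Count));
    }
}
```

Because the severity is Error, the AI can’t ship the mistake even when it forgets the rule -- the build itself is the guardrail.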
For a deeper look at how source generators and Roslyn-based code generation work in practice, check out Real-World C# Source Generator Examples: ToString, Mapping, and Serialization and C# Source Generator Attributes: Generating Code with ForAttributeWithMetadataName -- same Roslyn SDK, different angle.
Actionable Tip: When an AI agent makes the same mistake three or more times in a session, that’s a signal. Don’t write a longer prompt. Write an analyzer. It permanently encodes the constraint in the codebase and removes the failure mode from the AI’s decision space entirely.
Managing Multiple Parallel Sessions Without Losing Your Mind
At any given time I have somewhere between three and eight Copilot sessions running across different projects. That’s not a flex -- it’s kind of a mess, honestly, and I’m not sure my brain is properly equipped for it long-term.
The context switching is real. Checking in on each session, figuring out where it is in its current task, deciding whether to redirect it, queue the next instruction, and move on to the next tab -- it’s a new kind of cognitive load. The productivity feels genuinely high. The quality is a more honest question.
The vibe-coded game framework is a perfect example of where that quality question bites you. I let it run far enough that it had the client and server so tightly coupled that when I asked it to split out a visual debug layer, it spent hours untangling something that should have been a clean separation from day one. Vibe coding will give you working code faster than you can imagine -- it will also give you architectural debt faster than you can imagine. That tradeoff is real and you need to be eyes-open about it.
For complex multi-agent patterns where you actually want structured, controlled AI orchestration -- not just tabs of chaos -- I wrote about exactly that recently: Build a Multi-Agent Analysis System with GitHub Copilot SDK in C# walks through building a real pipeline where agents have defined roles and don’t just wander off in random directions.
Actionable Tip: Before letting a session run for more than a few hours of iteration, define the intended layering of your code explicitly -- in a comment, in a skill, in AGENTS.md, somewhere. Once coupling creeps in across layers, the AI will replicate that coupling endlessly. The cost of fixing it later is far higher than the cost of naming it upfront.
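For example, a few lines in AGENTS.md are enough to name the boundary before a session starts -- this is hypothetical content for a project shaped like the game framework, not the file I actually use:

```
## Architecture boundaries

- Client and server communicate only through the shared Contracts project.
- Rendering and debug layers depend on game state; game state never depends on them.
- If a change would add a reference across these boundaries, flag it instead of making it.
```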
Join me and other software engineers in the private Discord community!
Remember to check out my courses, including this awesome discounted bundle for C# developers:
As always, thanks so much for your support! I hope you enjoyed this issue, and I’ll see you next week.
Nick “Dev Leader” Cosentino
social@devleader.ca
Socials:
– Blog
– Dev Leader YouTube
– Follow on LinkedIn
– Dev Leader Instagram
P.S. If you enjoyed this newsletter, consider sharing it with your fellow developers!