When You Give a Manager a Chatbot
Observations as an Engineer
LLMs are wonderful and wonderfully awful. In the right hands, they can compress most “research” and “prototyping” tasks down to almost nothing. In the wrong hands, however, a 30-second generation of throwaway code can lead to a multi-month, bug-filled sprint toward the garbage bin.
With all of the hype and constant badgering we get from the AI companies to use this IDE, or this browser plugin, or this LLM, or this image generator, it was only a matter of time before middle management in Corporate America discovered its “usefulness”. From flailing art directors quickly mocking up logos that rip off well-known brands, to engineering managers who can neither engineer nor manage using ChatGPT’s sycophantic responses to rearchitect entire platforms on a whim, LLMs have come for the sanity of every mild-mannered software developer whether we like it or not.
Which brings me to my manager.
What makes a bad manager?
Bad managers have trouble trusting their employees. A lot of them in corporate America failed up into management from IC roles without realizing it was actually a demotion for them. By ICs I don’t just mean code monkeys, but anyone contributing to the product pipeline: sales, business analysis, security, infrastructure, AR/AP, and of course engineering, among many others.
But let me now focus on engineering-specific managers, with whom I’m most familiar.
Managers who weren’t good engineers think all engineers are inherently bad engineers (or at least worse than them), and that they must micromanage them to greatness, because they feel they are great engineers (why else would they have been promoted?). Particularly bad managers might boast about what great engineers they were when they still wrote code, how few bugs they ever produced (read: how few were caught while they were still ICs), and how they single-handedly built everything you see (even though 15+ years of development has led to an App of Theseus).
They also tend to use a lot of “I” statements with customers or higher-ups when discussing good things, and “they” statements when things aren’t as rosy. “I built a new authentication scheme,” but “They didn’t do it correctly, and now I have to do it for them.”
Bad managers hear things like “pair programming” or “code review” and think, “An engineer can watch me copy and paste code from StackOverflow,” or “I’ll spend 4 hours talking to the engineer about code that isn’t relevant to the PR they submitted, and then apologize because I’m too busy to finish the code review after wasting an entire afternoon.”
Bad managers manage badly.
The wonderment of LLMs for Managers
I remember when a particular manager learned about Claude. The year was: 3-months ago (Summer 2025). He said that his phone kept prompting him to do AI, and he kept dismissing it. I told him that was a smart thing to do. He then asked me if I used AI.
I was honest: I use it, and I use it a lot.
I run local LLMs on my decommissioned crypto miner turned Ollama server (it had a couple of other identities before that: render server, Unreal Engine dev machine, and NAS). It boasts 2x 3090 FE + 2x 3080 Ti, so there’s plenty of VRAM for smaller models on the smaller cards, and splitting a bigger model across the larger cards for slower, but better, responses makes for a nicely balanced local chatbot. I also have a $100/mo Claude subscription, $60/mo for Cursor, and $120/mo for ChatGPT (though $20 of that is for an employee of mine).
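For anyone curious how that card-splitting setup works in practice, here’s a rough sketch. The GPU indices and port number are assumptions about my particular box, not something you should copy verbatim; the general trick is just `CUDA_VISIBLE_DEVICES` to pin each Ollama instance to a subset of cards, and `OLLAMA_HOST` to keep the two instances from fighting over the same port.

```shell
# Hypothetical layout: GPUs 0-1 are the 3090s, GPUs 2-3 are the 3080 Tis.

# Instance 1: small, fast models on the 3080 Tis, default port (11434)
CUDA_VISIBLE_DEVICES=2,3 ollama serve &

# Instance 2: a bigger model split across both 3090s, on a second port
CUDA_VISIBLE_DEVICES=0,1 OLLAMA_HOST=127.0.0.1:11435 ollama serve &
```

Point your chat frontend at whichever port fits the job: the first instance for quick throwaway questions, the second when you want the slower-but-smarter answer.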
He asked if he should pay for it, and I was honest with him again: “NO!”
I thought that was the end of it. Then a week later, I logged onto my work PC and had a barrage of Teams messages and emails with zip files, code snippets, and random long-winded paragraphs of word soup, from a person who didn’t usually type more than 4 words per message.
After reading through everything, I realized he had asked Claude to implement his idea for a customer’s feature request, and the various emails were new versions of the code. Each of them was a completely different code base, because my manager didn’t understand context windows, or that chats don’t necessarily have knowledge of other chats. So when I finally tested the latest version, it didn’t work, and when I tested a few of the other versions, they also failed to compile due to made-up references, namespaces, and classes that didn’t exist. It was BAD.
My manager didn’t even care that nothing was working, because of how fast the code was being generated. He just kept saying “try again” and eventually it worked… lol – not really, he ran out of free messages for the day.
Bad Manager, meet Bad LLM
After a few weeks of him trying to do my job for me (but using Claude instead of thinking about the problem and writing code to solve it), he told me to use Claude since I pay for it already, and to have “him” write the code. I tried to explain that this was a bad idea, since Claude didn’t know our code at all, and there would be no simple integration for such a complex request. He didn’t trust my caution and insisted on a “pair programming” session over Teams so he could watch and make sure it was Claude writing the code instead of me, the consultant who is paid $150/hr to write code.
He watched my screen for hours a day, for weeks, as Claude failed to deliver any working code, and on top of that, the number of contradictions made both of our heads spin. Claude would add external packages, so I would constantly ask it to remove those libraries, and Claude would attempt to reimplement that logic inside the now multi-thousand-line artifact that kept getting longer and worse. After literal weeks of him not letting me do my job and engineer the solution, we were on the verge of a deadline, and I had time off scheduled. So I took my 4-day weekend, and I wrote the 10 lines of code needed to implement the new feature at 5am on day 1 of my break.
My domain knowledge of the application, my understanding of the requirements, and my ability to actually program in “legacy” allowed me to finish and test my implementation in about an hour. The morning I returned from my vacation, I submitted a PR. He was very impressed by its simplicity. He told me he knew Claude would “get it eventually”.
I said, “Actually, I wrote that in around an hour Thursday morning,” to which he asked, “How do you know it works, then?”
“Because I tested it.” - something Claude chat windows aren’t capable of doing.
And that’s when I realized that within a couple of short weeks, my manager had learned to trust Claude more than any human developer, even though Claude had yet to deliver a working solution to anything. He trusted 1,000 lines of hallucinated code more than 10 lines of hand-written, unit-tested code. He had lost all trust in developers’ ability to code, because he knew he couldn’t do better, and, as I stated at the start, a bad manager thinks they are the best engineer because they got “promoted”.
Where do we go from here?
I’m not sure how many other developers have had this same experience, but I’m at a loss for how to continue. LLMs are wondrous if you know how to use them. Asking a chatbot with zero context on your existing application to write anything more complicated than simple helper methods is about the worst use of LLM-based coding there is.
I’ve contemplated trying to teach him about agents and agentic coding, but I’m scared... not for my job, but for my sanity.
At least for now, I can basically dismiss the code that is copy+pasted into a Teams chat at 11PM as “doesn’t work”. If he ever learns that Claude Code or Cursor can actually “learn” your codebase, and directly change files, I will just retire, because I will not be responsible for the code he generates and asks me to test for him.

