Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I said the same thing to my managers in 2024. They were completely inexperienced with code, and they thought they were doing the right thing when they said they would spend millions on a project to let AI mimic an already-in-production architecture I had built. And it was problem free. I expressed doubt that an AI that could give us code snippets could also replicate (in Python) the tons of Java and SQL code I had written over six months, or learn and reproduce the fairly complex data mart (with two dozen type 2 dimensions and multiple strategically, almost cunningly, designed fact tables). I used code compression techniques to avoid writing straight logic and to speed up machine performance. I told my managers that AI cannot do that. I did not say that out of spite, but out of genuine concern about the amount of money they were ready to spend trying to replicate what was already working ridiculously well - just to prove a point to upper management, while the middle managers driving and cheerleading this project were completely clueless. They said I was insecure as a developer. I responded, “I am being honest, and if I thought this could work, I’d be spearheading this effort instead.” They didn’t like that. They put me on a different team and went ahead with the boondoggle I had gently (and with good intentions) cautioned them not to embark on. Two years later, all three of those middle managers are gone. My code still runs on AWS - and the State of New York depends on my reliably written code and, more importantly, on the architecture and design that made that code resilient, error free, and dependable. There is no replacement for an experienced software architect. AI is definitely useful, though - I get code snippets, and sometimes new ideas for writing code differently.
youtube AI Jobs 2026-02-09T05:4… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgySMLwDc6kfpNvNBvp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwpd-pScUSyqy6OA854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxexEzfpKgnFUiJ5t94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzcYP72NFEUwpBc4Et4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxC2HKgAdTAlK-5eYJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwU63e1ogv1W6efL994AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxPI_ovYZv5p9Ckm-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_FHDYRSDiC5nzNaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxBRyU1Hv8f7M_ja6R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx0ZD62s7RnmWy1bMx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
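The raw response above is a JSON array of coded comments, one record per comment id. As a minimal sketch of how such a response can be inspected programmatically (the variable names and the assumption that the response is valid JSON with no surrounding text are mine, not part of the tool), the array can be parsed and indexed by id:

```python
import json

# Two records copied verbatim from the raw LLM response above; the field
# names match the coding dimensions in the result table (responsibility,
# reasoning, policy, emotion).
raw = """[
  {"id":"ytc_UgySMLwDc6kfpNvNBvp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwpd-pScUSyqy6OA854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

# Index the coded records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for the comment shown above.
rec = codes["ytc_Ugwpd-pScUSyqy6OA854AaABAg"]
print(rec["responsibility"])  # → company
print(rec["emotion"])         # → outrage
```

Note that in a real pipeline the model may wrap the array in extra prose or a code fence, so a production parser would need to extract the JSON span first; the sketch assumes a clean array.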