Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would add to this that the current “AI” is absolutely not delivering on its promises. Many companies have even gone so far as to make it mandatory for it to be used as part of the SDLC, and they are now having to deal with “AI slop” in their code bases. The LLMs can generate program code that kinda _looks_ good, but isn’t efficient or in some cases doesn’t work at all. As another example, the chances are close to 100% that if you are reading this, you have read a news article written entirely by software. This was already happening before LLMs, and it has become much more common now. You’ll notice small grammatical errors or just strange stylistic differences that set it apart from human writing. Again, it’s more slop. I don’t think I even need to go into image generation. It’s a bubble, and at some point it’s gonna pop.
Source: reddit · Topic: AI Moral Status · Timestamp: 1746980155.0 · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mrs4nif", "responsibility": "company", "reasoning": "mixed",            "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_mrrp77i", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",    "emotion": "outrage"},
  {"id": "rdc_mrru86q", "responsibility": "company", "reasoning": "deontological",    "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mrua774", "responsibility": "unclear", "reasoning": "mixed",            "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mrrbwau", "responsibility": "company", "reasoning": "virtue",           "policy": "unclear", "emotion": "outrage"}
]
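The raw LLM response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of pulling the coded dimensions for one comment out of such a response (assuming the array shape shown above; the parsing helper `coding_for` is a hypothetical name, not part of the pipeline):

```python
import json

# Abbreviated copy of the raw LLM response above (two of the five objects).
raw_response = """
[ {"id":"rdc_mrs4nif","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"rdc_mrrp77i","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]
"""

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for a single comment id.

    Raises StopIteration if the id is absent from the response.
    """
    return next(row for row in json.loads(raw) if row["id"] == comment_id)

coding = coding_for(raw_response, "rdc_mrrp77i")
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# prints: company consequentialist outrage
```

This matches the "Coding Result" table above: the object for `rdc_mrrp77i` carries the responsibility, reasoning, policy, and emotion values that were recorded for this comment.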