Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ha, yeah, and the flipside is I've had a couple of occasions where it has spat out some code, I've immediately looked at it and been absolutely certain that it isn't going to work, and that it has misinterpreted what I have asked, so I've gone back to it to try and clarify a couple of things, it apologises, rewrites it, I look at it and I can still see it won't work. After going round in circles for a little bit, eventually I think "fuck it, let's just see what happens and I'll fix it myself because I'm too damn lazy to start from scratch" and it turned out I was the dummy, because it got it exactly how I wanted first time. Yep, sorry for doubting you, my new overlord chatGPT.
Source: reddit · AI Harm Incident · timestamp 1682948434.0 · ♥ 234
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_degimx5", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "rdc_degn6lf", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "rdc_degnqfj", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_jif5qe6", "responsibility": "developer", "reasoning": "mixed",            "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_jifhli1", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"}
]
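The coding result shown above corresponds to a single entry in this raw response array (the record with id `rdc_jifhli1`). A minimal sketch of extracting one coded record from the raw output, assuming the response is available as a JSON string (the variable names here are illustrative, not part of any real pipeline):

```python
import json

# Raw LLM response as emitted by the model (verbatim from above).
raw = '''[
  {"id":"rdc_degimx5","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_degn6lf","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_degnqfj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jif5qe6","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_jifhli1","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]'''

records = json.loads(raw)

# Pick out the record for the comment displayed on this page.
coded = next(r for r in records if r["id"] == "rdc_jifhli1")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → ai_itself mixed none mixed
```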