Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI _is_ a human problem! More precisely: human trust in this AI is a problem. Humans ignore the fact that these systems are trained to act as perfect yes-man machines on statistical steroids, throwing literally any source at you in a non-understood but pleasing form, because that's how they were trained. Are you so blind to ubiquitous boot-licking that you find it normal that an artificial system starts in full brown-nosing mode from the very first response on? "That is a good question" triggers an immediate "up yours" scepticism in me.

It's a human problem, because we choose to become more and more uneducated and stupid out of naive laziness. It's a human problem, because people ignore all the inconsistencies of these companies. E.g., AI companies distribute their own internet browsers nowadays. But despite claiming to code anything super-humanly better, they don't generate their own browser from scratch to exclude human mistakes, which would be easy because everything is openly standardised; instead they clone a solid browser's open source code (usually Chromium) and modify it a little bit. Thus, they do not trust their own claims - and they know why.

I've wasted hours and hours reviewing dysfunctional code from gen x/y/z coders only to find out that these lazy bastards vibe coded (used AI to generate code) and were too lazy (and/or incompetent) to think about "their" oeuvre, but trusted the generated code blindly.

"That's hyperbolic, since nobody reports it" Really? Just think: a) "dog bites postman", or b) "postman bites dog". Which stories are more likely to be published? Thus, postmen usually bite dogs very often, right? Call it "unconventionality bias", if you must. And since Americans happily experiment and medicate themselves because of the fantastic health system and the free frontier spirit, this guy's case is just the beginning. Good luck suing these companies...
YouTube · AI Harm Incident · 2025-12-08T03:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugybx8NXGeGS23RmHIF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzQsy85rWwQj8pEjAR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJ8lcc3pd5-ZakQ_p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6ITmWVEkPa806_ul4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLNcjj9WYbuWADNOd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzgA2yIi5Dq8jzA0854AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBMOOhaT-Is4EzDQd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugx1w_2HDMztNbpKhSF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxvb0rW1r_tangFl-94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxP_BojK1hd-Z8yaZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
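The raw response above is a JSON array keyed by comment id, with one object per coded comment. A minimal sketch of turning such an output into a per-comment lookup, assuming the model returned valid JSON (the two entries shown are copied from the array above; the variable names are illustrative, not part of any pipeline):

```python
import json

# Raw model output, truncated to two of the ten entries above for brevity.
raw = '''[
 {"id":"ytc_Ugybx8NXGeGS23RmHIF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugy6ITmWVEkPa806_ul4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]'''

# Index the coded rows by comment id so a single comment's
# dimensions can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_Ugy6ITmWVEkPa806_ul4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user virtue none outrage
```

In practice a real parser would also validate that each object carries all four dimensions and that every value falls in the coding scheme's allowed set before writing it to the result table.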