Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Regarding the question of human problem vs ai problem I think it's a bit of both, like you said. Of course AI can be improved in order to give more clear and distinct warnings, but to what level should that be necessary, and how many people would end up still ignoring that warning and simply doing dumb stuff anyway? Additionally, at least for me personally, those warnings can also sometimes be a bit disruptive and annoying. When I'm having a conversation with AI about the feasibility of my ideas for some sci-fi-esque body improvements through genetic engineering or surgical alteration, I don't need to read "I want to keep this safe and cannot offer medical advice" as the intro to almost every single response on such a very clearly hypothetical scenario that is far removed from modern-day real life medical capabilities. I'm simply trying to get a better understanding of whether my concepts make anatomical sense and whether there would be any sort of adverse biological reactions and stuff if we had the technological capability to actually do those kinds of things, it's not like I'm suddenly gonna start to cut myself open and try to physically move my organs around or change them in some way and I never even remotely phrased my messages in a way that would suggest such a thing. And, tbh, if I was that far gone that I would actually seriously consider doing crazy stuff like that, then I'm sure the question of whether AI had appropriate warning labels would be the least concerning one by far. XD
youtube AI Harm Incident 2026-01-27T10:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugw5YASHohiKdiLjBXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_UgyXDXH_ZqzBi4WYS0t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgycjMwXaLdxxM-Wvvx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzDcezdnnVEG6XXTxV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyZ_YBK4hGjqhfgRmJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_UgyQ87qwujGg6nTAHrN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugz3g7pign4H12AsWEB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwtoXaq3Z9kTTDLbVt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugy8lcxerc2IceC7rSl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwwFqhJhZilwzQLVqF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"unclear"}]
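A raw response like the one above is a JSON array of coding records, each carrying a comment id plus the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed into a per-comment lookup, assuming the exact record shape shown; the function name `parse_codings` and the drop-malformed-records policy are illustrative choices, not the tool's actual implementation:

```python
import json

# Two records copied from the raw response above, truncated for brevity.
raw = '''[{"id":"ytc_Ugw5YASHohiKdiLjBXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_UgyQ87qwujGg6nTAHrN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]'''

# The four coding dimensions each record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Records missing an id or any dimension are dropped rather than
    guessed at, so downstream tables only show complete codings.
    """
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            continue  # malformed record: skip it
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codings = parse_codings(raw)
print(codings["ytc_UgyQ87qwujGg6nTAHrN4AaABAg"]["policy"])  # → ban
```

Keying the result by comment id is what lets a viewer like this page join a coded comment back to its dimension/value table.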