Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People have really weird ideas about AI. If I said "I asked a random man what I should do about my medical problem" and then blindly followed whatever he said, you wouldn't bat an eye if the advice was terrible. For some reason, some people seem to think it's magically infallible, just because it's computerised.
YouTube AI Harm Incident 2025-12-29T00:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwyLFcu86jJuv3K_u14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5FjtZ-Grmd4_r1_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxvbrCuaOPwfIze8Gl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaokaRZuICS3BGCxR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwK5f-b-bqGgj48B814AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugw2j8f3bXhZLmYQhTl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxknD21ok5IJ04ZbYp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwUYYxOvRPXi2p7Ek54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDsnvnbVcK1AdoPON4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxo_xfirLKKM-m92Sh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
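The raw response is a JSON array of per-comment codes. A minimal sketch of how such output might be parsed and sanity-checked before the codes are accepted, assuming the key set is exactly what appears in the records above (this is inferred from the data, not a documented schema, and the check is illustrative rather than the pipeline's actual validator):

```python
import json
from collections import Counter

# First three records of the raw response above (closing bracket repaired);
# the expected key set below is inferred from the data, not a documented schema.
raw = '''[
  {"id":"ytc_UgwyLFcu86jJuv3K_u14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5FjtZ-Grmd4_r1_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxvbrCuaOPwfIze8Gl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

records = json.loads(raw)

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}
for rec in records:
    # Reject any record with missing or extra dimensions before accepting its codes.
    assert set(rec) == EXPECTED_KEYS, f"unexpected keys in {rec.get('id')}"

# Tally one dimension as a quick sanity check of the code distribution.
counts = Counter(rec["responsibility"] for rec in records)
print(counts)
```

A truncated or malformed response (such as the stray `)` in the capture above) would fail at `json.loads` rather than silently producing partial codes, which is why parsing before tallying is the safer order.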