Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We're modeling AI after humans and our behavior, humans are flawed, it stands to reason that what we create is equally flawed in it's own ways, so what would happen if we were to try and replicate our flaws into tools we use in flawed ways for flawed reasons.
youtube AI Harm Incident 2025-09-11T07:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxJV9hbZhJadOQNl-V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzlMcTajKdR2YtXJfR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgydD-EbHQA-dR-u1Pt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw2YAif-E2WymI75_l4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxquFDrS3vRFaHoN7V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQm-PY8NijX8owfbp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxy9DxGDei1SaiLAIh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyULKKf9nANMqXJwnh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgzdrZhT9uOTz9jEfjJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx5N_8asQTDsLZYhNB4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
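To inspect a single coded comment, the raw response above can be parsed as a JSON array and filtered by comment id. A minimal sketch (the function name `coding_for` is illustrative, not part of the tool):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings,
# one object per comment id, in the shape shown above.
raw_response = """[
  {"id": "ytc_Ugw2YAif-E2WymI75_l4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzdrZhT9uOTz9jEfjJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    return next((c for c in json.loads(raw) if c["id"] == comment_id), None)

coding = coding_for(raw_response, "ytc_Ugw2YAif-E2WymI75_l4AaABAg")
print(coding["emotion"])  # -> resignation, matching the Coding Result table
```

This is how the Coding Result table for the quoted comment can be cross-checked against the exact model output.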