Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a person who has studied AI a little, I can say that there is a situation called hallucination for AI. This happens when AI mixes fiction with reality since AI cannot tell the difference between good and bad, so just as much as it is helpful, it can give us information that it thinks is correct. However, it is not correct.
YouTube · AI Harm Incident · 2025-12-28T06:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgwyLFcu86jJuv3K_u14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz5FjtZ-Grmd4_r1_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxvbrCuaOPwfIze8Gl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxaokaRZuICS3BGCxR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwK5f-b-bqGgj48B814AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
 {"id":"ytc_Ugw2j8f3bXhZLmYQhTl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgxknD21ok5IJ04ZbYp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwUYYxOvRPXi2p7Ek54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwDsnvnbVcK1AdoPON4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugxo_xfirLKKM-m92Sh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
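A raw response like the one above has to be parsed and validated before the per-comment codes can be trusted; models occasionally drop keys or emit malformed records. The sketch below is a minimal, hypothetical parser for this payload shape. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the `parse_codes` function name and the validation strategy are assumptions for illustration, not part of the original pipeline.

```python
import json

# Two records copied from the raw response above (truncated for brevity).
raw = (
    '[{"id":"ytc_UgwyLFcu86jJuv3K_u14AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgxvbrCuaOPwfIze8Gl4AaABAg","responsibility":"unclear",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)

# Every record must carry all five coding dimensions plus the comment id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_text):
    """Parse the model's JSON array, keeping only well-formed records,
    and index them by comment id for lookup."""
    records = json.loads(raw_text)
    valid = [r for r in records
             if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]
    return {r["id"]: r for r in valid}

codes = parse_codes(raw)
print(codes["ytc_UgxvbrCuaOPwfIze8Gl4AaABAg"]["responsibility"])  # unclear
```

Indexing by `id` makes it easy to join a code back to its source comment, which is exactly the lookup the "Coding Result" table above performs for the displayed comment.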