Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
14:36 There's no need to mandate such warnings. It's like demanding we put warnings on medical books because some people will misinterpret the information. This leads too easily into censorship, and there are many corrupt people who want to take AI away for themselves.
Source: youtube · AI Harm Incident · 2025-12-07T00:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzoLDifIt3aG_H5fkR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx6qzQX67NVnFjaiFV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyQkdgazW2JmfA-pOh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2Op_dlIVfnjjbJt14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz6oDn-c9iudLgk7mp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw4yDwbmnCF-rOHUEt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwe2GpEtQphzk5mWqR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxEL8p2VjBFS8Wl3Kx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxmeuLX5hchAabtJRF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzRhGSC9uJf9Y2W8NV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
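A raw response like the one above is a JSON array with one object per comment and one key per codebook dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated, assuming this array shape; the function and variable names here are illustrative, not part of the tool itself:

```python
import json
from collections import Counter

# Hypothetical two-record excerpt of a raw batch response like the one above.
raw_response = """[
  {"id": "ytc_UgzoLDifIt3aG_H5fkR4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw4yDwbmnCF-rOHUEt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Every record must carry an id plus all four coded dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw batch response, rejecting records with missing dimensions."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return records

records = parse_codings(raw_response)
emotion_counts = Counter(r["emotion"] for r in records)
print(len(records), emotion_counts["fear"])
```

Validating up front keeps malformed model output from silently entering the coded dataset; a record with a missing dimension fails loudly instead of defaulting to an empty value.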