Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing about LLMs that so many people either ignore or forget is that they are built to provide answers that could plausibly come from a person, not objective facts. And they do that too well.
Source: youtube · AI Harm Incident · 2025-11-25T17:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzUun5RlN2ZPm6zaqR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugze31M3bq6QGnWknfB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz_R_GM_4EmBnCCS794AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwT6qJIHczk0mWXVLF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwFTAp_724pNK2ahsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyLDpsV3oY45imOJXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxQ7OqcZoZFRdI-dqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxfwwRatTXMl0yCmQp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxEd38gvbJsy6ZG8Dd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxqkrYVZ-lztvRHo0R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
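Since the model codes comments in batches, the per-comment result stored above should agree with the matching row in the raw JSON response. A minimal sketch of that cross-check, assuming the row is looked up by comment id (the `check_row` helper is hypothetical; the data is abridged to two entries from the response above):

```python
import json

# Abridged raw LLM response (two of the ten entries shown on this page)
raw = '''[
  {"id":"ytc_UgyLDpsV3oY45imOJXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUun5RlN2ZPm6zaqR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''
rows = json.loads(raw)

# Expected coding result for the displayed comment, copied from the table above
expected = {
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "none",
    "emotion": "mixed",
}

def check_row(rows, comment_id, expected):
    # Find the batch row for this comment id, then compare each coded dimension
    row = next(r for r in rows if r["id"] == comment_id)
    return {dim: row[dim] == value for dim, value in expected.items()}

result = check_row(rows, "ytc_UgyLDpsV3oY45imOJXl4AaABAg", expected)
print(result)  # every dimension matches for this comment
```

A mismatch in any dimension would flag a discrepancy between the parsed coding result and the raw model output, which is exactly what this view exists to let you inspect.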