Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not that this wouldn't have it's own problems... But if we do reach the point of AI actually being a threat, but can't we just always turn the electricity off?
youtube AI Moral Status 2025-11-26T18:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwXyWQ1fGzQO5kWzf54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwT5M5V4BiZ7OD0H1p4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgyJM3gtK9ICVtDKYMl4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgyA98s2NJ_9HFLt2lZ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgzTmSTkMS_KdDsxyfh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzsahasASp0jCODZQl4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwdVWjE26tR-1x4EgF4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugw4Orw9CsMmWXUqQkB4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugw84yFp-P5xbj_sK6N4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgxBuzSsdbYshHSXXCN4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"}
]
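A minimal sketch of how a raw response like the one above could be parsed and matched back to a comment id. The variable names and the truncated sample payload here are illustrative, not part of the tool; the only assumption is that the model returns a JSON array of objects keyed by "id".

```python
import json

# Abbreviated sample of the raw LLM response format shown above:
# a JSON array with one coding object per comment.
raw = """[
  {"id": "ytc_Ugw84yFp-P5xbj_sK6N4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxBuzSsdbYshHSXXCN4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]"""

# Index the codings by comment id so a single comment's
# dimensions can be looked up directly.
codings = {item["id"]: item for item in json.loads(raw)}

coding = codings["ytc_Ugw84yFp-P5xbj_sK6N4AaABAg"]
print(coding["emotion"])  # resignation
```

In practice the raw string would come from the model response field rather than a literal, and a `json.JSONDecodeError` handler is worth adding, since malformed model output is exactly what this view exists to inspect.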