Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Didn't understand the risks AI posed to humanity??... Little hard to believe. The Terminator franchise started about the same time you started tinkering. That was a realistic vision of what could go wrong.
youtube AI Governance 2025-06-19T19:2… ♥ 5
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzz3-9YMy1vs80cyeJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwUz00qw6ZzZwh-FnN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxg3Sc6LVUB5MkFue94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxMscWKllRmtKKcCNF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxoEKKwctSUGzfWlIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxStX_6gM1FJUbXr9V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGPUlYBxwjoYz1DjR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzfXfODHeec4611yeV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz3MFbDLVfCP1ENxId4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx23BARJmepDb-lz-p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
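The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions plus an id. A minimal sketch of how such a batch response could be parsed and indexed by comment id — the function name `index_codings` and the validation logic are illustrative assumptions, not part of the pipeline shown here; the two sample records are copied from the response above:

```python
import json

# Two records copied from the raw response shown above (illustrative subset).
raw = '''[
  {"id":"ytc_UgxStX_6gM1FJUbXr9V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxg3Sc6LVUB5MkFue94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}
]'''

# Every record is expected to carry these keys (assumption based on the output above).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_response: str) -> dict:
    """Parse a raw batch response and index codings by comment id.

    Raises ValueError if a record is missing a required dimension,
    so malformed model output fails loudly instead of silently.
    """
    records = json.loads(raw_response)
    indexed = {}
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
        indexed[rec["id"]] = {k: rec[k] for k in REQUIRED if k != "id"}
    return indexed

codings = index_codings(raw)
print(codings["ytc_UgxStX_6gM1FJUbXr9V4AaABAg"]["policy"])  # regulate
```

Indexing by id makes it straightforward to look up the coding result for any one comment, as the "Coding Result" table above does for the displayed comment.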