Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
«There's still a chance that we can figure out how to develop AI that won't want to take over from us. And because there's a chance, we should put enormous resources into trying to figure that out- because if we don't, it's going to take over.» Thanks, Geoffrey Hinton.❤🙏🏻
youtube AI Governance 2025-06-17T09:0… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugyipi3ya0G6fGXMcyJ4AaABAg", "responsibility":"none",      "reasoning":"mixed",            "policy":"none",          "emotion":"indifference"},
  {"id":"ytc_UgzivpKr8SDUGFSsm994AaABAg", "responsibility":"none",      "reasoning":"deontological",    "policy":"none",          "emotion":"outrage"},
  {"id":"ytc_UgxeWkCIxuYZvNMpyCh4AaABAg", "responsibility":"ai_itself", "reasoning":"mixed",            "policy":"none",          "emotion":"approval"},
  {"id":"ytc_UgxTsAEOVjRdzIxD1cJ4AaABAg", "responsibility":"developer", "reasoning":"consequentialist", "policy":"regulate",      "emotion":"approval"},
  {"id":"ytc_UgzLfgSfShWSZBA6Kd94AaABAg", "responsibility":"none",      "reasoning":"mixed",            "policy":"industry_self", "emotion":"approval"},
  {"id":"ytc_UgxLTgHmxkJ70TxIPcx4AaABAg", "responsibility":"ai_itself", "reasoning":"consequentialist", "policy":"none",          "emotion":"fear"},
  {"id":"ytc_UgxWUS1t2XDwbpDmLlp4AaABAg", "responsibility":"none",      "reasoning":"mixed",            "policy":"none",          "emotion":"outrage"},
  {"id":"ytc_UgyByPautIky0SHFEsx4AaABAg", "responsibility":"ai_itself", "reasoning":"virtue",           "policy":"none",          "emotion":"approval"},
  {"id":"ytc_Ugy_UUxpO8KMOYsUEX14AaABAg", "responsibility":"none",      "reasoning":"mixed",            "policy":"none",          "emotion":"indifference"},
  {"id":"ytc_Ugz5eWV1wtmKXbByBR14AaABAg", "responsibility":"ai_itself", "reasoning":"consequentialist", "policy":"liability",     "emotion":"fear"}
]
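When checking a coded comment against the raw response, it can help to parse the JSON batch, pull out the record by its `id`, and confirm each dimension carries a known code. The sketch below does this for the comment shown above; the allowed code sets are an assumption inferred only from the values visible in this batch, not the tool's full codebook.

```python
import json

# One record from the raw LLM response above (the coded comment shown on this page).
RAW = '''[
  {"id":"ytc_UgxTsAEOVjRdzIxD1cJ4AaABAg", "responsibility":"developer",
   "reasoning":"consequentialist", "policy":"regulate", "emotion":"approval"}
]'''

# Allowed codes per dimension -- an assumption based on the values seen in
# this batch; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def find_record(records, comment_id):
    """Return the coded record matching comment_id, or None if absent."""
    return next((r for r in records if r.get("id") == comment_id), None)

def invalid_dimensions(record):
    """Return the dimensions whose value falls outside the allowed codes."""
    return [dim for dim, codes in ALLOWED.items() if record.get(dim) not in codes]

records = json.loads(RAW)
rec = find_record(records, "ytc_UgxTsAEOVjRdzIxD1cJ4AaABAg")
print(rec["responsibility"], rec["reasoning"])   # developer consequentialist
print(invalid_dimensions(rec))                   # [] -> all codes recognized
```

Matching the parsed record against the Coding Result table is a quick way to confirm the tool stored exactly what the model emitted.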