Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something I find interesting is that there is this assumption baked in that we need to make AI safe in the first place. I wonder why there is such a belief that a superintelligent AI would be motivated to do harmful things that we need to protect ourselves against. That feels like a human projection onto a machine.
youtube AI Governance 2026-03-09T04:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw-ZL4FE7BBRMgngHZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx-094ipLcEr60KnKN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzGjf89ZOyph-cMWzN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxfgbL3xXtIZdZ-eu14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwNKsyfkMVH9UnQGqV4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyrzL6GgCNeuBK-VwV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWYOR2WR05hpVUFgl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw-6jxFficmec5OHzF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyClnt3akH8DPkfHMh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxvYr61US4qcF2UwRl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
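A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal illustration, assuming Python and label vocabularies inferred only from the values visible in this response; the actual codebook may define additional labels, and the `parse_coding` helper name is hypothetical.

```python
import json

# Allowed labels per dimension. NOTE: these sets are inferred from the
# values that appear in the response above (assumption) -- the real
# codebook may permit more labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"resignation", "fear", "approval", "mixed",
                "indifference", "outrage"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    fall inside the allowed vocabulary for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Hypothetical single-record example (the "ytc_example" id is made up):
raw = ('[{"id": "ytc_example", "responsibility": "unclear", '
       '"reasoning": "deontological", "policy": "none", "emotion": "mixed"}]')
print(len(parse_coding(raw)))  # all labels valid, so the record is kept
```

Records with an out-of-vocabulary label (e.g. a hallucinated emotion) are silently dropped here; a production pipeline would more likely log them for manual review.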