Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What I find the scariest, is it only takes one company to put profits ahead of existential concerns. Even if 99% make AI as safe as possible (which they won't), it just takes one unforseen circumstance to put the rest of us in danger. The fact we're racing full steam ahead as a species, is alarming. Profits and power hunger are literally going to do us all in.
Source: youtube · AI Governance · 2025-10-20T00:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzq95Caqu3kdnl_8UN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxfky1qm7lohqVZ0-l4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyoyymu-0p49fs7-594AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyriAIVmXhxAYv1vNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxzUGcQSvptBviodAN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzK1wjNgc8dSYsKmnx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYbEoC-CXtFu-J3GB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzzAYBPwrlSzxENDO54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyKf5ptZL9NXYpP9I14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwUmB4QfA90xtRbZs54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
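The raw response is a JSON array with one record per comment, keyed by comment `id` and carrying the four coded dimensions. A minimal sketch of parsing and validating such a response in Python; the allowed label sets below are inferred only from the values observed in this output, not from the full codebook, so treat them as assumptions:

```python
import json

# Label sets inferred from the observed output above; the real codebook may
# define additional values for each dimension.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only records whose labels are in ALLOWED."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# One record taken verbatim from the response shown above.
raw = ('[{"id":"ytc_UgxzUGcQSvptBviodAN4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes[0]["policy"])  # regulate
```

Filtering out-of-codebook labels rather than raising keeps a single malformed record from discarding the whole batch; a stricter pipeline might log or re-prompt on the dropped records instead.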