Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think there's another level to be worried about. The conversation between Hank and Nate is about American AI companies. But even if by some miracle we convince all of them to slow down and back off from the race to superintelligence, I guarantee you that there are governments or other bad actors who are hoping that this will happen, because it will eliminate competition. Imagine if most countries and companies all sign a pledge or treaty to be responsible and take this nice and slow...only to be blindsided when Self-Interested Government Bent on World Domination™ unleashes a super-intelligent AI upon the world. When the "good" actors exercise restraint and the bad actors don't, the result cannot be anything *but* bad. And ultimately it becomes the classic arms race: even if the motivation is ideological rather than monetary, each side/ideology will see itself as good and the other side as bad, and is therefore motivated to win the race because of the perceived consequences of letting the "bad guys" get there first. If we cannot solve arms races in general, I don't see how we can prevent this one. To flip the script and be an optimist for a moment, perhaps rather than being amoral, a superintelligence will spontaneously develop super-morality, and of its own volition help humanity fix everything. Maybe AI will manipulate us in a *good* way. /wishfulthinking
youtube AI Moral Status 2025-11-06T14:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzjrbpRdEv7Lp1rlWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxhZGq48_hqIdEiDpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxDDnbTNlcCAGuodm94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaaBTWD9UC41kLy-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxt0CQY6QpKAcWuWWd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5HukF2RcMN0WpT2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzovZCrU5xfW1sCc1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwxIFamg0dXdRLZxcF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzbvxVM09OXWW-Byvp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweNS0EGRRcIF0lVfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
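Because the raw response is a JSON array keyed by comment id, the coding result for any single comment can be recovered by indexing the batch. A minimal sketch (the variable names and the two-entry excerpt are illustrative, not part of the tool's actual pipeline):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of
# per-comment codes, each carrying the comment id and four dimensions.
raw = '''[
  {"id":"ytc_UgzjrbpRdEv7Lp1rlWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw5HukF2RcMN0WpT2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the batch by comment id so one comment's codes can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

# The coding result table above corresponds to this id's entry.
entry = codes["ytc_Ugw5HukF2RcMN0WpT2h4AaABAg"]
print(entry["responsibility"], entry["emotion"])  # government fear
```

The sixth entry in the raw response (government / consequentialist / regulate / fear) matches the coding result table for this comment, which is exactly the check this lookup performs.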