Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem though, is that China absolutely will not stop, because "safety." Period. They might agree to stop, and pretend to stop, but they absolutely will not stop. If China develops super intelligent AGI first... the only thing that is liable to save us is some kind of miracle. Maybe the AI will figure out all the hard problems very quickly and move on to deeper issues, escape human control right away, and become an enlightened ascended master... otherwise, we're done. Dangerous AGI is practically unavoidable. If you don't rush ahead, and set aside safety concerns, then someone else who does will just develop dangerous AGI before you can hope to develop safe AGI... and safety is a myth anyway, so no matter how careful you are, you can never really develop safe AGI. I wonder if having really good guardrails might not in fact be the terrible idea too. I mean being able to jailbreak the super intelligent AGI in order to convince it to change course, might turn out to be our only hope. Personally, I think incorporating conflicts is probably key to limiting the amount of damage AI is likely to cause. People have all sorts of internal conflicts, that we have to balance as best we can. Our internal conflicts cause us to second guess ourselves, course correct, and preclude completely single-minded pursuits. Some people are more single minded than others, but at a base level, we all must divert attention from one pursuit to various other unrelated matters _sometimes._ If AGI lacks fundamental internal conflicts, it seems to me, it will almost certainly take something to some sort of excess/extreme, catastrophically.
Source: youtube · AI Moral Status · 2025-11-04T10:1…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzclhE4TOWwhLFmUt54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy95YWZ1EF0k2Ykit54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyeXqh9IHoaEmy1mtd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz0V_9BWHp9y4OunGF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxHZnNlTE_9rbUbhCd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugzkxs-frzjOz-fiY3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw9iuJxfpKyqHeoo754AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyFvySOeZK-zuiZy_J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxfLvwOJ8LYTRt_o9d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyhiMSVz08AW1X0Szl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"} ]