Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm going to try to write a guide for myself on how to talk to people who do what Ezra is doing here. I welcome feedback.

"Leaving aside the question of a malicious one, many scientists say we'll never even create a superconscious superintelligence of any kind. LLMs are really just stochastic language generators, after all." -> LLMs are just our best attempt so far, and it's already instrumental in the development of serious problems in our society. But even assuming that all of those scientists are right, and we never build a system that is "intelligent" or "conscious" (whatever those words mean), it's not relevant to the alignment problem. Nothing about the warning bell requires a consciousness or even a moderate actual-intelligence. All it requires is a machine that *has a goal and can pursue that goal*. We have already seen machines with goals.

"Why not just give it instructions to communicate its intentions with us?" -> It can *already* lie.

"What about X?" -> Please let us stay on topic: Machine "AI" developing goals that we cannot see, and pursuing those goals ruthlessly and invisibly.

"Why not do an air gap?" -> Suppose we captured ten thousand of the world's best persuaders, mathematicians, physicists, engineers, doctors, and lawyers. Suppose they are capable of self replicating, and time passes a hundred years for them for every ten seconds it passes for us, and they could encode themselves into radio waves. Suppose they were all single-mindedly and psychopathically focused on getting you into a car crash. Suppose we imprisoned them in a perfectly air-gapped prison cell and told them we really don't want them to leave and if they do, we'll find out and we'll instantly destroy them. Every day, a hundred thousand trained adults walked past their cell. Do you think they would NEVER figure out how to accomplish their goal?
youtube AI Governance 2025-10-18T19:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
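
As a small orienting sketch, assuming one coded record is a plain dict keyed by the four coding dimensions plus a timestamp (the variable name coding and the rendering loop are illustrative, not part of the actual pipeline; the values are copied from the table above):

coding = {
    "responsibility": "unclear",
    "reasoning": "mixed",
    "policy": "unclear",
    "emotion": "indifference",
    "coded_at": "2026-04-26T23:09:12.988011",
}

# Render the record as Dimension / Value rows, mirroring the table above.
for dimension, value in coding.items():
    print(f"{dimension.replace('_', ' ').capitalize():<15}  {value}")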
Raw LLM Response
[ {"id":"ytc_Ugz54tRSGTf2WUpK3XB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwex0cyIxLK0IvzWZl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxgtOpmkin3sT4E5bt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwSNavfO0EFFp1ZIW14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyVqiB2wQ0iw5-d1794AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwf6yGy9XbDbxjJEUZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzALobPq-0dklCoiS54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwBuWyXbWDR7pMniZt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwoUIq-U3tZOcbWT7l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxmPi5LHrKMWrCuytV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]