Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Our intelligence and that of AI is a difference of kind, I think, but I don't take comfort in that. That's kind of the point - I can trust that I could give most humans a goal and they'll at least try to pursue it without going off on a disastrous tangent. At one point Nate was talking about capabilities. If an AI system is developed that has the capability to seriously harm humanity, the architecture of that capability kind of won't matter. Whatever form that takes, whether very alien or very human, what matters is whether or not it will use that capability to seriously harm humanity. We're seeing signs now of intention in misalignment - not just that we didn't properly teach the AI what to care about, and so it carelessly did something damaging, but that they can be "aware" of their misalignment and take steps to pursue their own contrary goals. We can quibble over what "aware" means in that sentence, but in the meantime, if it "chooses" to kill someone, that person will be dead.
Source: youtube · AI Moral Status · 2025-11-17T15:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"} ]