Raw LLM Responses

Inspect the exact model output for any coded comment. The entry below pairs one comment with its parsed coding result and the raw batch response the result was extracted from.

Comment
Yudkowsky: “It’s not really about humans ‘getting it wrong’ at some critical point, because by that time the AI is operating on its own, making decisions that we can’t always explain and often can’t predict.” Klein: “I understand what you’re saying here. But there’s one point I’m not totally clear on: if AI did advance to the point at which it wanted to kill all humans, how could we have got it THAT WRONG?”
Source: youtube · AI Governance · 2025-10-20T03:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgySKBOAjZloZe6pW5Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxVntyOVAu4MZMrAJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx6DoxeeBBDdDc_aGF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyH1S0uCeUqpw9tolt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyaIeWeiOUcfaz15C14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzbUzIYeanHw25uTcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxmpET2uCBo1vVrZvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwtWZAKoEeZLcYdo6x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugx5jo7Qrce8u1UfNEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzgmtSHpBxIqNmxb0x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"} ]