Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great and unsettling interview. I like this guy. What was unclear to me is this. AFAIK in order for something to "want" something / care about its existence (and thus care for itself, care for the truth, have feelings etc.) it has to be an embodied self-organizing system that is looking out for optimal conditions for survival. Since this is not the case with the kind of AI that Hinton is talking about, as he keeps emphasizing its digitalness, why does he think it has feelings or that it could aspire to maintain its own existence? By the way, I'm trying to present the argument of a former colleague of his, John Vervaeke, a cog sci professor at the University of Toronto. (Who, accordingly, is more worried about the undergoing experiments with embodied AI.)
youtube AI Governance 2025-10-24T23:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyUP1VgWgbw_upTO1V4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxCrdbY8Pg_0C21QKR4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzqiniEaX7Y88pXrSZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzN4QQiIK9r_T_0UY94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzdWx8DRlgTOBS2nDh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwh0OwA9wh9XQT_JNN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwnckRscvRaddL9u9R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz3wBu54bulem4psJx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy5w_z5rThkKpjnMOJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzmmSbEEqfvb6cWnAR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
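The raw response is a JSON array keyed by comment id, with one record per comment covering the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response can be parsed and the coding for a single comment looked up, assuming the JSON shape shown above (the function name `coding_for` is hypothetical, not part of the tool):

```python
import json

# Excerpt of the batch response above, in the same shape the coder emits.
raw = """[
  {"id": "ytc_UgxCrdbY8Pg_0C21QKR4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "fear"}
]"""

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Parse the batch response and return the coding for one comment id."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]
    # Keep only the coding dimensions, dropping the id key.
    return {dim: record[dim] for dim in DIMENSIONS}

print(coding_for(raw, "ytc_UgxCrdbY8Pg_0C21QKR4AaABAg"))
# {'responsibility': 'unclear', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'fear'}
```

Indexing by id rather than list position keeps the lookup robust if the model returns records out of order.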