Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I respect Geoffrey Hinton’s work and agree AI has serious risks, but I’m skeptical of “AI takeover” narratives. Neural networks don’t have desires or survival instincts — they only follow goals humans define, and “wanting” is just a metaphor for optimization, not intent. The real dangers are human misuse, poor design, and lack of safeguards, not machines spontaneously deciding they don’t need us.
youtube AI Governance 2025-08-13T08:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           liability
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxhx-rrc8XRW1uTZ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgznC9kqODnYBlFosmZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugxr0yP_1PMtZy6Kr0d4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxyV4eWVoyzCWW_F3F4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugy0TSMAhbUSOGTZcad4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgxGDrGmeSTvoIWXKFZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgygjgeyaUpW2j1RSvR4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzIYoMnUjz_h25ZPF54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugx2P2SnBfKI_UwCZr94AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzN1zLyYKkUItW2bDV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"}
]
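The raw response above is a flat JSON array of per-comment codings. A minimal sketch of how such output can be parsed and indexed by comment id (the `index_codings` helper and the completeness check are illustrative, not part of the actual pipeline; the field names match those in the response):

```python
import json

# Excerpt of the raw model output shown above (one real coding object).
raw = '''[
  {"id": "ytc_Ugxr0yP_1PMtZy6Kr0d4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "approval"}
]'''

# The four coding dimensions present in every object of the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse the model output and index codings by comment id,
    raising if any expected dimension is missing from an object."""
    codings = {}
    for item in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in item]
        if missing:
            raise ValueError(f"{item.get('id')}: missing {missing}")
        codings[item["id"]] = {d: item[d] for d in DIMENSIONS}
    return codings

coded = index_codings(raw)
print(coded["ytc_Ugxr0yP_1PMtZy6Kr0d4AaABAg"]["policy"])  # liability
```

Looking up the id of the comment shown above reproduces the values in the Coding Result table (responsibility=user, policy=liability, emotion=approval).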