Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great video, thanks! I've got a potential idea for how to gain public support. I'd love to discuss it with you over a video call sometime.

a) Make people nauseated about (the future of) AI.
b) Do this by demonstrating clearly how AI will lead to a future deeply misaligned with the values almost all of us hold: humanistic values.
c) Use rhetoric we know from history. I'm no Marxist, but where Marx said in his Communist Manifesto "Workers of the world, unite!", we could say "Humanists of the world, unite!", and rally public support for this issue in articles well enough written, and informed enough, to reach large newspapers.

We should not be the warning scientist, but the rallying revolutionary-ish. A kind of revolution is in fact what we need. How is AI misaligned with humanistic values? We could write very briefly about the foundations of our Western civilization as per Professor James Hankins' "Golden thread": Greco-Roman humanistic philosophy, Christian humanism, and modern humanism. AI will undermine what is common to all these forms of humanism: the relevance of human agency, social communities, humanity's special place as "Homo sapiens", and even all of our lives (using the ladder approach to extinction risk). Further, we can briefly include the history of AI, modelled after the neurons in the human brain. In my experience this is the necessary ingredient for people to really accept that AI might just become superintelligent. A short intermezzo on Turing's paper on AI as a child brain to be trained is great for this. Thoughts? Love your stuff!
youtube · AI Governance · 2026-03-19T19:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxPwHJkg15uxSlZ_-14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxWhwv3fJO0Vq-H9pJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugjjvpb0XuYeZ3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz39XjGzwRhz31xCTZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzyfqHA_p9CQCjXmTN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwG2HruOYH1vWgo11B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzWutfPx5sYUgdNV6d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyMtT-ctXROEiToofF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzOPA3Cez40yGskIyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwrHbQ-mRCWV2x0Ivl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"resignation"} ]