Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guest explained the dangers of AI in such a straightforward way that it became the best podcast on the topic I’ve heard on this channel. For the first time, I truly grasped that if AI ever got out of control, no one could simply “turn it off.” Although the guest didn’t explicitly say this, it led me to a larger realisation: shutting down AI assumes a single person or authority can make that call. In reality, it would require a global consensus, something that takes far too long to achieve. We’ve seen this before. Even with COVID, when the stakes were clear, the world couldn’t coordinate a unified response. So why would we expect a perfectly aligned, instant reaction to a runaway AI? And beyond that, imagine trying to power it down: by the time we reached consensus, the AI could anticipate the threat and act to protect itself. Plus, there will always be those who wouldn’t want to shut it off, no matter how destructive it becomes. The people who own and depend on AI will stop you from shutting it down.
youtube AI Governance 2025-06-16T15:5…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxAaqdt6hkbpTLF5MB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyOVSxAyodeRHKW2Wp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyfJQTkQEmjjyV7JLd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzvqyjrNT8gmLunRYp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyL41DmLJM5vQJ-PAV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHXDzJjQAo4DHIxg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxDu52TQ1vd6D9LmL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJ_EO-GJ2P-nEWZUd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwzeDSIfoQ3fmOx0e54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz1ULc0u2RnHWa7Gwd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
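Before accepting a batch like this, it is worth validating that every record carries all four coding dimensions and only known codes. Below is a minimal Python sketch; the allowed-value sets are inferred from the values visible in this response and are an assumption, since the full codebook may define more categories.

```python
import json

# Allowed codes per dimension -- inferred from this response only
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "ai_itself",
                       "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear", "none", "ban"},
    "emotion": {"fear", "indifference", "outrage", "mixed", "approval"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with one record shaped like those above (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(validate(raw)[0]["emotion"])  # fear
```

Rejecting unknown codes at parse time catches the common failure mode where the model invents an off-codebook label, which would otherwise silently skew the tallies downstream.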