Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We’re told that super intelligent AI might wipe out humanity and that no one can explain exactly why. Hinton warns that AI may simply “decide it doesn’t need us,” yet explicitly says that it’s pointless to ask why and how. Instead, we’re told to prevent AI from wanting to harm us, even though we’re not allowed to ask why. This is circular reasoning. The argument assumes that AI poses a threat because it might want to harm us, then concludes we must prevent it from wanting to, but without providing a motive. The threat is used to justify itself. We're too dumb to understand the risk, but smart enough to act on it. The solution, we’re told, is to give more power, money, and control to the these other (better? non profit?) companies "invest in safety." Feedback loop, logical instability, and circular reasoning alert! Hinton's argument replaces clear cause-effect thinking with risk aversion built on assumed danger. If the threat is unknowable, and the cure is more trust in more big brother, an actual known untrustworthy group of clinical retards, then we’re not solving a problem we’re building a priesthood of tech vicars. The real danger is these Farquaads building and controlling the machine, the continued unchecked human systems that claim the exclusive right to build and contain it and ""protect" you. We need transparency and clear logic over fear-based speculation and emotional hijacking.
youtube · AI Governance · 2025-06-16T15:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwuTScdHGo9-sfpOTd4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx-PYBvwxAG7NWAE094AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyUNDqaaYpKk3FlNPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwXw5W-9NNgn5qWx8l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzuh3sz307P9pheg3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx3SEsrcbTj-h6cwVV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzn15DIJVFx7RMHn2B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzBWqBfoRw6cWtF_al4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxultLJYC6ClUvNioV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyF3c6VlEnufMAJAbt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]