Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Why do people gamble lives based on hyped promise of a narcissistic sociopathic …" (ytc_UgzEMXWUp…)
- "Giving my take on common stances: Ai steals audience from artists: That's not t…" (ytr_Ugy4xLtsz…)
- "At this point the ppl who still think this isn't stealing were probably deaf and…" (ytc_Ugxxt5LEk…)
- "They know it's a problem. In fact, they've known for decades, but they've been …" (rdc_d0f7sfv)
- "Remember kids: AI harms the environment. If you use and support AI, you are hurt…" (ytc_Ugw09Xz6q…)
- "I'm less interested in humanoid robots, and more interested in consumer Star Tre…" (ytc_UgxUeaMbC…)
- "+Alan W What do you think those automated container loader trucks they use to l…" (ytr_UgjxmYopw…)
- "Question: Is it ok to recolour art yourself rather than use AI? Course. I do ke…" (ytc_UgyyDhvdO…)
Comment
We’re told that super intelligent AI might wipe out humanity and that no one can explain exactly why. Hinton warns that AI may simply “decide it doesn’t need us,” yet explicitly says that it’s pointless to ask why and how. Instead, we’re told to prevent AI from wanting to harm us, even though we’re not allowed to ask why. This is circular reasoning. The argument assumes that AI poses a threat because it might want to harm us, then concludes we must prevent it from wanting to, but without providing a motive. The threat is used to justify itself. We're too dumb to understand the risk, but smart enough to act on it. The solution, we’re told, is to give more power, money, and control to the these other (better? non profit?) companies "invest in safety." Feedback loop, logical instability, and circular reasoning alert! Hinton's argument replaces clear cause-effect thinking with risk aversion built on assumed danger. If the threat is unknowable, and the cure is more trust in more big brother, an actual known untrustworthy group of clinical retards, then we’re not solving a problem we’re building a priesthood of tech vicars. The real danger is these Farquaads building and controlling the machine, the continued unchecked human systems that claim the exclusive right to build and contain it and ""protect" you. We need transparency and clear logic over fear-based speculation and emotional hijacking.
youtube · AI Governance · 2025-06-16T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwuTScdHGo9-sfpOTd4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-PYBvwxAG7NWAE094AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyUNDqaaYpKk3FlNPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXw5W-9NNgn5qWx8l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzuh3sz307P9pheg3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3SEsrcbTj-h6cwVV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzn15DIJVFx7RMHn2B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBWqBfoRw6cWtF_al4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxultLJYC6ClUvNioV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyF3c6VlEnufMAJAbt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
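The "look up by comment ID" workflow above can be sketched in a few lines of Python: parse the raw LLM response (a JSON array of per-comment codings) and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the two entries below are copied from the batch shown above, truncated for brevity.

```python
import json

# Two entries from the raw LLM response above; each row carries the four
# coding dimensions (responsibility, reasoning, policy, emotion).
raw_response = '''[
 {"id":"ytc_UgwuTScdHGo9-sfpOTd4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugx-PYBvwxAG7NWAE094AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]'''

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwuTScdHGo9-sfpOTd4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # unclear mixed
```

The same dictionary-by-ID pattern scales to the full batch: load every raw response file, merge the rows into one dict, and any coded comment can be inspected directly from its ID.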