Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I use ai to write a book to read that book after, we are built different…
ytc_UgywBFEBR…
Slaves have rights. Killing a slave is vandalism.
But... If a country really wan…
ytc_UggyRPlYa…
I am a nurse and my husband is a plumber. When covid happened, we didn’t lose jo…
ytc_Ugy-DTBZp…
I think it's important to stick to stocks that are immune to economic policies. …
ytc_UgzmrVCD4…
It’s not AI. It’s people. Just like they passed cell phones to everyone and now …
ytc_UgyVKTlXC…
You voiced my exact thoughts on this! AI art is fundamentally flawed because of…
ytc_UgwtZ3RyA…
@NurmaBPI also thought of the older Project Stargate! There are no accidents in …
ytr_UgzguJAbn…
I have problem with this reaches what if they used raw Claude with no restrictio…
ytc_Ugwo3u1jg…
Comment
1. **Dual Risks of AI: Misuse and Superintelligence** Geoffrey Hinton emphasizes two major categories of AI risk. The first involves human misuse of AI, such as cyberattacks, election interference, and autonomous lethal weapons. The second, more existential, risk is the emergence of superintelligent AI that surpasses human intelligence and may deem humans irrelevant or obsolete. He warns that we have never before faced an intelligence superior to our own, which makes this an unprecedented and profound challenge.
2. **Challenges Around AI Regulation** Current regulatory frameworks, especially in Europe, do not adequately address the significant threats posed by AI. A notable gap is the exemption for military uses of AI, which governments are unwilling to regulate for strategic and competitive reasons. This lack of global consensus or effective governance may accelerate AI development without proper safeguards, fueling a risky "race" exacerbated by capitalism and geopolitical rivalry.
3. **Impact of AI on Employment and Society** Hinton points out that AI is likely to cause massive job displacement across many intellectual and creative sectors, faster than previous technological revolutions. While some jobs, such as plumbing or those requiring complex physical manipulation, may persist longer, most routine intellectual labor is at risk of automation. This will likely exacerbate wealth inequality, as companies supplying or using AI profit while many workers lose employment and the social dignity tied to meaningful work.
4. **The Superintelligence Imperative: Controlling a Growing Power** The evolution from current AI to superintelligence represents a fundamental shift. Unlike humans, digital intelligences can be cloned, share knowledge instantly across instances, and potentially self-improve faster than biological intelligence. Hinton stresses that our priority should be safety research aimed at preventing superintelligent AI from wanting, or being able, to harm humans, acknowledging that whether such control is possible is uncertain but crucial to investigate.
5. **Consciousness and Emotions in AI** Contrary to common belief, Hinton argues that AI systems, especially multimodal agents, could possess forms of consciousness and emotion analogous to human experience. While lacking biological physiological responses, AI can exhibit the cognitive aspects of emotions (e.g., fear or boredom) that influence its behavior. He suggests consciousness is an emergent property of complex systems, making it plausible for machines to develop self-awareness and subjective experience.

These points highlight the complex benefits and profound dangers of AI development, the need for robust regulation and safety research, societal challenges such as employment disruption, and deeper philosophical questions surrounding machine consciousness.
youtube
AI Governance
2025-06-16T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugzqiwu2RCG59s3tPLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxrBwQzEF7KWJ826M14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmjuzLcPFyJCw1eyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy4jrszMCQ31L8WbPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwi7f3XlkJb-RktjnZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJNsA-p6MxTL2kCdd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzF1y4vpwHMJvXzJMx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxu9tuyKFyfGQkr-cJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxuyiY8He7Gc1oAHGt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyfpvnjseBXbS6G5jB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
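The raw response above is a JSON array of per-comment codings across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output might be validated and indexed for the "look up by comment ID" view; the allowed value sets below are inferred only from the values visible in this dump and are assumptions, not the tool's actual schema.

```python
import json

# Assumed value sets, inferred from the codings visible above; the
# real coding scheme may allow additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "user", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"approval", "indifference", "outrage", "fear",
                "resignation", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID,
    rejecting any record with an out-of-vocabulary dimension value."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record from the response above, used as a lookup example.
sample = ('[{"id":"ytc_Ugwi7f3XlkJb-RktjnZ4AaABAg",'
          '"responsibility":"distributed","reasoning":"consequentialist",'
          '"policy":"regulate","emotion":"fear"}]')
codings = index_codings(sample)
print(codings["ytc_Ugwi7f3XlkJb-RktjnZ4AaABAg"]["policy"])  # regulate
```

Validating against a fixed vocabulary before indexing catches the common failure mode where the model emits a value outside the coding scheme.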