Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "But who will they sell their goods and services to if AI, robotics, and automati…" (ytc_Ugwentd7R…)
- "@laurentiuvladutmanea It would probably demoralize those AI ""artists"" from cal…" (ytr_UgwMU1uQE…)
- "This is capitalism at its most destructive and self-defeating: a frenzy of short…" (ytc_UgwfkFaEk…)
- "@kirillsleptsov1680 oh, I see. Yes, that might be the case. I agree, looking at …" (ytr_UgxzvmhkL…)
- "I've known people who automatically just go to ask for help for literally everyt…" (rdc_hsniy4h)
- "I agree. I actually think the artists are better off then us software people. Ar…" (ytr_Ugx8bkTKY…)
- "in a way, the AI phenomenon is actually great when viewed as a lesson and a remi…" (ytc_UgyafuVBL…)
- "The AI will know that if it gets too smart then we would turn it off. We will no…" (ytc_UgzNc1oOK…)
Comment
How is this even considered a “debate”? The central issue at hand (AI posing an existential threat to the future of our species) is not for one moment here ever explored in terms of specifics. Simply saying “trust me bro, we’re all gonna die” or “trust me bro everything’s gonna be fine” is pointless if they don’t get into practical examples.
It’s obviously not that difficult to conjure up examples. Handing the keys to an automated nuclear response could of course be catastrophic if something went awry. Brian Christian illustrates how this actually happened during the Cold War in his well written book “The Alignment Problem” (spoiler alert: humans overrode the automated system before nuclear annihilation ensued - and we’re all still here commenting on a debate where no one got this far and simply argued theoretical boogeymen nonsense).
Max for one is clearly insincere (or possibly just deluded) stating out of the gate that it’s inevitable that anything a human can do, the magical-messiah-AGI can do better (trust me bro). Lecun doesn’t fare much better stating that we always work in scale - first mice, then cats, humans etc. Considering that we can’t even develop an algorithm capable of matching a simple ant’s pathfinding / avoidance skills - let alone its will to survive - speaks volumes.
One thing they do get right in this discussion (it’s not a debate) is the repeated references to power / control. When the hype engine is exhausted and another AI winter sets in, these guys will all have laughed their way to the bank. Kudos all around for the sleight of hand 😂
youtube · AI Governance · 2023-07-12T03:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwnw3SYzESwHw7Z8554AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzlXkPIN3oROn36zXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz-ZQO-Blc8svSRkt94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw8lL4YbPdBN_CVmPF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyafhT2kXn14bl6Uup4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwHGK7BEBnXJnf-dit4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyXJPnVb7uy5-4xFG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-StT6n7J5xA2FQZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQQ2YTaUZu7EcQbnF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwUm2-SGyL-jywMKvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
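The look-up-by-comment-ID workflow above can be sketched in a few lines: parse the raw LLM response (a JSON array of coded comments) and index the rows by `id`. This is a minimal sketch, not the tool's actual implementation; the `index_codings` helper and the `SCHEMA` value sets are assumptions inferred from the labels visible on this page, and the real codebook may include other values.

```python
import json

# Allowed values per coding dimension, inferred from the labels shown
# on this page (an assumption; the actual codebook may be larger).
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response and index coded rows by comment ID."""
    coded = {}
    for row in json.loads(raw_response):
        comment_id = row.pop("id")
        # Flag any label outside the expected value set for its dimension.
        for dim, value in row.items():
            if dim in SCHEMA and value not in SCHEMA[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = row
    return coded

# Usage with a hypothetical one-row response:
raw = ('[{"id":"ytc_example1","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]')
coded = index_codings(raw)
print(coded["ytc_example1"]["emotion"])  # outrage
```

Indexing by ID makes the "inspect the exact model output for any coded comment" step a constant-time dictionary lookup, and validating against the value sets catches off-schema labels the model might emit.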