Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- If what he says happens, the best thing for employment will be being an artist, … (ytc_UgzUcdvaV…)
- The 1970s movie "Colossus: the Forbin Project" would help these people see what … (ytc_UgyvYOWAE…)
- I see a lot of people in the comments saying that ai art isn't all that bad and … (ytc_UgzNujqF0…)
- A.I is no match for a thermal nuclar device... Like trying to put out fire , … (ytc_Ugxa7cdZz…)
- My issue (as a natural child of both the art and the tech world) stems from the … (ytc_UgzNQFdOh…)
- I'm beginning to wonder how long it took them to realize that they'd actually cr… (ytc_Ugx2ZicTL…)
- AI could wipe out entry level and mid level white collar jobs. Robotics are abo… (ytc_Ugyg50cEN…)
- How about asking Germans if they would prefer AI or non-German human making deci… (rdc_gd7vctg)
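Each sample above pairs a comment preview with its comment ID. For scripting the same lookup instead of clicking through the page, a minimal sketch follows; it assumes the raw LLM responses are stored on disk as JSON arrays of coded records (the directory name and file pattern are assumptions, not a documented interface of this tool):

```python
import json
from pathlib import Path

# Assumed layout: each coding batch's raw LLM response is saved as a JSON
# array of records like {"id": "...", "responsibility": "...", ...}.
# Directory and glob pattern are illustrative.
RESPONSES_DIR = Path("raw_llm_responses")

def build_index(responses_dir: Path) -> dict[str, dict]:
    """Map comment ID -> coded record across all stored response files."""
    index: dict[str, dict] = {}
    for path in sorted(responses_dir.glob("*.json")):
        for record in json.loads(path.read_text()):
            index[record["id"]] = record
    return index

index = build_index(RESPONSES_DIR)
# Full ID taken from the raw response shown further down this page.
record = index.get("ytc_Ugxio3XMveQozMOs9rR4AaABAg")
if record is not None:
    print(record["responsibility"], record["reasoning"],
          record["policy"], record["emotion"])
```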
Comment
While it's easy to imagine romantic and idealistic scenarios that they "sell" us, where AGI transforms the world by delivering endless resources and radical abundance, personally I'm quite sceptical. Imo, given the backgrounds and priorities of the people currently driving its development, it's unlikely that this will be AGI's primary focus. Hassabis and the interviewer fail to address a crucial question: do those responsible for AI (the various billionaire CEOs, big political leaders like Trump etc.) genuinely care about the social good, about the majority of average people, the environment etc.? Or are they elitists who secretly harbor resentment towards society and the everyday, simple people of the 99%? It's important to consider whether, as the gatekeepers of power and information, they prioritize their own interests over the well-being of the vast majority of "average" people.
Source: youtube · Posted: 2025-06-07T03:1… · ♥ 241
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
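The dimensions in this table, together with the raw response below, suggest a small closed vocabulary per dimension. As a reference, here is a hedged sketch of that schema in Python; the value sets are inferred only from the records visible on this page, so the actual codebook may define additional categories:

```python
from dataclasses import dataclass
from typing import Literal

# Value sets inferred from the visible sample only, not a documented codebook.
Responsibility = Literal["company", "developer", "ai_itself", "distributed", "none"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed"]
Policy = Literal["regulate", "liability", "ban", "none"]
Emotion = Literal["outrage", "fear", "approval", "resignation",
                  "skepticism", "unclear"]

@dataclass(frozen=True)
class CodingResult:
    id: str  # comment ID, e.g. "ytc_..." (YouTube) or "rdc_..." (Reddit)
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```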
Raw LLM Response
```json
[
  {"id":"ytc_UgwGr3gJFmjAn3cM5fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwgMCVUt2G7xe2l8A54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzcLY1zA-7BPxyhqp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLk3VDUn4wE1UhE2h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzdx7L0GyIjh0SrG_14AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMsSzdQr2BPE948ll4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxbiQVTSNlsvisN2SZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGy2yb7MN6WPrnAzN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxio3XMveQozMOs9rR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"skepticism"},
  {"id":"ytc_Ugzg0_odCbW5QVGo56R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
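Since the model returns a bare JSON array, downstream code presumably has to parse and sanity-check it before trusting the coded values. A minimal sketch of that step, with the function name and error handling as illustrative assumptions:

```python
import json

REQUIRED_FIELDS = ("id", "responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject malformed records.

    Assumes the model was prompted to return a bare JSON array; retry or
    repair logic for non-JSON output is out of scope for this sketch.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if f not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
    return records
```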