Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- who makes this killing weapons like AI, Robots , nuclear bombs, guns to kill hum… (ytc_UgzWIrcQ5…)
- As a male minority and a investor, I hope AI replaces humans! It's a win for … (ytc_UgwP3cO0z…)
- AI will kill humans and take over, take over what? And do what with the earth? H… (ytc_UgyDpKeEo…)
- I have to disagree with your point about using AI as a reference. "Why would yo… (ytc_Ugx9eeB3s…)
- 1. Autotune has fallen out of favor with artists. 2. Wind up music boxes are ha… (ytr_UgzOm3UVa…)
- So Starlink’s profits are basically being funneled into Elon’s favorite new hype… (rdc_oi0dp78)
- So instead of spending more money on training and vetting their cops, or making … (rdc_jg00qxx)
- Thanks @jeffersonsantos-wm4pf! Fighting a robot does sound crazy, but it's all p… (ytr_Ugwz0olHu…)
Comment
2:55
You have made a critical (albeit understandable) error.
Musk is being _dishonest_ here.
By acting as if he's being incredibly cavalier about the risks associated with AI destroying the planet in the SkyNet sense, he is attempting to smuggle past the unquestioned assumption that LLMs are anywhere close to being that capable. This is in fact an attempt to hype up AI - and by extension the vaporware he's selling.
The reality is that AI isn't actually doing any of the things functional intelligence would need to in order to approximate human intelligence, let alone vastly exceed it.
The real harms from AI come from the sheer amount of resources (particularly electricity and water) that the required data centers are consuming, and from the inevitable accidents caused by putting these things in oversight roles where safety is a concern (such as driving cars).
youtube · AI Governance · 2025-08-26T15:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
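For illustration only, a coding result like the one above can be thought of as a small record across the four dimensions. This is a minimal sketch assuming Python; the class name `CodingResult` is hypothetical, and the example category values in the comments are taken from this page rather than from a published schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One comment's coding across the four dimensions shown in the table above."""
    responsibility: str  # e.g. "developer", "government", "user", "ai_itself"
    reasoning: str       # e.g. "virtue", "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "regulate", "ban", "none", "unclear"
    emotion: str         # e.g. "outrage", "fear", "approval", "resignation"
    coded_at: datetime   # timestamp recorded when the coding was stored

# The values from the table above, as a record.
result = CodingResult(
    responsibility="developer",
    reasoning="virtue",
    policy="unclear",
    emotion="outrage",
    coded_at=datetime.fromisoformat("2026-04-26T19:39:26.816318"),
)
print(result.emotion)  # outrage
```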
Raw LLM Response
```json
[
  {"id": "ytc_UgwR1sIRqTbyLLOP4oV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw7TcCFYn3hdqZv3mB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxDZ-HGeJv-1z_6QMB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzCs9-SPNGTfMPyKvR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "indifference"},
  {"id": "ytc_Ugw4ThcKf_PgA7GxRcl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
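To show how a batch response like this maps back to the per-comment coding result above, here is a minimal sketch, assuming Python and that the raw response is plain JSON as displayed. The function name `index_by_comment_id` and the fallback to "unclear" for missing fields are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Example batch output from the model, in the format shown above
# (a JSON array with one coding object per comment).
raw_response = """
[
  {"id": "ytc_Ugw7TcCFYn3hdqZv3mB4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse the raw model output and key each coding record by its comment ID."""
    coded = {}
    for record in json.loads(raw):
        # Keep only the four coding dimensions; default to "unclear" if one is missing.
        coded[record["id"]] = {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

codings = index_by_comment_id(raw_response)
print(codings["ytc_Ugw7TcCFYn3hdqZv3mB4AaABAg"]["emotion"])  # outrage
```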