Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_Ugw7CZz0Z…` — Microsoft of course! I’m sure Satya “Dumb Ugly Bald Square-faced Prick” Nadella …
- `ytc_UgxIbEKEy…` — I was saying exactly that right before you said it, if all this was true then th…
- `ytc_Ugyr5v1QD…` — The assumption is that AI will be benevolent. AI WILL determine that humanity, …
- `ytc_UgxNV2DwY…` — 9:19 Non work use of ChatGPT now up at 73%, it is a leisure practise, the opposi…
- `rdc_lamu7iv` — Thanks for the interesting background on it. I'd imagine that the difference on …
- `ytc_UgxViQsQk…` — AI is far more stupid than the worst human programmer in the world. I tried to p…
- `ytc_Ugw4YPJ2I…` — 49:17 it has the vilnetabiligy of being surfounded and having a holographic …
- `rdc_ofipd85` — Worth noting this "study" was commissioned by Writer, an enterprise AI company t…
Comment
"The facts on the ground changed." No, Dean, you were told this was coming. Not everyone was surprised. There was no fundamental barrier to creating AI with system 2 thinking, and there is no fundamental barrier to creating fully autonomous AI that poses an existential threat. Nothing about these systems or the field guarantees that everything will be fine or that it will take decades.
The exact same theory work that correctly predicted the specific misaligned behaviors of modern AI agents also predicts that it will destroy the world if it becomes much more capable than humans.
We can just stop it, and we have to just stop it. We should minimize the cost of that grave decision all we can, but there is no other decision to make that we are remotely likely to survive.
Source: youtube · 2025-11-20T22:2… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
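Each coded comment is checked along the same four dimensions shown in the table. As a minimal sketch (assuming only the category values observed in this sample — the full codebook may define more), a record can be validated like this:

```python
# Category values observed in this sample batch; the real codebook may
# allow additional values, so treat these sets as an assumption.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value is not a recognized category."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

# The coding shown in the table above, as a record:
record = {"id": "ytc_…", "responsibility": "none",
          "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
print(validate(record))  # [] — every dimension has a recognized value
```

A non-empty return value flags records where the model drifted outside the codebook.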
Raw LLM Response
```json
[
  {"id":"ytc_Ugxhev4NGxygLF8oZMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxDuXnyJgV4hXBoO4B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwJ_Utk815mESSL_xd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxf2ysfrcjwOnYW4F54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxmcFJw3kKLiERMxy54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxL0H9m8rS1m5QivgV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyFFyGyzTCs1cqrp2N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMyuR03RQrhVBnhxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxcR4GjDOFwp7z_kMd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzG-M5F4kw2zM21MZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
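The "look up by comment ID" step above reduces to indexing this JSON array by its `id` field. A minimal sketch, assuming the raw response is a JSON array of records like the one shown (the two records here are copied from that batch):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# These two records are taken verbatim from the batch shown above.
raw_response = """
[
  {"id": "ytc_Ugxhev4NGxygLF8oZMF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwJ_Utk815mESSL_xd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# Index the records by comment ID for O(1) lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codings["ytc_UgwJ_Utk815mESSL_xd4AaABAg"]
print(record["emotion"])  # fear
print(record["policy"])   # ban
```

An unknown ID raises `KeyError`, which is how a comment that the model failed to code would surface.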