Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "5 times the healthcare for the same price? Yeah, I don't see that price saving…" (ytc_UgyoojpqK…)
- "Perhaps God created man to create a God? Well, we were on the path of killing t…" (ytc_UgzfQiF-6…)
- "If the AI is better at it than a human, then so be it. I don’t give a fuck if th…" (ytc_Ugzehhexj…)
- "Imagine AI at the "enemies" hand. I think AI is out of control. So far what I s…" (ytc_Ugz0qxmpD…)
- "AI will need to yet recreate itself repeatedly until it becomes its own. Truly t…" (ytc_UgyopN53E…)
- "two tesla's fine is not worth enough over negligence, a life i am arguing is wor…" (ytc_Ugxy0puA1…)
- "Your bias really hurt this but I have to admit you did the best job of being a b…" (ytc_Ugy9JTnji…)
- "Until a robot can actually feel pain and emotion instead of simulating it.. we a…" (ytc_UgwBnF6kf…)
Comment
Please don't fall for the hype around LLMs!
LLMs don't strategize. There is no self-preservation involved. It's all a mirage and a result of biased training data: feed a model a good load of training material in which self-preservation stands in causal relation to the actions described, and it will reproduce that pattern. A lot of social interactions revolve around self-preservation.
We DO know how LLMs work. What we DON'T know is the internal state of the trained LLM and what the decision trees look like for each prompt. The state is the black box, not the algorithms and the basic technology.
When Geoffrey Hinton talks about Artificial Super Intelligence (ASI), he is still talking about a hypothetical technology, about science fiction. LLMs are not it. Recently he has been talking loads of BS, like Musk and Altman. Musk and Altman want us to believe that the technology will lead to ASI, because they profit from that hype and mythology.
ASI is like fusion: for twenty years now, it has been about to happen soon.
youtube
AI Governance
2025-08-26T14:5…
♥ 85
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxFOUN4gPONrhYzE094AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyTI57CyZ69NVCOFQ94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzfbP0K-G9VWxIogXd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwecEtxhU4ujeDOmSd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxItsGE8Y1gUKtxts94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
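The raw response above is a JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of how such a response might be parsed and validated before it feeds the coding-result table is shown below. The per-dimension vocabularies are inferred only from the values visible in this dump (the tool's actual codebook may contain more categories), and `parse_coding_response` is a hypothetical helper name, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the real codebook likely includes additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError for out-of-vocabulary codes, so a malformed or
    hallucinated response is caught before it reaches the database.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)  # None if the model omitted the dimension
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        coded[comment_id] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

Indexing by ID mirrors the "Look up by comment ID" feature above: once parsed, `coded["ytc_…"]` returns the four dimension values that render as one coding-result table.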