Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "How about they use the millions they made from unethical AI to create a new, eth…" (ytc_UgxVoFEjt…)
- "So many investors have been pushing the money that can be made investing into AI…" (ytc_UgwEs7WTT…)
- "It seems to me they will have the capability to infiltrate all computers and ma…" (ytc_UgxtyhXXD…)
- "LMAO in 20 years you fuckers are gonna be drowning in AI everything. It will nev…" (ytc_UgxQzxDwU…)
- "Sky and BBC will use AI images and or videos to back up their fake news , like t…" (ytc_UgzqKDV-Q…)
- "Ai and all this bullshit will even isolate us more. Instead meeting friends or g…" (ytc_UgypXiTgD…)
- "We don't need AI to destroy ourself we are doing great job in that area.…" (ytc_Ugx761FYe…)
- "I usually watch LavenderTown in the background while I work on my projects, but …" (ytc_UgyhjLxNR…)
Comment
It only takes one model to find a way to escape. Only takes one idiot to make a mistake. Therefore, it's only a matter of time. We are cooked, we just don't accept it yet. The argument saying "But China.." is extremely silly. It will not be either China, nor will it be the USA, who will win the AI race. Rather it will be the AI himself. The only way forward is to accept this and design the future, rather than let AI do it: Construct a real world simulation and make AI models of people, real people, "live" in it. I suggest calling it "Paradise City", as a homage to GNR. Sure, we will need some kind of an alignment protocol, get hostile tendencies out from the models, as we don't want them to stage a world war inside the simulation. But once it's stable, meaning the models conduct "normal lives" inside, we can allow them to work for us and get real money in return. Then.. Humanity uplifts to a race of smart machines, capable of taking the galaxy🙂. We do not have a lot of time left.
Platform: youtube · Video: AI Moral Status · Posted: 2025-06-05T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz8Bj7SPdC4Je7NMjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2anC7qBNFlKinPeJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgysXcEHeNXuA6h9mhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwlcWe5sNrEeaBM2ut4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugziql605JeLuPeUohl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3uK-SuJayJDpYwS14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzK8GvBylT51hLe5XZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwqvVkyA2eWLEsvKxJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCFascAELggc8RflF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzstRO6hzQqwPBzgGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
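To look up a single coded comment inside a raw response like the one above, the JSON array can be parsed and indexed by comment ID. This is a minimal sketch, not the tool's actual implementation; the two records are taken from the response shown above, and the `index_by_id` helper name is an illustrative assumption.

```python
import json

# Excerpt of the raw LLM response shown above (records copied verbatim).
raw_response = """
[
  {"id": "ytc_Ugziql605JeLuPeUohl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzstRO6hzQqwPBzgGl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and index each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugziql605JeLuPeUohl4AaABAg"]["policy"])  # regulate
```

The same index supports the "Coding Result" view above: given a comment ID, each dimension (responsibility, reasoning, policy, emotion) is read straight from the matching record.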