Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What people continue to misunderstand is that AI, as it stands right now, is not actually intelligent. It emulates intelligence. It is not self-aware, it has no consistent cognitive states, and it has no interest in self-preservation because it doesn't _have_ any interests. It does not think. It just _acts_ like it's thinking. It uses statistics to find the most likely thing (text, image, etc.) that the prompter wants and that's it. The reason why it seems so scarily intelligent sometimes is because:
1. It has practically the entire Internet to scrape data from, meaning it has enough statistics to give you what you want.
2. Our brains love filling in the blanks and anthropomorphizing things.
The only reason why the AIs in these simulations acted in self-preservation was because, by prompts' own definitions, the AIs were necessary to do task T. And if the prompter wants task T done and the AI is required in order to do it, the massive amounts of information stating "if you need X to do Y and there is no X, Y is impossible" that's fed into the AI means that the AI judges that it's overwhelmingly likely that the AI is X and the task T is Y in this context, and thus the AI needs to stay active so that task T can be completed. The AI doesn't care about itself because it has no sense of self. It's just following the statistics.
youtube
AI Harm Incident
2025-07-29T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx0w_JTRNvfMglznot4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDfnHtSmsIMIEB1ch4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTYyR4J8wt8Dry9fN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxekzN2roxQqc8qZSZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx4cCRaNQKs3usAgjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
{"id":"ytc_UgyjYa7aAz--csoOLCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyrP1rZLDgmpFlytUl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzzzQKYQl0PvpKfDFV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxS-D9y7pFEYPLQfQx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSOUOodNzVzP-NfwF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
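The coding-result table above is derived from the raw response: the model returns one JSON object per comment in the batch, and the entry whose `id` matches the inspected comment supplies the four dimension values. A minimal sketch of that lookup, using the first two entries of the batch shown above (the truncated-preview IDs from the sample list are not assumed):

```python
import json

# Raw LLM response as displayed above (first two batch entries, verbatim).
raw_response = '''[
{"id":"ytc_Ugx0w_JTRNvfMglznot4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDfnHtSmsIMIEB1ch4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Index the batch by comment ID so any coded comment can be looked up.
codings = {row["id"]: row for row in json.loads(raw_response)}

# The inspected comment's coding matches the table above:
# responsibility=none, reasoning=mixed, policy=unclear, emotion=indifference.
coding = codings["ytc_Ugx0w_JTRNvfMglznot4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → none mixed unclear indifference
```

If the model's output fails to parse as JSON or omits an ID, the corresponding comment simply has no coding row to display.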