Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by choosing one of the random samples below.

Random samples
- "A machine sent to disable a bomb, could easily have it’s memory and self copied …" (`ytc_UgwRMrZoI…`)
- "I would guess that every current available LLM in the world would also agree tha…" (`rdc_ntagyr1`)
- "It seems that Gödel's theorm effectively proves that alignment must fail once su…" (`ytc_Ugy62p-Ao…`)
- "AI definitely has nothing to do with demons. If we're going to go by that logic,…" (`ytr_UgxfXU34O…`)
- "Democracy will be dead in 2045! The more things change, the more things stay the…" (`ytc_UgxeX-qWf…`)
- "If you support or defends AI, you shouldnt even be in society. Thats why i hate …" (`ytc_UgyRpFZKx…`)
- "In middle school, i was basically eating rocks, tf you mean middle schoolers can…" (`ytc_UgxZZHyHP…`)
- "and the hypocrisy is they ask ai to write about a specific topic to ask the anot…" (`ytr_UgyvVkten…`)
Comment

> This assumes that AI adoption is a straight line. Every simulation is based on a basic premise of a downward trend. The board sees bigger profits, therefor they get rid of the staff. Eventually no one will be able to afford the product, so either the company drops the price and begins to lose profit, or they lose market share. This also does not question if AI could replace the board of directors? Always working from the base up, but what if AI worked from the top down, then what? What if the shareholders demanded that the board be replaced by an AI, then what??? Rather than be led by AI, we need to start thinking about replacing centralised AI with decentralised AI, one that aligns with the user and by reasoning also defends the user. These stories always portray a them against us narrative, which never continues indefinitely and always turns out totally different to the predictions. Look back at every revolution and you'll see the same stories that told us we were going to lose everything.

Platform: youtube
Video: Viral AI Reaction
Posted: 2025-11-25T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgztuFR0ZjQRBXuKzT14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugywlkb9RKbhpgiC-qp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwRzL-5gim3irsYXTF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJJql9vTiZn2WnOhh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy7oyxoV2fiE4Np12d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw1MtMRBCZyM7wP-qB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxX4eoghgrSx5yIPTV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx0nrWoBAMMd14mphF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyuhSL1BT6DAW7fDZx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxgmWSaMaP780OlMOl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
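Assuming the model returns a JSON array like the one above, the lookup-by-ID step can be sketched as a plain `json.loads` followed by indexing on the `id` field. The two records below are copied from the sample response; the field names match the coding dimensions in the table.

```python
import json

# Raw model output: a JSON array of coded comments (two records copied
# from the sample response above).
raw = """
[
  {"id": "ytc_UgzJJql9vTiZn2WnOhh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy7oyxoV2fiE4Np12d4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

codes = json.loads(raw)

# Index the batch by comment ID so any coded comment can be looked up directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgzJJql9vTiZn2WnOhh4AaABAg"]
print(row["emotion"])  # outrage
```

In practice the model output may not parse cleanly on every batch, so a production version would wrap `json.loads` in error handling and validate that each record carries all four dimensions before indexing.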