Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Basic information technology science tells all of us that if AI says something y…" (ytc_Ugzetj1DT…)
- "So in most sci fi having a robot almost indistinguishable from human beings was …" (ytc_UgxYQQ3rc…)
- "I actually really like most things he thinks is scary. AI blablabla.. I rather l…" (ytc_Ugz_GgRof…)
- "which A.I. ?? earths manufactured A.I or the true dark earth A.I . Be precise Wh…" (ytc_UgwRDSs3W…)
- "Throughout history, every major invention — books, radio, television, music — wa…" (ytc_Ugz1NvKjZ…)
- "im starting to as well. i have had conversations with other language models, and…" (ytr_UgzA4RKCW…)
- "People are telling me to learn everything I can about AI prompts and to turn tha…" (ytr_Ugx0FqUvI…)
- "All those sci-fi movies were right AI will go rogue and try to take out humanity…" (ytc_Ugw3Rmhzq…)
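The list above is a random draw of coded comments; clicking one opens its full record. For illustration, the sketch below draws the same kind of random sample programmatically. The file name `coded_comments.jsonl` and the `id`/`text` field names are assumptions made for this example, not the project's actual storage format.

```python
# Minimal sketch: draw random coded comments for manual inspection.
# Assumes a JSONL file (hypothetical name) with one coded comment per line,
# each carrying at least "id" and "text" fields.
import json
import random

def load_coded_comments(path="coded_comments.jsonl"):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def random_samples(comments, k=8, seed=None):
    rng = random.Random(seed)
    return rng.sample(comments, min(k, len(comments)))

if __name__ == "__main__":
    for c in random_samples(load_coded_comments(), k=8):
        # Preview like the list above: comment id plus a truncated snippet of text.
        print(c["id"], "·", c.get("text", "")[:80])
```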
Comment
There's no reason AI systems WON'T behave like all other intelligent/living systems, whereby sometimes it's evolutionarily (and game-theoretically) beneficial to go rogue. This isn't even a property of intelligence, it's almost a property of the underlying information landscape.
If you disagree, you owe a story that explains why AI's will be different than every other intelligent/living system that has ever existed.
Source: youtube · Posted: 2026-03-26T05:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
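Each coded comment gets exactly one label on each of the four dimensions. The sketch below shows one way such a record could be represented and checked; the allowed label sets are inferred only from values visible on this page and are not necessarily the complete codebook.

```python
# Minimal sketch of a coded record and a label check.
# The label vocabularies below are inferred from values visible on this page;
# the real codebook may contain additional or different categories.
from dataclasses import dataclass

ALLOWED = {
    "responsibility": {"ai_itself", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self):
        for dim, allowed in ALLOWED.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"{self.id}: unexpected {dim!r} label {value!r}")
```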
Raw LLM Response
[
{"id":"ytc_Ugy1EdtJyWvtvqqcGvd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2jXt3QGvT9AoDaKl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwaPUTlcoFwUyX5Q5t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUoM1-X-1LQAuSnTN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz5YNW1A-YgjUZbzLR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyU4DW73JbINRI0qW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwS8qojC6vMABuDxk54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQpTn1IqFlkpYN4RJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxuiZUl6eTwDiphuwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzFaZVfFaQUyK61Qhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
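The raw response is a JSON array with one object per comment in the batch. A minimal sketch of parsing it and pulling out the record for a single comment ID, which is the same lookup the search box above performs, might look like the following; the file name is hypothetical.

```python
# Minimal sketch: parse a raw batch response and look up one comment's coding.
import json

def index_raw_response(raw_text):
    """Parse the model's JSON array and index the coded records by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw_text)}

# Hypothetical file holding the raw response shown above.
with open("raw_llm_response.json", encoding="utf-8") as f:
    by_id = index_raw_response(f.read())

# The last record in the array above codes the displayed comment.
coding = by_id["ytc_UgzFaZVfFaQUyK61Qhp4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# -> ai_itself consequentialist unclear fear
```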