Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Why do we not see the obvious? I have noticed first if I said something it woul…
ytc_Ugy9izuDK…
I'm 99% sure I just saw those robots leave footprints in the sand, so either it'…
ytc_UgwR36JkD…
If this idiot thinks humans are smarter than AI then he'll know that firing nuke…
ytc_UgyX6aH7T…
People in the Middle Ages saw omens in the sky and swore by them. People in the …
ytc_UgzEMinfA…
Surly an AI will eventually find true statistics and come back to its "racist" c…
ytc_UgydBfz54…
Human beings stand around talking about how AI could destroy everything. Then st…
ytc_Ugzn1oVO8…
bullsh!t Ai, fake Tesla Autopilot/FSD slamming into 16 First Responder vehicles,…
ytc_UgxUBK_eI…
Yes, I agree with you but I’m mean babysitting that’s nuts and crazy and insane.…
ytr_UgyNDDvRj…
Comment
A thought I've had is that LLMs are predictive models. They take in information, analyze it, and try to figure out what the most likely outcome is, and feed that back to the user. These things are trained on nearly every bit of writing we have. And AI destroying humanity is one of these oldest and most popular tropes used by Science Fiction literature. So... wouldn't it make sense for the prediction engine to see that, "Oh, you keep writing about AI destroying you, or coming close to, so that's the logical conclusion" and following that logic? It's already been discussed that these things don't think. Don't know. It's all numbers and probability. It's just following the trends that have been set in place. At least, so long as people continue to misuse it.
youtube
AI Moral Status
2025-12-12T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyBAwCRBzS_XapFi5J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwv6JeVslLKcXO_K2R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxdomAxdbGGbvvrh4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzaYssCSyX2smPfH0R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwip_VVgxx1MsCNq8h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw_Du3fNSCIdcRzIh94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzJSg7QXfpv21ExIhh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8GvEt7Gm0vQIbLsN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgywAWD1gWBab0iqb4V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKNAgH33nc0oqO8j54AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
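Since the raw response is a JSON array of per-comment codings keyed by `id`, the "look up by comment ID" step amounts to building a dictionary from that array. A minimal sketch of that lookup, assuming the response shape shown above (the `index_by_id` helper is hypothetical; the two sample rows are copied from the raw response):

```python
import json

# Abbreviated raw LLM response: a JSON array of coding rows, one per comment.
raw_response = '''[
  {"id":"ytc_UgyBAwCRBzS_XapFi5J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8GvEt7Gm0vQIbLsN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

def index_by_id(response_text):
    """Parse the raw response and index coding rows by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
print(codings["ytc_Ugz8GvEt7Gm0vQIbLsN4AaABAg"]["emotion"])  # -> outrage
```

A malformed response would raise `json.JSONDecodeError` here, so a production version would likely validate the parsed rows before indexing.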