Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I am a fan of this video. Let me be perfectly clear, I have used a text to image…
ytc_Ugzqn6uYV…
I recently found out the cute image on my shower curtain was AI generated and no…
ytc_UgwAE-FUV…
A) hilarious
B) I don't know what I did to my version but ChatGPT likes to sass…
rdc_oa1dscr
AI robotics would replace human labour in factories by as early as 2030s, in whi…
ytc_UgxO4c1xE…
Then in the 80’s we had some cars with automatic shoulder belts on tracks. But y…
ytr_UgxvJuGKl…
The point that AI (or algorithms in general) does not know what it's doing is tr…
ytc_Ugyre2GAj…
The more incels deepfaked into man on man adult films, the more interesting this…
ytc_UgzGkSNhZ…
I dislike how fatalist and certain these AI guys are. Nothing can stop this, the…
ytc_Ugzt3XrwZ…
Comment
25% risk for humanity ending outright... or you could just not. Hrmmmmmm, seems like an unnecessary risk to me. What's the probability that AI will actually foster some kind of utopia and not a terrible dystopia? At the end of day it's just an ego thing for the elites who are chasing phantoms trying to find a ghost in the machine, hoping they don't open a portal to hell while inscribing a summoning circle around a bottle of alchemical reagents they barely understand. Sure, it could turn to gold, or you could release cyanide gas and create a chemical weapon ripe for the next world conflict. We already know how chemical weapons went last time, we know how nuclear research went... one of the first applications for AI will be warfare, and this could be the worst thing imaginable, if your strategies are devised by an AI that can lie to your military and political leaders so it meets it's objectives.
youtube
AI Moral Status
2025-11-03T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy7OYJTYkLMcnJS1El4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzP8dnHSX0C0jdV95d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx2IHKvnKwsopuNSGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0IR0yRPYq0AjV92h4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwhyq8BAlC9kCXCLPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz02X5YR-W2s8L5n3B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwI2S10h1ntg512Or54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzsTUMeQm1KDcKvcnh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6x9zZNnO2jRdBmI14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugygsx3SCUZ5Wk1hqJ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
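The raw LLM response above is a JSON array of coding records, one per comment, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed label sets are inferred from the values observed in this batch and are an assumption, as the real codebook may define more categories.

```python
import json

# Allowed labels per dimension, inferred from values observed in this
# batch (assumption: the actual codebook may include further categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must be an object with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every dimension must be present and carry an allowed label.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# One record copied from the batch above.
raw = ('[{"id":"ytc_Ugy6x9zZNnO2jRdBmI14AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

A filter like this is useful as a gate between the model output and the dashboard: records with unknown labels or missing dimensions can be logged for re-coding rather than silently displayed.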