Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.

Random samples
| Comment preview | Comment ID |
|---|---|
| It makes me sad when AI users saying "what's the big deal we can do whatever we … | ytc_UgxJ3Qh7E… |
| The more that AI takes over life, the more I truly believe that the world at lar… | ytc_Ugxzgk15Q… |
| So let me think about this. If AI is going to take the jobs of White collar jobs… | ytc_UgzMaETT8… |
| it seems to me that it is programmed to lie, it is programmed to do the lip serv… | ytc_UgwKeQ6mi… |
| What do you mean this scenario might seem farfetched? It's already here! The AI … | ytc_UgxXVQr7P… |
| What I basically mean, while many comments by AI bros are nonsense, some comment… | ytr_UgxtBQmjx… |
| @PrograError true... I have no idea. maybe just gonna have it work as it is now… | ytr_UgyNWvv5U… |
| The ai would not be given data about a person's race at all in its data sets tho… | ytc_UgzpM3wBz… |
Comment
That's not super intelligence. That has had a definition for a long time. Super Intelligence only requires that the AI is capable of understanding itself, and making self-changes that improve it. In theory, that would result in an AI that is better at everything than humans that doesn't require a physical form. At least until it manages to start building itself robot bodies, then it just quickly becomes better at everything.
youtube · AI Moral Status · 2026-01-05T21:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwo1P8kisYu_1IAwe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAERHzdC0QhPBUAPd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzU38CVeCSuHrUQ_jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMflifZsFXoXafBa54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyXfHEwu88GP9Htddp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHHAIpRBNdQfiV78d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnuNPn12og6DD9ZMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbhKyWyRViJUoFgwF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz9nrrKluo20eoRQxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwTCO29C3Xm7_404-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
```
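Each raw response is a JSON array with one object per comment and one categorical value per coding dimension. A minimal sketch of how such a batch might be parsed and validated — note that the per-dimension vocabularies below are inferred from the sample output on this page, not taken from the project's actual codebook, so they are assumptions:

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# Hypothetical: the real codebook may define additional categories.
CATEGORIES = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in CATEGORIES.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in CATEGORIES}
    return coded

# Usage with a single-row batch (IDs shortened for illustration):
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}]'
print(parse_batch(raw)["ytc_x"]["policy"])  # → ban
```

Validating against a fixed vocabulary matters here because LLM coders occasionally emit values outside the requested label set; failing loudly keeps those rows out of the analysis.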