Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI does not help to do my job faster. Actually, it slows it down. Yet, it helps …" (ytc_Ugx8B6bpZ…)
- "As someone who draws for a living & a passion, I am deeply offended when someone…" (ytc_Ugzi9AsfR…)
- "@iliadin93 Because they will be led by AI, AI will control them. Eventually we w…" (ytr_UgwVS5S3w…)
- "So when everyone becomes a plumber, the hourly rate will drop down to minimum wa…" (ytc_Ugz4X6jTm…)
- "You know that is a really great argument (not), but there actually is a differenc…" (ytc_UgwoBs0Pr…)
- "During armmegeddon a robot will be placed in Jerusalem, in the temple and it's …" (ytc_UgytWyEYC…)
- "No, ai art is not real art. It steals art from ACTUAL artists to make said 'art'…" (ytr_UgyRY1y2l…)
- "My friend: (we're both big Murder Drones fans) I know a site where you can find …" (ytc_Ugz-ytjRn…)
Comment
If our human brains are intelligent, and our intelligence varies from person to person, then it follows that superintelligence is "possible". We have time constraints, interfacing/input bandwidth constraints, size constraints, energy constraints, etc. that do not apply to literally every imaginable class of cognitive system. That seems like a pointless question to ask.
As for "reasoning" models, they are kind of meaningless for "interpretability". They do help get more accurate answers sometimes by forcing the model to take a longer path between words/tokens that complies better with whatever was recognized as valid reasoning by people producing the training data, but that does not ever mean the AI is actually operating on that "train of thought". That is not even the right way to mentally model what the language model is doing.
youtube · AI Moral Status · 2025-11-03T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzn4o4kur6Mq40hp8J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyZYMYBBuYPpyMbEsx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugyydiy5p7thZnzDyLN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwJkGKA1HuK8JYFSAx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw8bLDwfL6RPvTyPPV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxjXAlvQdUVYZMGGJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwaw-i9NShXvt0dDwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzbp_kNVUdGzWnwPA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwa4AHICQA_czmXW-N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxjc_l5VyqsVvq9kXh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
```
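Each record in the raw response carries the four coding dimensions shown in the table above. A minimal validation sketch is below; note the allowed value sets are inferred only from the categories visible in this one sample response, not from the project's actual codebook, and `validate_response` is a hypothetical helper, not part of the tool.

```python
import json

# Assumption: value sets inferred from this sample alone; the real
# codebook may define more categories per dimension.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "contractualist", "unclear", "consequentialist"},
    "policy": {"unclear", "liability", "none", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record is fully coded."""
    records = json.loads(raw)
    for rec in records:
        # Sample IDs start with ytc_ (top-level comments) or ytr_ (replies).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage: one record from the response above passes validation.
sample = ('[{"id":"ytc_Ugzn4o4kur6Mq40hp8J4AaABAg","responsibility":"none",'
          '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
assert len(validate_response(sample)) == 1
```

A check like this is useful because LLM coders occasionally emit labels outside the codebook; failing loudly on an unknown value is safer than silently storing it.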