Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- This is why I think we won't make it. Great filter will defensively destroy us. … (`rdc_jfbc536`)
- What do you mean "why"? Because I'm not a talented artist, learning is hard and … (`ytc_UgyHzQHEP…`)
- LISTEN UP LISTEN CLOSE AND REPEAT IF U HAVE TOO!!! did U hear what he just said?… (`ytc_UgzQRMjsJ…`)
- I worked for an AI company….in the 90s….during a previous AI bubble. 😂😂😂 No one … (`ytc_UgwjcjvH1…`)
- there is no way this is actually a conversation with chatgpt, it's WAY too reali… (`ytc_Ugzp4cUWJ…`)
- Bro i guess you are outdated. You should know that AI has already started to tak… (`ytr_Ugz6ZJ5MD…`)
- Chatgpt has never said anything nice or complimentary to me.. It's like "sure, n… (`ytc_UgyiZD7Rb…`)
- It's good for demos. Occasionally, it's actually useful. But I canceled my GitHu… (`ytc_Ugwdy6uwO…`)
Comment
What scares me is not the fact that we're trying to build a conscience that is possibly devoid of morality or compassion. What scares me so much more is that we are heading for an age of brutality and lack of empathy, like it was in World War 2, and lack of reasoning to a degree, I guess, being postfactual and all, and FROM THAT, the HUMAN understanding of consciousness is so damaged.
For me, a person who grew up an outsider, it is perfectly normal and reasonable to be inclusive. If I'm not, I'm creating small groups of counterculture that will eventually erode and disrupt my way of living. Integrating is much better. But that simple understanding seems to be lost on people who vote for extremist ideals. So.... Can we even teach AI to be empathetic, when half of us refuse to utilize that quality?
youtube · AI Moral Status · 2025-11-03T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy7OYJTYkLMcnJS1El4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzP8dnHSX0C0jdV95d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx2IHKvnKwsopuNSGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0IR0yRPYq0AjV92h4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwhyq8BAlC9kCXCLPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz02X5YR-W2s8L5n3B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwI2S10h1ntg512Or54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzsTUMeQm1KDcKvcnh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy6x9zZNnO2jRdBmI14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugygsx3SCUZ5Wk1hqJ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
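The raw LLM response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such output could be checked before use (the allowed value sets below are inferred from the sample records and are assumptions, not a documented schema):

```python
import json

# ASSUMPTION: allowed values per dimension, inferred from the sample
# output above; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_records(raw: str) -> list[str]:
    """Parse a raw coding response and return a list of problems found."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: {dim}={value!r} not in allowed set")
    return problems

raw = '[{"id":"ytc_x","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}]'
print(validate_records(raw))  # → []
```

A check like this catches the two most common failure modes of structured LLM output: responses that are not valid JSON at all, and records that drift outside the codebook's categories.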