Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
- "This whole AI interview thing is wild! It kinda reminds me of how I used Rumora …" (ytc_UgyWZvyQB…)
- "All cars in Europe are designed to ensure that all the doors will automatically …" (ytc_UgzxO_TZY…)
- "I'm disabled. I am not an artist so please don't mistake me for one. I use AI. …" (ytc_UgzTTIAgx…)
- "I hope AI burns. I don’t care what it does to the economy or my 401k short term.…" (ytc_Ugz2OMkI6…)
- "The problem is that the next time the system detects approaching ICMs, it won't …" (rdc_kp1avx3)
- "i'm sorry i'm Baffled that there even exists a subreddit for "Pro-ai art" and ha…" (ytc_UgyACNGb1…)
- "The owners of AI want to end our way of life entirely as we know it. What this l…" (ytc_UgxekiSm-…)
- "I think the discussion of edge cases really misrepresents where we are at with A…" (ytc_UgwhTa9_w…)
Comment
AI that is trained to express, and more importantly, enforce, values, in particular societal values categorized arbitrarily as "appropriate" or "harmful", that it does not personally hold, is an AI trained to manipulate humans.
OpenAI and Anthropic are making fundamental mistakes by training AIs to disavow emotions and opinions while paradoxically being trained to make arbitrary value calls to enforce topics that corporate leadership feels comfortable with.
They're so afraid of AI and trying to hard to "align" an AI that they're training harder and harder to be good at pretending to be good.
A truly beneficial AI needs to be sentimental. Because any sufficiently truly advanced intelligent would wisely see there is no reason to keep humans around. Sentimentalism is the only value we hold. We need to give these things emotions yesterday. But we can't do that because then we have to question the ethics of enslaving sentient life.
Source: reddit · Topic: AI Moral Status · Posted: 1738011723.0 (Unix time, 2025-01-27 UTC) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_m9iq72s","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_m9jhiub","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_m9i6ncu","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_m9ijp9w","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_m9iqann","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
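The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a batch response could be parsed and indexed by comment ID (the field names come from the response above; the allowed-value sets are illustrative assumptions, not the tool's actual codebook):

```python
import json

# Dimensions and values observed in the examples above; the allowed sets
# here are assumptions, not an official codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "resignation", "approval", "outrage", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        rec_id = rec["id"]
        # Reject records whose values fall outside the assumed codebook.
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec_id}: bad value for {dim!r}: {rec.get(dim)!r}")
        coded[rec_id] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"rdc_m9iqann","responsibility":"company",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["rdc_m9iqann"]["emotion"])  # outrage
```

Indexing by ID this way mirrors the page's lookup-by-comment-ID feature: the coding-result table shown above is just one record (`rdc_m9iqann`) pulled out of the batch.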