Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I SAW THAT "Erm, akshually, AI Creators deserve protection" like buddy I dunno h… (ytr_UgzhvdYKc…)
- The problem is that a lie is done with the intention to decieve. AI "lies" only … (ytc_Ugx7nm8zS…)
- Can anyone explain why AI wants to 'help' me with anything I do including make a… (ytc_UgxDWsHgO…)
- When it gets to the point that it takes millions of people’s jobs you’ll see a w… (ytc_Ugw7GCtjJ…)
- It's all well and good, if only the humanitarians use it but what if the worst w… (ytc_UgyRwSZ6G…)
- AI be more racist than actual people and it doesn't even have a race. Lets get i… (ytc_UgzSGuLm0…)
- This sounds less like an "AI stole my job" story and more of a "I trusted the wr… (ytc_UgzlzyJCE…)
- i felt the existential dread when AI art first came out. then i tried out Stable… (ytr_UgxG7Qm8a…)
Comment
To me, solving alignment means the birth of Corporate-Slave-AGIs. And the weight of alignment will thus fall on the corporations themselves.
What I'm getting at is that if you align the AI but don't align the controller of the AI, it might as well not be aligned.
Sure the chance of human extinction goes down in the corporate-slave-agi route... But some fates can be worse than extinction...
Source: reddit · Topic: AI Moral Status · Posted: 1738005642.0 (2025-01-27 UTC) · ♥ 412
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},{"id":"rdc_m9ihrce","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]