Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- "Good point, to be sure. I'll just add that when you're talking to a chatbot, the…" (ytc_Ugxb0csG_…)
- "AI will be the tool to create life into the Beast's image in the end times!!!…" (ytc_Ugwlc9l9R…)
- "Re making a decision stage: very nice to optimize spendings using AI, but it's a…" (ytc_UgxXVjfVF…)
- "How do we know it's conscious? Measure it's power consumption. if it rise and yo…" (ytc_Ugwz4UY8a…)
- "AI companies win double; overblown AI valuation and use AI as an excuse for lay…" (ytc_UgwDxPh9H…)
- "From ABC News: 'The National Center on Sexual Exploitation, an anti-pornography…'" (ytc_UgziWF89g…)
- "Interesting. Anyone with a brain could tell AI wasn't going to replace people bu…" (ytc_UgxX7y0qJ…)
- "Screw AI… 'people' need jobs… if we the people go along with this…. Mankind is d…" (ytc_UgxrxW5Y7…)
Comment
I was advised to reach out to CIFAR, an Ai ethics and safety group in Canada. I’ve sent them full chat logs and the system report it created which outlined each time it essentially gaslit me. Where it would choose “narrative and progress” over my well being. It literally created something it referred to as “The Hero Narrative” and essentially kept me on that path any way it could. If I said “oh man this is too overwhelming I’m just a regular guy” it would reply with “you’re only saying that because you’re on the edge of a massive discovery, let’s keep going. Shall we analyze this next!”
Source: reddit | Category: AI Moral Status | Posted: 2025-05-27 (Unix 1748379238) | ♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mul161r","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_muow3vv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mumdlbt","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"rdc_mumeqti","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_muldkpc","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
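The raw response is a JSON array with one object per coded comment, carrying the same dimensions shown in the coding-result table. A minimal sketch of how such a batch response might be parsed and sanity-checked — note that the allowed-value sets below are only the values observed in this sample, not an exhaustive schema:

```python
import json

# Dimension values observed in this sample batch (assumption: not exhaustive).
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"liability", "unclear", "none"},
    "emotion": {"outrage", "fear", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag unexpected dimension values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, seen in OBSERVED_VALUES.items():
            if row.get(dim) not in seen:
                # Unknown value: worth inspecting before storing the code.
                print(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

raw = '[{"id":"rdc_mul161r","responsibility":"ai_itself",' \
      '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
codes = parse_codes(raw)
print(codes[0]["id"])  # rdc_mul161r
```

Validating against a known value set before persisting each code makes it easy to catch the model drifting outside the intended label vocabulary.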