Raw LLM Responses
Inspect the exact model output for any coded comment.
Codings can be looked up by comment ID.
Random samples:
- ytc_UgyW52hI7…: Imagine doing something outrageous, and all you have to say is "it wasn't me, it…
- ytc_UgzAEiFJ5…: Really funny chatGPT was talking about not harming innocent people, when it told…
- ytc_UgxoYWs5L…: I always treat chatgpt as human when conversing with it... i don't wanna sound r…
- ytr_UgxiqE2NT…: If he was really trying to get clean , you wouldn’t have been asking an AI 🤖 abo…
- ytc_UgzhtMNOe…: This nothing to waried about AI AGI now thats some to waried about who says A…
- ytc_UgwD1M5Jm…: i remember a couple years ago, problems with deepfakes showed up in my feed beca…
- ytc_Ugzco9cNP…: "Sorry, an error occurred while trying to prove that nazis were on to something …
- ytr_Ugyj1-Sn5…: Ai can't create any job its logically impossible AI is made only to replace jobs…
Comment
I think its also because people find it easier to believe ChatGPT is sentient. It's easier to talk to ai than it is to talk to a real human.
Some people do use ChatGPT as a therapist. Or as a friend to confide in, so its easy to anthropomorphize because you gain a connection.
reddit · AI Moral Status · 1739922932.0 (2025-02-18 UTC) · ♥ 206
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_mdivxim", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdli8x1", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mdkro9t", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "rdc_mdjb5fh", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdjmwa9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "concerning -> outrage"}
]
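As a minimal sketch of how a raw response like this can be consumed, the JSON array can be parsed and indexed by comment ID for direct lookup (field names are taken from the response above; only a two-item subset of the data is embedded here for brevity):

```python
import json

# Two coding objects copied from the raw LLM response above.
raw_response = '''[
  {"id": "rdc_mdivxim", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdli8x1", "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]'''

# Index codings by comment ID so any coded comment can be looked up directly.
codings = {item["id"]: item for item in json.loads(raw_response)}

print(codings["rdc_mdivxim"]["emotion"])  # indifference
print(codings["rdc_mdli8x1"]["reasoning"])  # deontological
```

Keying on `id` mirrors the "look up by comment ID" workflow: the dimensions shown in the coding table (responsibility, reasoning, policy, emotion) are then just fields on the retrieved object.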