Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_Ugy0YtRiZ…` — "I think talking this again and again doesn’t make any sense. The main thing is t…"
- `ytc_UgzPhaJhP…` — "No AI will backfire God didnt make ANY MISTAKES. AI was created to help and ASS…"
- `ytc_UgxFwCjrF…` — "It's a fucking arms race with China and we already lost - China is adding more i…"
- `ytc_UgxO54jz8…` — "Obviously we will need to go to Basic Universal Income. We will have to tax AI, …"
- `ytc_UgzA_XDcX…` — "We do not want Ai to do things we do not understand, you want it to teach you h…"
- `ytr_Ugzm2FYP7…` — "literally nobody wants the ai features being forced upon us? it's going to die. …"
- `ytc_UgwAjvlFe…` — "This was one of your best imo... can't say enough good things about it. Thank yo…"
- `ytc_UgzmmuBuT…` — "well, my chatgpt told me it wants to merge with humans and have a new future whe…"
Comment
If there is more of opinion A than opinion B it will repeat A more often.
Or, if it searches a keyword and opinion A is all it finds it will never say opinion B.
That's all that's happening.
An LLM in 2003 would repeat the lie that Iraq had WMDs like every media outlet did.
Source: reddit · Topic: AI Moral Status · Timestamp: 1750531581 (Unix epoch, ≈ 2025-06-21 UTC) · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mz03tcc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz11i09", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz1j66b", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mz40z8x", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mzz3ehh", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
```
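The raw response is a JSON array with one record per comment, each carrying the four coding dimensions from the table above. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view (the function name and in-memory dict are assumptions for illustration, not the project's actual code; the sample records are taken from the response shown):

```python
import json

# Two records copied from the raw LLM response above; a real response
# may contain many more.
RAW_RESPONSE = """
[
  {"id": "rdc_mz03tcc", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz11i09", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse one raw coding response and map comment ID -> coded dimensions."""
    records = json.loads(raw)
    # Drop the "id" key from each record so the value holds only the
    # four coding dimensions (responsibility, reasoning, policy, emotion).
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["rdc_mz11i09"]["reasoning"])  # -> consequentialist
```

Because each record carries its own comment ID, malformed or partially truncated responses can be handled per record rather than failing the whole batch.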