Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxB6DGmn…: "2016: real or fake / 2020: real or cake / 2025: real or AI / 2026: robots or humans…"
- ytr_UgwPFAB6v…: "The only reason I would use Ai would be because of toxic your community is just …"
- ytr_Ugxy3sV5q…: "Not really. UK firms already get lots of heat for their rates, and are transpare…"
- ytc_UgywLKhoG…: "Tesla autopilot crash and not "reading /analyzing" situation correctly is expect…"
- ytc_UgyWRPWI5…: "Sometimes the AI doesn't know what tf it's doing so it just makes no sense. Like…"
- ytc_Ugzwsh2G_…: "The so-called activists doing politics... then we have to say it as it is, they …"
- ytc_Ugzxu1D1C…: "AI is AI, its not magic its not stupid. However what is stupid, this argument. I…"
- ytc_UgzjfwPAc…: "Did this on Google Gemini, but subbed apple with peach. My first question was, a…"
Comment
True, but this is just an excuse to go and sensor everything. Deepfakes are not advanced enough to be dangerous yet. And even if they were, you still need voice actors who can exactly replicate the subject, which is hard to almost impossible. So yeah, this is just an excuse to go and censor shit
Source: reddit
Topic: AI Harm Incident
Posted: 1580892031.0 (Unix timestamp, ≈ 2020-02-05 UTC)
Likes: ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_fgldeg7", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_oi3uqve", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_kk2yetk", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kk3fd7f", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_kk2p6ks", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
```
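A consumer of this raw response typically parses the JSON batch, indexes records by comment ID, and checks each coded dimension against the label set before filling in a "Coding Result" table like the one above. Below is a minimal sketch of that step; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here, but the `ALLOWED` label sets only include values observed on this page — the coder's full codebook is an assumption and may be larger.

```python
import json

# Two records from the batch response above (truncated for brevity).
raw = """
[
  {"id": "rdc_fgldeg7", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"},
  {"id": "rdc_kk2yetk", "responsibility": "government", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Label sets observed in this page's output; the real codebook
# likely has more values (assumption, not confirmed by the source).
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"outrage", "fear"},
}

def validate(records):
    """Return (ok, errors); errors lists (id, dimension, bad_value) tuples."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return (not errors), errors

records = json.loads(raw)
by_id = {r["id"]: r for r in records}   # lookup table for "look up by comment ID"
ok, errs = validate(records)
```

With this index, rendering the table for a selected comment is a single dictionary lookup (`by_id["rdc_fgldeg7"]`); the validation pass guards against the model emitting labels outside the codebook.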