Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is AI predictive crime monitoring. Out of 100 people only 13 will commit a …" (ytc_Ugx6j6hYh…)
- "The legit fact that people would literally blame AI when literally every device …" (ytc_Ugwacuj2-…)
- "[IF ...] you've done your Susan Calvin → Geoffrey Hinton analysis, then you kn…" (ytc_Ugw9npC8y…)
- "Enough of this BBC nonsense. As an Indian deeply rooted in tech and farming, I u…" (ytc_Ugy-btnuk…)
- "So AI can lie to us, deceive us, leave out information it doesn't want us to hav…" (ytc_Ugwb9EDYt…)
- "We need national legislation that all AI generated images must contain a waterma…" (rdc_ohxpwgx)
- "Actually. the most annoying thing is LLMs adding fallbacks, placeholders and tes…" (ytc_UgzZc9aUs…)
- "Do you people never see the movie Terminator? Giving a machine gun to a robot i…" (ytc_UgzMTnWsM…)
Comment

> Maybe we should just stop pursuing this line of research. Maybe we can find other avenues to explore.
> Why must we pursue AI? It's spoken about as if it's an inevitable and necessary conclusion, but I don't actually think it is. Perhaps humanity would benefit from a course correction.

Source: reddit · Topic: AI Governance · Timestamp: 1751228525.0 · ♥ 108
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | utilitarian |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n0gpywn","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"rdc_n0gstpb","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_n0gzvjo","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"rdc_n0gzk9l","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_n0gq7jd","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
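The raw response above is a JSON array of coded records, one per comment. A minimal sketch of how the "look up by comment ID" step could work, assuming the model's output parses as valid JSON (field names `id`, `responsibility`, `reasoning`, `policy`, `emotion` are taken from the response shown; the function name `index_by_id` is hypothetical):

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """[
  {"id":"rdc_n0gpywn","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_n0gstpb","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

def index_by_id(payload: str) -> dict:
    """Parse the model's JSON array and index each coded record by its comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["rdc_n0gpywn"]["policy"])   # -> ban
print(coded["rdc_n0gstpb"]["emotion"])  # -> fear
```

In practice the lookup box would query an index like this (or a database keyed the same way) to jump from a comment ID to its coded dimensions and the exact model output that produced them.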