Raw LLM Responses
Inspect the exact model output for any coded comment — look it up by comment ID, or open one of the random samples below.
- "As a marketer I hate this. I’m fine with AI in the campaign concept stage, but o…" (`rdc_nns603t`)
- "Yesterday I started chatting to Bard Ai and I asked the Ai if it is a sentient b…" (`ytc_Ugz9P_ZlL…`)
- "At the moment everyone is talking about AI as if it's one person, but currently …" (`ytc_UgyW0CKZp…`)
- "FIRST THING FIRST! Create an AI town with manufacturing, grocery store, other bu…" (`ytc_UgyONINtS…`)
- "Me poisoning random AI images and submitting them for scraping to poison the AI …" (`ytc_UgxANmDCA…`)
- "the problem with self driving cars is going around a bend at 60mph with no hands…" (`ytc_UgzbN9xG6…`)
- "every time I listen to Eliezer i'm convinced AI is harmless, Ezra tried really h…" (`ytc_UgxJfABiB…`)
- "As someone who uses AI art for fun (not profiting or anything), I can agree with…" (`ytc_UgwTaLKM-…`)
Comment
> Did anyone else read the article?
> Never heard of Timothy Geigner, but his writing is REALLY “readable, informative AND clear”.
> Super refreshing to able to read something & actually gain information & perspective in the haze of the A.I. slop era.
Source: reddit · Topic: AI Harm Incident · Posted: 1769294166.0 (Unix timestamp) · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o1im78l","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_o1ifdqg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_o1jdwsj","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_o1jdu65","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_cjtfamk","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
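A raw response like the one above has to be parsed and sanity-checked before its rows are stored as codings. Below is a minimal sketch of that step, assuming the allowed value sets for each dimension are those visible on this page (the real codebook may define more categories, and `validate_codings` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension — inferred only from values visible on this
# page; the actual codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"government", "company", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '''[
  {"id":"rdc_o1im78l","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"bad_row","responsibility":"aliens","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''
print([r["id"] for r in validate_codings(raw)])  # → ['rdc_o1im78l']
```

Rows that fail validation are dropped rather than repaired, so a malformed model answer never silently pollutes the coded dataset.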