Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
He claims, "Google Search didn't need regulation; it just made information avail…
ytc_Ugxz0406I…
The difference between AI and a Van Gogh painting is you can only buy one of the…
ytc_Ugx0EL7jw…
Personally, I don't really like the idea of poisoning ia art for the sake of poi…
ytc_Ugy5GzMnt…
There has never been a technology in history that destroyed more jobs than were …
ytc_Ugy9ZXdEf…
If you want to have an LLM talk about whether it's conscious, I suggest you use …
ytc_UgyE_cLrV…
A Megan adult robot? Next she’s gonna malfunction, do a little dance and try to …
ytc_UgwlKg2r3…
Sahar, the way you conduct this debate brings up several problems that weaken th…
ytc_UgzSK2m3-…
@ProductBasement Frankly, that seems like a silly assumption. And ignores other …
ytr_UgztvDQ9S…
Comment
"Does anyone else lowkey wish they didn’t have to be living in the era when AI becomes prominent? The implications scare me ):"
reddit · AI Harm Incident · posted 1680530422.0 (Unix epoch, ≈ 2023-04-03 UTC) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jeu82x6","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"rdc_jes5yo4","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"rdc_jes8twv","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"rdc_jesc260","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"rdc_jet4hat","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
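The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table. A minimal sketch of how such a batch might be parsed into a lookup-by-ID map and validated (the allowed code sets below are inferred from the values visible on this page, not from a published codebook, and `parse_batch` is a hypothetical helper):

```python
import json

# Allowed values per dimension, inferred from the codes visible on this page.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "approval", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Records with an unknown dimension value are dropped rather than
    silently kept, so downstream counts only see valid codes.
    """
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# Two records copied from the raw response above.
raw = '''[
 {"id":"rdc_jeu82x6","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"rdc_jesc260","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]'''
batch = parse_batch(raw)
print(batch["rdc_jesc260"]["emotion"])  # fear
```

Keying the result by comment ID mirrors the "Look up by comment ID" affordance above: the displayed Coding Result card for a comment is just the batch entry for that comment's ID.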