Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "a machine learning algorithm. idk a language model wouldn't make so many referen…" (ytr_UgwqA5Fj-…)
- "AI will eventually crash. It wont last forever. There isn't a profitable way to …" (ytc_Ugzx6trjx…)
- "When having a conversation with an AI just treat the situation as if you're talk…" (ytc_UgyZpq5Zq…)
- "Imagine AI in a World of Digital ID & Digital Currency, you will have complete p…" (ytc_UgzJX8wib…)
- "I'm someone with a supportive and loving family.... But also a family that doesn…" (ytc_UgwAYMj0e…)
- "@aleksabanjevic8316 thats the thing tho, AI cannot write a detailed prompt beca…" (ytr_UgwVljcp0…)
- "This is so dangerous, guys. We have to put a stop to- Huh? Mechahitler, you say?…" (ytc_Ugx8gLO98…)
- "At some point....im hoping chatgpt says: \"alex, i know youre just fucking aroun…" (ytc_Ugy9WhrqZ…)
Comment
And just think, Colossus is specifically training AI here in Memphis. And I imagine we've all heard some of the bad stuff Grok has come up with.
Source: reddit | AI Harm Incident | posted 1773361472 (Unix timestamp) | ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_oa71wec","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_oa4r0x9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_oa4wfwl","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"rdc_oa518bu","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"rdc_oaa4qrr","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
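A raw response like the one above is a JSON array of per-comment records, each keyed by comment ID with one value per coding dimension. A minimal sketch of turning such a batch into a lookup table, and dropping any record whose value falls outside the codebook, might look like this (the allowed values below are only those observed in the responses on this page; the actual codebooks may contain more categories):

```python
import json

# Dimension values observed in the raw responses shown above;
# the full codebooks may define additional categories (assumption).
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "resignation", "disapproval", "fear", "unclear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM response (a JSON array) into {comment_id: codes},
    silently dropping records with missing IDs or off-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        codes = {dim: rec.get(dim) for dim in CODEBOOK}
        if cid and all(codes[d] in CODEBOOK[d] for d in CODEBOOK):
            coded[cid] = codes
    return coded

raw = ('[{"id":"rdc_oa518bu","responsibility":"company",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]')
print(parse_batch(raw)["rdc_oa518bu"]["emotion"])  # fear
```

Validating against the codebook at parse time is what lets a malformed or hallucinated value surface as a missing record rather than as a corrupt row in the coding-result table.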