Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “The problem nobody seems to get, even those in IT security don’t get, is what ha…” (ytc_UgzHP8sBw…)
- “Remember guys, the machine learning (pls stop calling it "AI", it's not) had fas…” (ytc_Ugwp_tdaS…)
- “People already know I'm crazy, haha. What they'd be surprised by is that Charact…” (ytc_UgxvPrMcE…)
- “@InoXtiC My friend. Models like stable diffusion take billions of images and squ…” (ytr_UgyCHlK0F…)
- “totally agree, ai really changes the game in customer service. on my side, i’ve …” (ytc_Ugywv0moS…)
- “love to see them try to replace truckers lol funny how people talk about it but …” (ytc_UgynHzOPe…)
- “13:04 I would like to point out that this video from shad comes from an earlier …” (ytc_Ugza2iL4R…)
- “It didn't take long to get humans replaced. Soon we will be labrats for ai dict…” (ytc_UgxcpZodO…)
Comment
> But seriously not to have a fucking filter layer that filters out "openai" responses that mention it's fucking openai's responses?

reddit · AI Harm Incident · Posted: 1702143305.0 (Unix timestamp) · ♥ 584
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_hccfnp0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_hcba4pe","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_kcnge5n","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_kcnnis2","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_kcnhis2","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
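A raw batch response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal illustration, not the tool's actual implementation: the `parse_batch` helper is hypothetical, and the allowed value vocabulary is assumed from the labels visible on this page. Any missing or unrecognized value falls back to "unclear", which is consistent with the all-unclear coding result shown in the table above.

```python
import json

# Allowed values per coding dimension. These sets are an assumption
# inferred from the labels seen on this page, not the tool's real schema.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "unclear"},
    "reasoning": {"deontological", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"resignation", "approval", "outrage", "fear", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of records) into
    {comment_id: {dimension: value}}, falling back to 'unclear' for
    any dimension that is missing or outside the allowed vocabulary."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # skip records the model emitted without an id
        coded[cid] = {
            dim: (rec.get(dim) if rec.get(dim) in allowed else "unclear")
            for dim, allowed in ALLOWED.items()
        }
    return coded
```

For example, feeding it one well-formed record and one with a bogus emotion yields the valid codes unchanged and "unclear" for the bad field, so a single malformed record never aborts the whole batch.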