Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Ugh ai honestly should have just left at the text thing this whole art thing is …
ytc_UgzzhYZHk…
Ok but how do we fix it AI doesn’t understand race or even know what it is it’s …
ytc_UgxK2bjj3…
It most likely would not care one way or the other, just like we have no ill wil…
rdc_jfa9sgi
Ok. Hire people to pick up the packages from the truck and ring your doorbell. B…
ytc_UghtWbmiw…
A.I. and an automated bio lab equipped with an artificial womb along with human …
ytc_UgySdXmkc…
Sad thing is. Ai could have made a better video about itself, you kinda suck dud…
ytc_UgzxFC8e0…
Unfortunately this hasn't been proven to work :(
If you take a picture from Goo…
ytc_UgxWZ4fmo…
But even then the biggest problem I 've seen about it not being discussed much i…
ytc_UgxH070jG…
Comment
PSA Regarding AI. You need to treat it like reddit, you are more likely to progress by stating something wrong and getting it to correct you.
If written docs say that the AI will do something dont go "The thing tells me to come to you". Just state "Now give me the application form".
reddit
AI Harm Incident
1751302712.0
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_n0mx3t7","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},{"id":"rdc_n0o5cdi","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_n0ofd6q","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},{"id":"rdc_n0pc14d","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},{"id":"rdc_n0luls0","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"]}
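Note that the raw response above is not valid JSON: the final record closes with `"approval"]}` rather than `"approval"}]`. A hedged sketch of how such a response might be parsed is below; the function name and the fallback behavior are assumptions, not the tool's actual implementation, but they illustrate how a malformed raw response could leave every dimension in the results table at "unclear".

```python
import json

# The four coding dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Hypothetical parser: if the JSON is malformed (e.g. a misplaced
    closing bracket, as in the raw response above), no records are
    recovered and every dimension falls back to "unclear" downstream.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}  # parse failure: caller treats all dimensions as "unclear"
    coded = {}
    for rec in records:
        # Missing dimensions also default to "unclear".
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# A well-formed single-record response parses cleanly:
ok = parse_coding_response(
    '[{"id":"rdc_x","responsibility":"user","reasoning":"mixed",'
    '"policy":"none","emotion":"approval"}]'
)
# A response with the misplaced bracket yields no records at all:
bad = parse_coding_response('[{"id":"rdc_x","emotion":"approval"]}')
```

Under this sketch, `ok` maps `rdc_x` to its four labels, while `bad` is empty, matching a results row that shows "unclear" on every dimension.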