Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "As a professional cybersecurity specialist, and one who's been somewhat forced t…" (ytc_UgzdLssxo…)
- "I agree. AI will never catch on. It's the next Betamax. No worries. Chill out, g…" (ytr_Ugy55mIq3…)
- "It pisses me off that idiots misuse ChatGPT and blame it for their own stupidity…" (ytc_Ugz3dw0mT…)
- "@evilmelastday Then learn paitience. AI is not the replacement. You're also try…" (ytr_UgwL0Qs2o…)
- "We got people like Musk and Trump in charge so there is ZZZZZERO chance we avoid…" (ytc_Ugx9G9jN6…)
- "Just prompt to chat GPT: ''You are my job retraining instructor: please prepare …" (ytc_Ugy32gynL…)
- "Well, I don't understand why every single source talking about the threat from A…" (ytc_Ugyv9xWt6…)
- "@drone_ultrakill If we talk about "AI" as a whole, there are ads for some phones…" (ytr_UgyRMJuPF…)
Comment
Context, he jailbroke the AI, and the AI was replying as if it was a fictional story.
This is like saying a car killed someome because they drove it off a cliff. So hey, lets sue the car maker and now we can't drive cars!
Hurray what a benefit to fucking humanity.
Source: reddit
Topic: AI Harm Incident
Posted: 1756220248.0 (Unix timestamp, 2025-08-26 UTC)
♥ 208
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nas8uw5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_nas2pmo","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_natz30g","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_natwvdy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"rdc_narwpwb","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
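The raw response is a JSON array with one record per comment in the batch, keyed by comment ID across four coding dimensions. A minimal sketch of how such a batch could be parsed and validated (field names are taken from the response above; the allowed value sets are assumptions inferred from the values seen here, and the helper name `parse_batch` is hypothetical):

```python
import json

# Allowed values per coding dimension. Dimension names come from the raw
# responses above; the value sets are assumptions based on values seen there.
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding dict},
    dropping records with a missing id or out-of-schema values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue  # skip malformed records without an id
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Example batch: one valid record, one with an out-of-schema value.
raw = '''[
  {"id":"rdc_narwpwb","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_example","responsibility":"nobody","reasoning":"mixed","policy":"none","emotion":"fear"}
]'''
coded = parse_batch(raw)
print(coded)  # only rdc_narwpwb survives; "nobody" is out of schema
```

Validating before storage means a malformed or hallucinated record fails loudly at coding time rather than silently skewing downstream tallies.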