Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Well how about you keep asking the AI into adjusting and remaking the painting u…" (ytc_Ugylo73WF…)
- "Or something that serves as a filter and guardian. If there is a big enough need…" (rdc_le51g7y)
- "@thewannabecritic7490 You can believe that if you want. Personally, i enjoy col…" (ytr_Ugxrl2QHE…)
- "The purpose of AI has always been to replace/supplant humans rather than augment…" (ytc_UgzhxHpQy…)
- "The only monster there could be would have been made by the degenerate content a…" (ytc_Ugw4dtG0D…)
- "You can’t prank me there this is fake bro. The robot tried to attack me she’s go…" (ytc_UgzCHN1Ma…)
- "Remember to ask your AI to be \"brutally honest\" and take \"a very critical look\" …" (ytc_UgzLn1T_s…)
- "Welp, that's \"your\" problem. AI need your mind that's just how it works. Why …" (ytr_UgwevkV67…)
Comment

I asked Gemini to send a text to my two friends because I was running late once. My one friend has a very common, simple, white name, and my other friend has a bit of an obscure Bengali name. It couldn't for the life of it figure out how to find the second friend's name in my contacts. I asked it to send a text about how the racist ai could not handle my second friend's name to my first friend. It was extremely argumentative. I asked it not to argue, but just send the text. It agreed to do so, but never sent the text. I essentially stopped using it after that.

youtube · AI Harm Incident · 2025-11-25T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxmpOGXXHXiE9wlkr54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugx0eEqEUxH2v7N-rBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpKAA4Bi8UcTNa-hx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwUDwMmFblCTs_zpgN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw61hZ-iqA-dC1NMRB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwFhdxt03PQ7sCcjGt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxq_VsVhX2TXulqvzB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzU0Vwgc1V-UFbLq1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgywYS58VmhTBbjtmM54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxwAxmxhwBTB2kqv-94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"frustration"}
]
```
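Looking up one comment's coding inside a batch response like the one above amounts to parsing the JSON array and keying it by the `id` field. A minimal sketch (the helper name `index_by_id` is hypothetical; the field names and the two sample records are copied from the response shown):

```python
import json

# Two records copied from the raw batch response above, for illustration.
raw_response = '''[
  {"id": "ytc_Ugw61hZ-iqA-dC1NMRB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwUDwMmFblCTs_zpgN4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["ytc_Ugw61hZ-iqA-dC1NMRB4AaABAg"]["policy"])  # regulate
```

The same lookup works for any comment in the batch, which is what the "Look up by comment ID" affordance on this page does interactively.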