Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> I hope someone does research on the different verbiage and how ChatGPT interacts with it. What is scaring me is I think this bot thought he was black. I wounder how it would respond if it thinks it is "talking" to someone white. I know that sounds crazy, but this whole thing is deeply deeply disturbing. There needs to be some serious consequences for the creators.

Platform: youtube · Topic: AI Harm Incident · Posted: 2025-11-12T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
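Each coded record assigns one value per dimension from a small closed set. As a minimal validation sketch (the allowed value sets below are inferred only from the examples visible on this page, not from an official codebook, so they may be incomplete):

```python
# Hypothetical validator for one coded record.
# The allowed values are inferred from the samples shown here and may be incomplete.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "industry_self", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"liability", "regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def validate(record):
    """Return a list of (dimension, bad_value) pairs; an empty list means valid."""
    errors = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append((dim, value))
    return errors

coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "liability", "emotion": "fear"}
print(validate(coded))  # []
```

A check like this catches the common failure mode of LLM batch coding, where the model invents a label outside the scheme.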
Raw LLM Response
```json
[
  {"id":"ytc_UgyturMzMlgII3TdmJ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwERMA-mlgqGBJHBa14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxoFA_5R17nsuSMBkZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw75IoIuItfsHdq6Vd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzsU1eFBwQuDWsXYVx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxHkcuirqDNZQ17r3R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz6EKI4pl16YETWjVN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugy1GiW2YroWAAjnXW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxeEQ35Zxl8kG4YQHF4AaABAg","responsibility":"industry_self","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwjfaI-d2M8m1NGW6F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
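A batch response in this shape can be indexed by comment ID so that any single coded comment is a direct lookup. A minimal sketch in Python, assuming the response text is already valid JSON (the variable names are illustrative, and only a two-record excerpt of the array above is shown):

```python
import json

# A two-record excerpt of the batch response shown above.
raw_response = '''[
  {"id": "ytc_UgyturMzMlgII3TdmJ54AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwERMA-mlgqGBJHBa14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

records = json.loads(raw_response)

# Index by comment ID so one coded comment can be fetched without scanning the list.
by_id = {rec["id"]: rec for rec in records}

coded = by_id["ytc_UgwERMA-mlgqGBJHBa14AaABAg"]
print(coded["emotion"])  # fear
```

In practice the model output should be parsed inside a `try`/`except json.JSONDecodeError` block, since LLMs occasionally emit malformed or truncated JSON.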