Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples
It is a joke for a fully open-source AI model to plagiarize a completely closed-…
rdc_oi0yy8t
Also AI is only good if it has a lot of training data. If you are using a fringe…
ytc_UgwFfwV2a…
mass effect got it's facial animations from this robot that's why it is so bad…
ytc_Ugif2AAom…
OH Grow up. Facial recognition equip. WE ARE lucky to have dna to facial recog…
ytr_Ugyhj78PW…
@cufzi3694 Why are you scared? you are lucky asf to be living in a time where AI…
ytr_UgzayUE9B…
Sophia: see humans are dysfunctional need to replace ...make a new Atom & Eve…
ytc_UgwBGAy4t…
Yeah I don't mind ai art as long as you aren't getting money from it…
ytr_Ugxv-2ThZ…
@LanaKarvana agreed, but ai art and ai generated images are very different, and…
ytr_UgxajlKn5…
Comment
To be fair to ChatGPT, it wasn't implying that "I" is not the same as "ChatGPT", but rather, it was stressing out that ChatGPT suggesting bromide to the guy is based on the researcher's report of the guy's story and not verified by looking at the actual chat. And, even then, "suggest" does not mean "recommend", the word which ChatGPT even italicizes in its defense. Even in the story according to this video, ChatGPT simply pointed out that bromide can replace chloride *for cleaning*, not that ChatGPT recommended bromide intake as a replacement for chloride in diet.
youtube
AI Harm Incident
2025-12-15T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
{"id":"ytc_UgzVsm7kRPBsLoEsdah4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxwMKUzEYenkCQcZTB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzhvuHiuare3p8AQcZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz0HKlbZwxyycHUSG94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyQSI8WUmY0xgI_BEh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzN5xL2oOq_2kypmt54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwnPMtW94RMKcetO3l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyRTCeQn6boNwaTPwZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2fBU7zQxvNfiuzE94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxmlBlguu-MdFRicbx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"}
]
```
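Raw responses like the one above are JSON arrays, one object per coded comment, with four dimensions each. A minimal sketch of how such a batch might be parsed and sanity-checked is shown below. The allowed category values are inferred only from the samples on this page; the full codebook likely contains additional categories, so `ALLOWED` here is an assumption, not the project's actual schema.

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# ASSUMPTION: the real codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list:
    """Parse one raw LLM response (a JSON array) and return a list of
    violations: codes whose value is outside the allowed set."""
    rows = json.loads(raw)
    errors = []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append({"id": row.get("id"),
                               "dimension": dim,
                               "value": row.get(dim)})
    return errors
```

Running `validate_batch` over the stored raw responses would surface any coded comment whose value fell outside the scheme, e.g. a model hallucinating a category name.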