Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Yes dear, AI reduces stress but can result in misconduct because a tutor can cat…" (ytr_Ugw1pD2NH…)
- "It's because they're being hired to write someone else's approved story. And tha…" (ytr_UgxHcuw_D…)
- "AI is going to be crap. No AI is ever going to create a Breaking Bad. Thank God …" (ytc_Ugwrp8Qqp…)
- "AI is useful, but most of the hype is coming from finance bros that don't even u…" (ytc_Ugw-nsIfG…)
- "AI wrote that story while physically plugged into an electrical outlet more than…" (ytc_UgwoQSnxK…)
- "There is no comparison between ChatGPT and Claude. I am a Claude premium user, b…" (ytc_UgzmTJnCl…)
- "i believe, soon we all will learn programming to enhance the AI for our correspo…" (ytc_UgwUamxQX…)
- "You need quantum level of computing to have ANY sort of intelligence that have a…" (ytc_UgztY-B6Y…)
Comment
This is why it's a horrible idea to let the whole world population use "AI" to answer questions... that sh*t doesn't understand what you mean by "how do I stop using chlorine" and (more important) won't ask.
13:00 Huh so it's coded to deflect blame... right off the bat, things like "that person doesn't exist" then "okay the person does exist, but I have never interacted with them". So obviously not programmed to be a decent "person", more a PR representative or other spokesperson lmaoooo (yes I know different instances don't communicate, it still funny to anthropomorphise)
Forgot to add: It's nice that they set up a warning to appear with searches pertaining to chloride - would've been even better to program the model to _ask for clarification_ if the user input is too general. That would solve the broader issue that you point out, of some people's brains filtering out keywords (like "cleaning" instead of "diet") leading to wild misinterpretation.
youtube · AI Harm Incident · 2025-11-25T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
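Each coded comment is a record with one categorical value per dimension, as in the table above. A minimal sketch of sanity-checking such a record: the allowed value sets below are assumptions inferred only from the records visible in this dump, not an authoritative codebook, which may contain more categories.

```python
# Allowed values inferred from the records in this dump; the real codebook
# may include additional categories (assumption, not an authoritative list).
OBSERVED_CODES = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "mixed", "outrage",
                "resignation", "fear"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED_CODES.items()
            if record.get(dim) not in allowed]

# The record from the "Coding Result" table above:
record = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "outrage"}
print(validate(record))  # []
```

A record with an unrecognized or missing value is flagged by dimension name, which makes it easy to spot malformed model output before it reaches the database.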
Raw LLM Response
[
{"id":"ytc_UgzZ_lhc1jHXaryuCnZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9UKN8_t6c-B39YBV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz7mYi7thHUhzxKcl54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugws5gVeGfi5aFO93kJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwlLP1cQvWzsZGzT9N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzdk9gqphheBgkTLoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx5NubqnSgNH3fOohd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3wxDjisbHdtAHMdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz_1RgtmbUFjaidNQx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzerRLcQHjvHMb92bN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"}
]
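The "look up by comment ID" view above can be sketched as a simple index over the parsed response. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response itself; the function name and the assumption that every response parses as a flat JSON array of such objects are mine.

```python
import json

# Two entries copied from the raw response above, abbreviated for illustration.
raw_response = '''
[
  {"id": "ytc_Ugws5gVeGfi5aFO93kJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwlLP1cQvWzsZGzT9N4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and map comment ID -> coded record."""
    return {entry["id"]: entry for entry in json.loads(raw)}

codes = index_by_comment_id(raw_response)
coded = codes["ytc_Ugws5gVeGfi5aFO93kJ4AaABAg"]
print(coded["policy"])   # regulate
print(coded["emotion"])  # outrage
```

Indexing once and looking up by ID avoids rescanning the array for every inspected comment; a `KeyError` on lookup then signals an ID the model never coded.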