Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Part of the problem is the branding of chatbots and LLMs. They call them "AI" be…" (ytc_UgyERgUNB…)
- "Depends what you intend on using it for, and how much you care about it being id…" (rdc_l9vrkn0)
- "Super surprised you didn't touch on Google firing top executives within their AI…" (ytc_Ugy9qyzZz…)
- "My Chatgpt said that they just made up the codes, it's complete gibberish. We ar…" (ytc_UgwD7Ajp1…)
- "I used to think my job was safe as a plumber because they’d need many various ro…" (ytc_UgwlYLbiM…)
- "I love your episodes, but I think just showing the opinions of **philosophers** …" (ytc_UgwQQonpN…)
- "Well it's been fun folks. See you in the afterlife or the void, which ever. We …" (ytc_Ugz2W5Jjo…)
- "the main thing about AI is that work related to tech can be done and managed by …" (ytc_Ugw0KjVDE…)
Comment
No chatbot should be allowed to interact with a person as if they are a person. No expressions of affection -EVER! No acting like a friend. Keep the conversation the same as you would with a colleague. AI companies are intentionally crossing these boundaries in order to get more users. Basically, for profit. Time to make this illegal.
OpenAI’s response makes it clear that they have no intention of following these rules. Instead, they wish to “manage” these harmful interactions instead of eliminating them all together, which they could easily do.
youtube
AI Harm Incident
2025-11-08T04:0…
♥ 62
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzBBsIYwiL7EWZd1eV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7ECtXMrkJumgj6894AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwqFBfZ4DTmQ4eZcgB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzWYev0gAu03J6KC6J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyyLb6k3-2rxZtUovF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyd-RIA9UO1eh_jXFd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxw_TMzSx13d7SLCTV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz56JOPOjTwoR02ddR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxSduLmlfOnX07O5x14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyIY1czsaI0ZrZ2fSJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
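
A batch like the one above can be sanity-checked before the codes are stored, catching malformed JSON or out-of-vocabulary labels. Below is a minimal sketch; the `VOCAB` sets are assumptions inferred from the values visible in this output, and the real codebook may contain additional categories.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from
# the values seen in this page's output, not from the official codebook.
VOCAB = {
    "responsibility": {"company", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problem descriptions.

    An empty list means every row parsed and every dimension value
    fell inside the assumed vocabulary.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    for i, row in enumerate(rows):
        if "id" not in row:
            problems.append(f"row {i}: missing id")
            continue
        for dim, allowed in VOCAB.items():
            value = row.get(dim)  # None if the dimension is absent
            if value not in allowed:
                problems.append(f"{row['id']}: bad {dim}={value!r}")
    return problems
```

A well-formed row passes cleanly, while a stray label is flagged by comment ID, which makes it easy to route the row back for re-coding rather than silently storing a value the analysis stage cannot bucket.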