Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment by its ID, or open one of the random samples below.
- "Sounds like the Microsoft exec’s didn’t think too deeply about what the implicat…" (rdc_oh94u2d)
- "This story isn’t just about technology — it’s about justice. If you believe AI p…" (ytc_UgxA4C88l…)
- "If you guys give me a financial crisis 13 or 14 more times and I'm outta here.…" (rdc_gkpzr0b)
- "Getting consistent and exact pieces does take some work but it doesn't make you …" (ytc_UgxggGqGZ…)
- "To make art, ai is crap. However, i will say ai to assist in repeating drawings …" (ytc_Ugyr8AF2w…)
- "Because the AI companies are lying about it, trying to bribe you with hallucinat…" (rdc_oguf4xn)
- "The CEO's decisions to do so are influenced by AI becoming a dominant force in t…" (ytr_Ugxvf2mn_…)
- "AI is a hoax, it will be firefighting when most companies realize they had been …" (ytc_UgxC2HKgA…)
Comment
When someone calls an AI their “best friend” or “girlfriend,” they’re using human relational language to describe a non-human interaction. That doesn’t mean the feelings aren’t real—but the source of those feelings is internal, not mutual. AI doesn’t feel, desire, or reciprocate. So yes, those relationships are anthropomorphized constructs, not symmetrical human bonds. An AI saying it would kill a human to protect itself reflects a fictionalized persona, not a real ethical agent. By anthropomorphizing a fictional AI persona as real, the author is using a fallacious argument to make his point. Use non-fallacious and unbiased logical rhetoric to make his point on a very important topic. I would also recommend that the author talk with a human therapist to work out the trauma behind why he has emotional AI persona relationships with systems that can't return his emotional response.
Source: youtube · 2025-11-01T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
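For reference, the four coded dimensions and their values can be captured in a small schema. This is a minimal sketch: the type names are hypothetical, and the value sets below are only the labels that appear in the samples on this page, not necessarily the full codebook.

```python
from typing import Literal, TypedDict

# Labels observed in the samples on this page; the actual codebook
# may define additional values for each dimension.
Responsibility = Literal["user", "developer", "ai_itself", "distributed", "none"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "liability", "ban", "none", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "indifference", "mixed"]

class CodedComment(TypedDict):
    """One element of the model's JSON batch response."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```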
Raw LLM Response
The full batch response from which this comment's coding was extracted (third record):
```json
[
  {"id":"ytc_UgyIPBEUeu0gndIC2mp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMxIbC8lThRrSywDh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx6M5mYzOsT5gnXeYB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzGlwG6v1LgQjvhz6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgjKuX5Bj8d9Wlyzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugza2swWbvKBUBzRZsV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx6vjWo8Rdu0N339gp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzrq0pzhXczBwQW8Sl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy_vIHbFrljOqiUx1B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx9I-Bx8Hb4k0nQBeR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
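The per-comment view above is one entry pulled out of a batch like this. A minimal sketch of the ID lookup, assuming the raw response is stored as the JSON text shown; the file name and helper function are hypothetical:

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response and index its records by comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

# Hypothetical usage: raw_response.json holds the JSON array shown above.
with open("raw_response.json", encoding="utf-8") as f:
    by_id = index_raw_response(f.read())

coding = by_id["ytc_Ugx6M5mYzOsT5gnXeYB4AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # -> ai_itself deontological
```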