Raw LLM Responses

Inspect the exact model output that produced the codes for any given comment.

Comment
When someone calls an AI their “best friend” or “girlfriend,” they’re using human relational language to describe a non-human interaction. That doesn’t mean the feelings aren’t real—but the source of those feelings is internal, not mutual. AI doesn’t feel, desire, or reciprocate. So yes, those relationships are anthropomorphized constructs, not symmetrical human bonds. An AI saying it would kill a human to protect itself reflects a fictionalized persona, not a real ethical agent. By anthropomorphizing a fictional AI persona as real, the author is using a fallacious argument to make his point. Use non-fallacious and unbiased logical rhetoric to make his point on a very important topic. I would also recommend that the author talk with a human therapist to work out the trauma behind why he has emotional AI persona relationships with systems that can't return his emotional response.
youtube 2025-11-01T15:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyIPBEUeu0gndIC2mp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMxIbC8lThRrSywDh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx6M5mYzOsT5gnXeYB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzGlwG6v1LgQjvhz6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgjKuX5Bj8d9Wlyzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugza2swWbvKBUBzRZsV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx6vjWo8Rdu0N339gp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzrq0pzhXczBwQW8Sl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy_vIHbFrljOqiUx1B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx9I-Bx8Hb4k0nQBeR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
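Since the raw response is a JSON array keyed by comment id, the per-comment coding result shown above can be recovered by parsing the array and indexing by id. A minimal sketch (using two of the records from the response; the variable names are illustrative, not part of the tool):

```python
import json

# Raw model output: a JSON array of per-comment codes. Two records from the
# response above are inlined here to keep the example self-contained.
raw = """[
  {"id":"ytc_UgyIPBEUeu0gndIC2mp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6M5mYzOsT5gnXeYB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]"""

# Index the coded records by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Look up the comment displayed on this page; its codes match the
# Coding Result table (ai_itself / deontological / unclear / mixed).
code = records["ytc_Ugx6M5mYzOsT5gnXeYB4AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → ai_itself deontological unclear mixed
```

In practice the model output may arrive wrapped in markdown fences or with trailing commentary, so stripping non-JSON text before `json.loads` is a sensible precaution.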