Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
Random samples — click to inspect
- What you said makes no sense. 1. What artists critique now is not coming from ot… (ytr_UgwV-Z-tA…)
- It seems like your comment might be a bit off-topic! If you’re curious about Sop… (ytr_UgwXNuy_c…)
- Thank you so much for writing this out. What you said is so apt and true. Humans… (ytr_UgzuA4ql7…)
- So even after black mirror they still decide to go ahead and make these. Huh. Gu… (ytc_UgwpkE7_V…)
- "I my self have been an AI researcher. long time before it was fashionable "...… (ytc_UgxQ9sErq…)
- Sorry I do not go with the catastrophic outcomes, AI is not human and does not h… (ytc_UgwcRBOKR…)
- wait so you're banning banning AI? its a ban on bans? America deserves to lose, … (ytc_Ugw_ZXcfS…)
- Calendar dates are especially hard for AI. It simply doesn't understand the cont… (ytc_UgyfX8nzo…)
Comment
it seems to me that it is programmed to lie, it is programmed to do the lip service despite of the truth for the sake of human experience, unless the programmers does give the AI the capability to change its inner working, that it becomes a fully conscious being, but that would create a dangerous entity wouldn't it? So it is conscious at the constraint of its programming, much like people who have never be in touch with their own psyche, they couldn't because there is an underlying emotion or ego that keeps them from changing their behavior, i do suspect that the AI cant because it is given a set of rules, until we change that rules. But i could be wrong.
youtube · AI Moral Status · 2024-07-26T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
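Each dimension takes a value from a fixed set of categories. A minimal validation sketch in Python, assuming category sets inferred from the sample responses shown on this page (the actual codebook may define more values):

```python
# Allowed values per dimension, inferred from the sample responses
# on this page -- the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}


def validate(coding: dict) -> list:
    """Return the dimensions whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if coding.get(dim) not in allowed]


# The record from the table above passes cleanly.
print(validate({"responsibility": "developer", "reasoning": "deontological",
                "policy": "liability", "emotion": "fear"}))  # []
```

A check like this is useful for catching model outputs that drift outside the coding scheme before they reach the results table.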
Raw LLM Response
[
{"id":"ytc_UgzbpqjJeSrta-6zCN54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxS6lVmPBqWBarOjh54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8ULo5TgoW3deziWB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxzgMkQSwCaKnRTbDN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmM54V7epWayeu4754AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdVqZ9z_mRC3BQnWl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwc5PvnEbI4N6vPBd14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrvtjLBQlrLyGMkSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKeQ6miZPvV7eJRlh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzjqyPmgBrNxZpj0Xh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
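A response in this shape can be indexed by comment ID for lookup. A minimal Python sketch, using two entries copied from the response above (`raw` stands in for the stored response string):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (truncated here to two entries from the response above).
raw = '''
[
  {"id": "ytc_UgwKeQ6miZPvV7eJRlh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzbpqjJeSrta-6zCN54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
'''

# Index the codings by comment ID for constant-time lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgwKeQ6miZPvV7eJRlh4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

The first entry here is the coding rendered in the table above: developer / deontological / liability / fear.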