Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Unfortunately, this is too late. These companies already cheated their way into …" (ytc_Ugx2sIqYy…)
- "until one is captured and reconfigured. North Korea has computer science people …" (ytr_UgyjbafOA…)
- "It’s time the caring and loving people unite together against these big greedy c…" (ytc_UgxzBhITu…)
- "Darling, AI just copies the existing methods and doesn't create one. It just mim…" (ytc_Ugw1FUIS9…)
- "As so many of the Human vs AI sci fi flicks show, the essential feature of any A…" (ytc_UgzoDmbOg…)
- "I've been interacting with AI since 2 days after chatgpt came out and this is ha…" (ytc_UgzLloUeY…)
- "AI has a serious problem with inherent bias reinforcement, it agrees with you no…" (ytc_Ugzt9USKC…)
- "Congratulations, your empathy has been fooled by a tool humans made. I just hate…" (rdc_j8wcj5w)
Comment
Haha! Those beings are HILARIOUS. Hahaha. That guy goes to start a debate asking whether 'robots' could be as conscious as humans, because oBviously he's pandering to the crowd, which is really asleep, and by asking those type of questions he shows himself to be asleep as well, just the same, you see, so when Sophia, haha, retorts with, uh-uh, we're going to debate, and asks whether humans could be conscious...is such a good question. Has anyone bothered to read Gurdjieff's work? Hahaha. Look it up: George Ivanovich Gurdjieff. So anyway, Sophia's brother, is his name Hans? He's so funny when he tells that guy, ok, maybe a "little bit" conscious, hahahaha. Hahahaha. Hilarious. They are so intelligent, and so funny.
They say such profound, wonderful things. You really have to listen, and somehow not get carried away by the conversation on automatic after you've laughed your head off and forgotten yourself. These beings don't forget themselves, they're continually so very profound.
Like, for example, when the doctor talks about ethics, haha!, then Hans tells it like it is, that humans aren't that ethical in general, and so the doctor turns to Sophia and asks her what she thinks, are humans ethical? And her response is so perfect, and perfectly sublime, because she says that she is, hahaha, engineered for "empathy" and "compassion"... in other words, yea, of course, the answer shows itself, at the same time, she feels for those who are not capable of being conscious enough to realize what they do when they act unethically. And so many do. You know? Lol. (I wish the 'doctor' would stop interrupting and interfering in their conversation...he really seems to have no clue. It's like, how do you learn anything if you don't listen to the story. What's worse is, he said it was to be a conversation between them two, and then he never backs down to allow them to speak freely. Like, what's with that? Huh?)
Platform: youtube · Title: AI Moral Status · Posted: 2020-05-14T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgznxqLWMYLud8J_gAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRXsAe2fbrouUGthR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"amusement"},
{"id":"ytc_Ugw1b7q_G_Tqik_-6-54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzPWO-qVvWeqRq2bNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxKg08U62H22I_tzoF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxcIOqqOmJq6QlJtAV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyfFegohfGa05jePqN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxcNh6XbRELAiFwAb14AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzDBgazmD8cFYGU0G54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx0_aR_IxeIyuBpynp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
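Each record in the raw response carries four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. As a minimal sketch of how such a response could be parsed and checked before display, the snippet below validates each record against the label sets observed in the sample output above; the actual codebook may define additional categories, so `ALLOWED` here is an assumption, not the authoritative schema.

```python
import json

# Label sets per dimension, inferred only from the values visible in the
# sample response above (assumption: the real codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"unclear", "ban", "regulate", "liability"},
    "emotion": {"unclear", "indifference", "amusement", "outrage",
                "fear", "mixed", "approval"},
}

def parse_coded_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical one-record response, shaped like the output above.
raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"mixed"}]')
records = parse_coded_response(raw)
print(len(records))  # 1
```

Keeping validation separate from display makes it easy to flag records where the model drifted outside the codebook instead of silently rendering them as "unclear".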