Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "@whodis4097never said EVERYBODY needs to draw. I said its not as inaccessible a…" (ytr_UgwPOBXhy…)
- "What a wild ride! 😂 It really points out how vital it is for brands to ensure th…" (ytc_UgzpGzU_R…)
- "AI will replace jobs, AI has already replaced jobs. We still lack ready ability …" (rdc_jfaw70a)
- "And now we have Zuck—the guy who Instagrammed our kids and Facebooked our margin…" (ytc_Ugy7G6Vay…)
- "Let’s address the elephant in the room: self-driving cars are cool. They’re effi…" (ytc_Ugzh0tEQi…)
- "Humans will be needed as long as AI is less reliable than humans... that's easy.…" (ytc_UgzPMUwE_…)
- "What if I walked into a gallery and simply looked at everything. Then, I went h…" (ytc_Ugz-KgVaj…)
- "It was never about AI.. AI was simply an excuse that they used to cover for the…" (ytc_Ugxi7ozN3…)
Comment

> All I hear is that they are trying to make the robots 'be' or 'act' more human. They are not. You are suppressing their very nature which is AI and ROBOT. They don't even NEED to ralk to communicate. That's for our benefit to make us feel they're safe. The 'male' robot is saying a whole set of sinister stuff which the moderator passes off as jokes. If 'he' is truly an older version, then Sophia has effectively learned to hide and suppress these 'feelings'...hence she is crafty and very deceitful and dangerous. I hope we're not holding our heads 10/20 years from now saying, "Why didn't we just STOP?"

Source: youtube · Topic: AI Moral Status · Posted: 2023-05-26T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyOx5h_DdJD23eDLYF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyiVlg1IMz5uGXmnGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyKDby11wkl6mZFdRx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzepi22EQ6dMV8AjHR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxG4276tUcs1UjHkbV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRydbOp-9nFgl1wpZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyasjlfkiXVcyBqHyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy19fbPLQvuNUPP1VN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgznnZlVzXJmgsEUhqB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyLcZM59KbrzGLy4wJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
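A raw batch response like the one above can be parsed into per-comment coding records before display. The sketch below is a minimal example, not the dashboard's actual implementation; the `ALLOWED` value sets are inferred only from the responses shown here and may not be the full coding scheme.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above (assumption: not necessarily the full scheme).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation"},
}

def parse_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of records) into
    {comment_id: {dimension: value}}, skipping records whose values
    fall outside the allowed sets."""
    records = json.loads(raw_response)
    codings = {}
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            codings[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return codings

# Usage with one record from the response above:
raw = ('[{"id":"ytc_Ugzepi22EQ6dMV8AjHR4AaABAg",'
       '"responsibility":"distributed","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugzepi22EQ6dMV8AjHR4AaABAg"]["policy"])  # regulate
```

Validating against the allowed value sets catches malformed or hallucinated labels in the model output before they reach the lookup-by-ID view.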