Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below.

Random samples:

- Honestly I'm so tired of the "Well by YOUR logic.. Digital art isn't REAL art, h… (ytc_UgxvzKz1V…)
- And here we thought AI was gonna replace us. In the end we’re replacing it 😭… (ytc_UgwYB09ND…)
- There is a God hence AI will never be truly sentient. What our consciousness rea… (ytc_UgxVUmb6I…)
- Creates the black wall and rouge AI without even knowing it. Just wait until we … (ytc_Ugy3dHeRg…)
- i just found an ai art subreddit and one of the rules is "Share original SFW ima… (ytc_UgyscK7Io…)
- i was like wait i reblogged something like this on tumblr how did i not catch on… (ytc_UgwtFXuGU…)
- If speaking bluntly means prioritizing "progress" over people's well-being and p… (ytr_UgyqKgGvB…)
- Mentioning the ethics is important because I often see a counter argument talkin… (ytc_UgzW0PTY9…)
Comment
Hi Hank, speaking as a computer scientist, unfortunately, the arguments put forward in this video just do not hold water in reality. There are REAL and PRESENT dangers of AI that exist right now, and are not contingent on the fantasy idea that they will develop consciousness and decide to kill us. The dangers are in how humans today, in power today, choose to use and sell these things at the expense of our society.
Talking about the AI like it can become all powerful, pushes the same narrative that the AI salesmen do. "AI Will Become All Powerful" is their battle cry too, and it is how they justify what they are doing. That is the lie. AI is software and faces the same limitations as other software, which can be used for good or ill.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-02T19:5… · ♥ 26
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyjZyTJQdV33bw0vop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCMEtyTtZwynwkXrV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxh3riF0-4UK4etQ0d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw7UPSqMIu1xFiIUSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzrp8HbL5oyccS7tDh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJZ5WYBWtWhye6KXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7_T-EMPRxzTRgF_N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy2ds2xE56wcAnbRrZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRM1UtUh06iVVjG654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw22W8hz_3dOr8fC7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
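A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming a codebook whose allowed values are inferred only from the records shown here (the real codebook may define more categories), and assuming comment IDs carry a `ytc_`/`ytr_` prefix as in the samples; `validate_response` and `CODEBOOK` are illustrative names, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the records shown above
# (assumption: the actual codebook may include additional categories).
CODEBOOK = {
    "responsibility": {"user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_response(raw):
    """Parse a raw LLM response and check every record against the codebook.

    Raises ValueError on a bad comment ID or an off-codebook value;
    json.loads raises on malformed JSON.
    """
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the samples start with "ytc_" (top-level) or "ytr_" (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgwCMEtyTtZwynwkXrV4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
records = validate_response(raw)
print(records[0]["policy"])  # regulate
```

A record that drifts off the codebook (say, `"emotion": "anger"`) fails loudly here instead of silently polluting the coded table.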