Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID — retrieve the stored coded record for any single comment.
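The lookup amounts to indexing every coded record by its comment ID. A minimal sketch, assuming the raw responses are saved as JSON-array files under a hypothetical `raw_responses/` directory (the directory name and file layout are assumptions; the record shape matches the batch shown at the bottom of this page):

```python
import json
from pathlib import Path

def build_index(response_dir: str = "raw_responses") -> dict[str, dict]:
    """Index every coded record by its comment id.

    Assumes each file under `response_dir` holds one raw LLM response:
    a JSON array of records like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...}.
    """
    index: dict[str, dict] = {}
    for path in Path(response_dir).glob("*.json"):
        for record in json.loads(path.read_text()):
            index[record["id"]] = record
    return index

# Usage: fetch the coding for the comment inspected below.
index = build_index()
print(index.get("ytc_Ugj7gYHfl-AHEXgCoAEC"))
```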
Random samples
- "No UBI - Healthcare for all and SNAP for all - redeemable solely for unprocessed…" (ytc_UgwlFgTrc…)
- "Remmember, digital art did to painters and trad artists back then what AI is doi…" (ytc_Ugwtev-3z…)
- "my job has me writing a lot of scripts, which has some boilerplate, in Ruby, a l…" (ytc_UgyGOGKyG…)
- "This Economist correspondent's analysis is very flawed for several reasons: The …" (ytc_UgxHbeT7X…)
- "So, this woman is trying to tell us that to prevent any AI misuses AI tools must…" (ytc_UgwIftpNB…)
- "Video game creation is already saturated right now but with AI it will get to th…" (ytr_UgxGSZXbV…)
- "Wellp… at the end of the episode… I have no doubt that we’re on the verge of soc…" (ytc_Ugwb3f0EG…)
- "AI isn't killing the job market. It's your lack of actual skills. I did a busine…" (ytc_UgyoP66iJ…)
Comment
I think the question is moot. If the robot learns by itself, and creates a purpose for itself, I doubt it will have anything to do with basic human needs. After all, it's like a human with no weaknesses and nearly endless potential. The real question is, what happens when we make a machine that learns. Will we even survive what follows? Or will we just end up being an error in Skynets taskmanager.
The only way a robot will be like a human is if humans program it like that, and it can't learn its way out of our weaknesses.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2017-02-23T22:4… |
Coding Result
| Field | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
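Every record carries the same four coding dimensions. A small validation sketch; the allowed value sets below are inferred only from the records visible on this page, so the production vocabularies may well include additional labels:

```python
# Value sets inferred from the records shown on this page;
# the real coding scheme may define more labels per dimension.
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is well formed."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim}={value!r}")
    return problems
```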
Raw LLM Response
```json
[
{"id":"ytc_Ugh2_714Rr7943gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgjLPKcFZRHiiXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UginEqjRd5em13gCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggMqTUOENgjRHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgibRYK2TCV8jHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugj7gYHfl-AHEXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UghQcXo2NeEIBXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgivL0GvDTnRGHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggjxv2nscYrjngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugh30nYlNuJ7dHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```
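Raw model output is not always a bare JSON array; it can arrive wrapped in markdown fences or preceded by stray commentary. A tolerant parsing sketch, not part of the pipeline shown here, that falls back to extracting the outermost bracketed span:

```python
import json
import re

def parse_raw_response(text: str) -> list[dict]:
    """Parse a raw LLM response that should contain one JSON array.

    Tries a direct parse first, then falls back to extracting the
    outermost [...] span, which handles output wrapped in markdown
    fences or surrounded by extra prose.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"\[.*\]", text, re.DOTALL)
        if match is None:
            raise ValueError("no JSON array found in raw response")
        return json.loads(match.group(0))
```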