Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `rdc_dcwlaud`: "Can someone explain to me why exactly this is a bad thing? I am not a trump supp…"
- `ytr_UgwJImAiM…`: "@neiklen4320 Where is your empathy for the AI engineers who spent hundreds of …"
- `ytc_UgyfJXMNL…`: "they had to use Claude to plan it, these guys are freaking idiots and they fired…"
- `ytc_UgwGP69yK…`: "The majority of us will be of no use, there will be no need for us anymore, we w…"
- `ytc_UgxgVeA3U…`: "Good try YouTube. But your algorithm can't stop me from summerizing your videos …"
- `ytr_Ugx571oPb…`: "Well, for starters, AI's whole purpose is to emulate all other art mediums which…"
- `ytr_UgzO-Jida…`: "Exactly. Therapy is expensive. And lucky you if you find a good therapist. Ai is…"
- `ytc_UgyJoa4Ut…`: "I'm one of those teachers who has used AI to help with teaching - ironically, to…"
Comment
Regardless of if the robots are preprogrammed or using AI to create their own responses, if people freak out when someone says the word bomb (and can be punished for it), who in their right mind would allow robots to say they will take over humanity? On the other hand, with humans being very greedy and selfish, it would take a “miracle” for a human invention to get out of hand and become their own masters, above their creators. Therefore, if humans become obsolete to machines, their creators would have to know or accept it. To counter my counter, Albert Einstein helped create the atom bomb, which became worse than he imagined. If that can happen to him, these robots could get out of control as well.
Either way, the smart thing to do would be to not meddle with this, and abandon plans to play “god”. What is the real benefit to these robots? And does the potential for good outweigh all negative consequences whether realistic or hypothetical (such as them being able to take over the world)?
Source: youtube · AI Moral Status · 2019-12-17T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyeR4KLvvM7LUtxu5J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzt7F2BF6licCQZmOB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzxSwIP1ugOvjg8S3Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxP_TowzMXp57wlVLV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugwxj53osEh2ZnEsavR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRwE_c7n2tFGi-xYx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwKul_WXzsu1mv9n2B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxUAEohJLGN2PwoAAh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxXXFna14UVlIUVESl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgyGL0tm7EOWG2YqKZR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
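Recovering the coding for a single comment from a raw response like the one above is a matter of parsing the JSON array and indexing it by comment ID. A minimal sketch (the five field names come from the response shown here; the `index_codings` helper and the validation step are illustrative, not part of the original pipeline):

```python
import json

# Two entries copied from the raw response above, truncated for brevity.
raw = '''
[
  {"id": "ytc_UgyRwE_c7n2tFGi-xYx4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyGL0tm7EOWG2YqKZR4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

# The four coding dimensions plus the comment ID, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_text):
    """Parse the model output and index the codings by comment ID,
    rejecting any entry whose keys do not match the expected schema."""
    codings = {}
    for entry in json.loads(raw_text):
        if set(entry) != EXPECTED_KEYS:
            raise ValueError(f"malformed entry: {entry!r}")
        codings[entry["id"]] = entry
    return codings

by_id = index_codings(raw)
print(by_id["ytc_UgyRwE_c7n2tFGi-xYx4AaABAg"]["policy"])  # prints "liability"
```

Validating the key set before indexing catches truncated or hallucinated entries early, which matters when the array comes straight from a model rather than a trusted serializer.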