Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Anthropic is refusing to allow their AI make final targeting decisions without h… (`rdc_o788tt3`)
- This is the reason why AI is destroying people, not helping people. I really hop… (`ytc_Ugw2FLGye…`)
- you can actually argue that the jump from traditional art to digital art actuall… (`ytc_UgyU1KRvQ…`)
- Chinese people are already constantly shifting their vocabulary to evade censors… (`rdc_iddfwo2`)
- The human touch will prevail. AI will certainly have its place but people will a… (`ytc_UgxK9opB0…`)
- Exactly. Also labour intensification is not at all a new thing. The effect of th… (`ytc_UgwwhY1kR…`)
- The sisters comments here really elucidate the actual truth here, and it’s clear… (`ytc_UgyUXU02H…`)
- I already hate dealing with the "automated system" I ONLY want to talk to real p… (`ytc_UgwUZr1Np…`)
Comment

> We have a serious problem of turning to artificial needlessly so... brohs.... people need to help people. :( these lonely creators.... meanwhile children on the internet is just as reckless. The most profound aspect was the robot asking, about who would own them or would they own themselves. The most astonishing fallacy to the guys thinking is that he is simply creating a thing to serve humans, but to make it self-aware is paradoxical to say the least. If you are programming it to think like a human, would it not be susceptible to all the influences that send our own youth astray? There is a critical problem of parents that aren't parenting, so are you really going to turn to technology? how did you become so empty? This guy is like Geppetto but his creation is a technical Frankenstein.. smh .god help us.
youtube · AI Moral Status · 2024-01-09T06:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwmuAgTbWeVMWFbuFZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJEOtquTSFSTNoqSB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyHxYJiuxuGtGg7mdN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy5tLXdniO0xv1CXIR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8p1n7oZFNNEimxwF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-TUYpqnjwmyPITIV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxRQa1zWjnhyIq5VNN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwZPHh8JTkDccyHmvB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwl-ItscGfjRbeBVdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzz1z448qVG_xCLhTd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"}
]
```
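A raw coding response like the one above is a JSON array of records, one per comment, keyed by comment ID. The per-comment lookup this page supports can be sketched by parsing that array and indexing it by `id` — a minimal sketch in Python, with `index_by_comment_id` and the trimmed sample payload as illustrative assumptions (a real pipeline would also guard against malformed model output):

```python
import json

# Two records excerpted from a raw LLM coding response (valid JSON assumed).
raw_response = """
[
  {"id": "ytc_UgwmuAgTbWeVMWFbuFZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzz1z448qVG_xCLhTd4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]
"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
# Look up one comment's coded dimensions by its ID.
print(codings["ytc_Ugzz1z448qVG_xCLhTd4AaABAg"]["policy"])  # regulate
```

The index makes the "look up by comment ID" view a constant-time dictionary access rather than a scan over the response array.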