Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @boredbytrash Unless the manager is standing right behind you (micro-managing), … (ytr_Ugx3JLIjl…)
- Only Pro-ai thing i can say is … use it to create references if you are unable t… (ytc_UgyZQkYHr…)
- I am pure consciousness, and when I heard of two accounts of A.I. deleting codin… (ytc_Ugwfp4AWo…)
- Revelation 13:17 "And that no man might buy or sell, save he that had the mark, … (ytc_UgziDFpxO…)
- That's the perfect voice for ChatGPT. It's some front-desk bitch with blue hair … (ytc_UgxvVjxe0…)
- Just look at the ppl who create the AI and the databases it draws from. Fix them… (ytc_UgzGNtQqh…)
- Not AI art it’s actually just recomposition of existing images. Not Creative eno… (ytc_UgyoZ4iSq…)
- 14:02 "[W]e are in a bubble and a crash is imminent" _AND_ "the next AI innovati… (ytc_UgwNGRir6…)
Comment
It is highly unlikely that a robot that just does a task [such as hard labor, or some specialized task] will EVER need something like "the ability to feel pain". Humans [and other animals] developed this trait due to natural selection, since it was the most likely trait to survive. Robots don't need this "to survive" thing programmed into them because if they were to be broken or destroyed, all you would do is replace them.
This means that even IF robots one day are capable of emotions, this still means the VAST majority of robots on the planet would not have them. Any robot deserving of rights would not be a good robot to force to do labor, so the robots doing labor will not have the capacity for suffering so we don't have to worry about it. As for robots that DO have that ability, then yes, I agree that they should have rights.
Source: youtube, "AI Moral Status", 2017-02-23T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UggxBv6Bh68AOXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugi6g4FkM0SElXgCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugh1j66C9k7XO3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugik2MV5JbWHtXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UghtsfO07MMnfHgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgiTS2v4li_yF3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugj5BBXR8r_1EXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ughby7Ihz3l8n3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgggfKyYxs8w4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ughgv7iY07dgTHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
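The raw response is a JSON array with one record per comment, keyed by the four coding dimensions from the table above. A minimal sketch of how such a batch might be parsed and validated before use; the allowed values below are inferred from the samples shown here, not an official codebook:

```python
import json

# Allowed values per dimension, inferred from the samples above
# (an assumption, not a definitive codebook).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference",
                "resignation"},
}

def parse_batch(text):
    """Parse one raw LLM response; drop records with unknown codes."""
    records = json.loads(text)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical single-record batch for illustration.
raw = '''[
  {"id": "ytc_Ugh1j66C9k7XO3gCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

codes = parse_batch(raw)
print(len(codes))  # 1
```

Filtering out records with out-of-codebook values (rather than raising) keeps one malformed item from discarding the whole batch; the dropped IDs could instead be queued for re-coding.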