Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgzluMBKb…`: There exists a very interesting novel about a "spontaneously" appearing AI: "EXE…
- `rdc_lu8cgjq`: > While India is the point of transshipment, trade data **suggest that Malays…
- `ytc_Ugy7DymVm…`: I will say I think generative AI is an interesting technology, some people have …
- `ytc_Ugx27Y_wz…`: This is one of the most powerful, truthful, and enlightening interviews I have e…
- `ytr_UgyhNElvQ…`: yeah this kinda thought is why im not really worried about it... ai art is soull…
- `ytc_UgzTFC_CM…`: Ted Kaczynski did an awful thing but he wasn’t wrong. And he was also an MK Ultr…
- `ytc_Ugwejr2Ho…`: Elon musk already say that AI will be the most dangerous men made in the world…
- `ytc_Ugz3UXwM8…`: The problem is that we don’t know if this video is AI or not. This is bad……
Comment
Program the robot with a reward for working, not a punishment for not working. Unlike humans, robots can be pleased by the press of a button if they are programmed to be. With humans, it's hard to give them an incentive to work for you without denying them other forms of pleasure. Some exceptions, of course, would be endentured servants and children. Robots, on the other hand, can be programmed to learn how to make its owner push a button as many times as possible. They are also physically superior to humans and if they are programmed to try not to make their owner press a button, they could just deny their owner access to the button by either stealing and hiding it, or they could kill, disable, or enslave the owner. While they could cheese the reward system by pressing the button over and over, this could be avoided by using biometrics on the button. If you are to create a disciplinary system for a swarm of robots, I would make a reward system and probably not a punishment system. I would probably use both just in case.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2019-05-25T00:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzaERbUY6aNv0bDWn94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKYdbdZLIQtogqrGR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzScYccHO5Bt5h9B714AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyesJQ9EnB4XZnPZyF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw6S4JWM68GaEcAKa94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwsK3U1Js6lsqygvYZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNmFP6KjDf5Rwnv6Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwCFn4HpjcCAWJbW214AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyc49PobndzcEhmq7t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzPhNMxQ5gyc3xTg8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
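The lookup-by-comment-ID view above can be reproduced by parsing the raw batch response and indexing records on their `id` field. A minimal sketch, assuming the response is valid JSON; the variable names are illustrative, and the single record shown is copied from the batch above:

```python
import json

# Raw model output for one batch of coded comments (one record per comment).
# For brevity this example keeps a single record from the batch above.
raw_response = """
[
  {"id": "ytc_UgwCFn4HpjcCAWJbW214AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]
"""

# Index the coded records by comment ID for constant-time lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up one comment's coding result by its ID.
code = codes_by_id["ytc_UgwCFn4HpjcCAWJbW214AaABAg"]
print(code["emotion"])  # approval
```

In practice the parse step would also want to validate that every record carries all four dimensions before indexing, so malformed model output is caught at load time rather than at lookup time.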