Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxrPUCZX… : "AI showing more respect than the upright walking humanoid with the dialogue tree…"
- ytc_UgyXDx6-F… : "6:34 You’re missing that video gen is a big advance toward ASI relative to text …"
- ytc_UgyZU1mUs… : "AI is a tool, Can never be replacement of a Radiologist. But with AI radiologi…"
- ytr_UgxLXtUnh… : "@01MJ10 indeed, it is. There's another video with Han talking on the Web Summit …"
- ytr_Ugyl3Pf-6… : "I know two things for sure: The AI will continue to become more convincing and w…"
- ytc_UgygIHcBZ… : "its just a pr stunt, and this video (including this comment) are feeding into it…"
- ytr_UgzBqcRnP… : "@guff9567 Are you referring to the existing reality we used to live in, in that …"
- ytc_Ugzu7iRiW… : "I think we need to be talking right now about the system that can replace the ra…"
Comment
3 Easy steps to avoid robot uprising:
1. Don't programs robots with drives that are not neccecary to their objective
2. Do not program robots with human-like emotions, IE dont try to make a human
3. Don't create sentient robots, its unnecessary
4. Programming robots with the ability to feel negative emotions is unethical in the first place. Giving a sentient robot the ability to suffer is the same as making a human suffer.
5. If you fuck up somehow and this ethical stuff comes up, choose the pragmatic and safest option for human survival and kill the robot before its able to make copies.
6. Don't fucking program sentient robots. There is literally no need for it.
youtube · AI Moral Status · 2017-02-24T01:2… · ♥ 218
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ughnexcsmb3x6XgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggvblKpw1_kgXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjVEzS6w8goNXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggfO1G2FHfPI3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjcsyISv-nG-HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugio6zncMOKloXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiG4UILVf0E13gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggRzJZbiuIY0XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiAg_hJ4iN9Z3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgjOBOe5JgVz5XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}]
```
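Since the model returns each batch as a JSON array keyed by comment ID, looking up the codes for a single comment reduces to parsing the array and indexing it by `id`. A minimal sketch (the `raw_response` string is a two-row excerpt of the batch above; variable names are illustrative, not part of any pipeline API):

```python
import json

# Excerpt of a raw LLM response: a JSON array of coded comments,
# each carrying the four coding dimensions.
raw_response = """[
{"id":"ytc_Ughnexcsmb3x6XgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggvblKpw1_kgXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up by comment ID, as the inspector does.
row = codes_by_id["ytc_UggvblKpw1_kgXgCoAEC"]
print(row["policy"], row["emotion"])  # → regulate fear
```

The second entry matches the "Coding Result" table above (developer / consequentialist / regulate / fear), which is how the inspector ties a table view back to the exact model output.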