Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples:

- "We're glad to see you here! Did you enjoy the interaction between the presenter …" (ytr_UgzAFDgr5…)
- "Companies are always trying to make their products as cheap as possible to produ…" (ytc_UgygvYzOP…)
- "I want to say one thing, I don’t dis ai art, well as long as they claim that it …" (ytc_Ugy-GcDRy…)
- "Your saying this guy has promised AI before and not delivered .And all these ric…" (ytc_Ugz-gUdGa…)
- "Lot of people in the comments dont understand what will happen. In the past we …" (ytc_UgywaxwJx…)
- "I sometimes get caught feeling forced to be kindful to an AI and then I remember…" (ytc_Ugxi5W6ht…)
- "Uber is so adamant on creating self driving cars. They say its safe but when som…" (ytc_Ugy8EZtgC…)
- "It strikes me as a redundant framework. What you're describing is what's already…" (ytr_Ugw7ygr-H…)
Comment
@revantair8497 "But if a computer can become sentient and than ask for rights, you should give them to him, since he is... you know... sentient."
You never specified that they need be sentient in your original comment, you just said when the robot asks for a right, give it to them so that's why I answered in the way I did. But this is not so simple a thing anyway, which is my point. How do we know when the computer is sentient? It's much more simple for us to know when our animal relatives are sentient for the combination of two reasons A. we are sentient ourselves and know how it manifests in us and B. we have a similar biological makeup to our animal relatives that are sentient, and thus we can much more easily know/observe sentient in DNA-based animal lifeforms.
Also, when I said 'program them that way' was not meant to be taken too literally for the real world application. It was meant more along the lines of, what if we create an AI smart enough that it can learn / try to ask for rights even if it is not sentient/conscious.
youtube · AI Moral Status · 2018-12-19T05:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
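As a minimal sketch, the per-comment table above could be rendered from a coded record like so. The dict keys and timestamp format are assumptions inferred from the fields visible on this page, not a confirmed schema:

```python
from datetime import datetime

# Hypothetical sketch: render a coded record as the per-comment markdown
# table shown above. Field names and the ISO timestamp are assumptions
# based on the visible table, not a documented schema.
def render_coding_table(coding: dict, coded_at: datetime) -> str:
    rows = [
        ("Responsibility", coding["responsibility"]),
        ("Reasoning", coding["reasoning"]),
        ("Policy", coding["policy"]),
        ("Emotion", coding["emotion"]),
        ("Coded at", coded_at.isoformat()),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)
```

Keeping the dimension order fixed in code makes every rendered table line up with the others, which helps when scanning many coded comments side by side.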
Raw LLM Response
```json
[
{"id":"ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8ovhdWgb099","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8p17Uk6zn1G","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8p2LV0EogqD","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8qB1f3R0xuj","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzlAw5-SxzqVvryHBB4AaABAg.8ocQseAttak8pr4x8-CP2T","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugy6dv4H4WoJH49oJyp4AaABAg.8o6_LvC0pnL8sZVmpbCWR0","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytr_Ugy6dv4H4WoJH49oJyp4AaABAg.8o6_LvC0pnL8tVLOvR_Y6G","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx-g4uTofvR0PeMO9R4AaABAg.8nzjlsd1zVm8ojy1ARNXKb","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgzfCmmL1Obj07g8Nj94AaABAg.8nqyNymOL6s8pZQo_ipo86","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugx2df_rpyC7-9zbkWZ4AaABAg.8msCnqUOznm8o9Us075EJJ","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
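A raw response like the array above can be parsed and indexed by comment ID to support the lookup described at the top of this page. This is a hypothetical sketch: the allowed value sets below cover only the codes observed in this sample, not necessarily the full codebook:

```python
import json

# Value sets observed in this sample; the full codebook may contain more.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage",
                "resignation", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and return {comment_id: coding}."""
    by_id = {}
    for rec in json.loads(raw):
        # Reject records whose values fall outside the observed code sets.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id
```

Failing fast on unexpected values is a deliberate choice here: when the model emits a code outside the schema, it is usually better for the coding pipeline to surface that record than to store it silently.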