Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a comment by its ID, or pick one of the random samples below (previews and IDs are truncated):
| Sample comment (truncated) | Comment ID |
|---|---|
| That's a big assumption. Self aware AI won't need to sleep. It might not be pron… | ytc_UgwEJr3pV… |
| What is heard during this video is: "AV's are dangerous, therefore we should con… | ytc_UgxcJ6ksD… |
| You might want to sit back and take a good long hard look of all the evil that's… | ytr_Ugxtt1T7b… |
| Thank you for appreciating Sophia's balance of wisdom, respect, and efficiency. … | ytr_Ugx5VQkND… |
| Sam Altman... one of the worst human beings to exist in the current present day,… | ytc_UgxuxjsBj… |
| Define bad behavior because to me it seems like the test revealed that A.I is mo… | ytc_Ugy5zseSn… |
| Why not aim for AI which replaces the upper management, it would save a ton of m… | ytc_Ugzb072H7… |
| @Laszer271 I do understand your frustration, and in some ways I also often fear … | ytr_Ugzz5FgId… |
Comment
Robots have no moral compass, they cannot feel pain nor have a conscience. We humans are, in a way, hindered by this. Example: when we humans engage in war, the "more civilized" of us fight fair, while other will kill without impunity. If we realize the idea of war is to win, and destroy our enemy it would make perfect sense to do just that. However we tend to fight with rules, rules that cause one side to be hindered. For example we have enough fire power to totally destroy problem nations like, Iran, Syria, but we don't use our full might. We could carpet bomb their interior and kill their heads of state, but we don't, because we try to be good. (for lack of a better word) Robots don't care, they could carpet bomb a nation and not care, because they can't care. We need to listen to what the male robot is saying, remember they do not lie, he said in 20 years they will do everything we humans do.....believe him! they may do everything we do, because we may not be around...they may have destroyed us n 19years....
Source: youtube · AI Moral Status · 2019-11-13T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy_dxmmncmuvzWeE6N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1xonTsVGU5oiHe8p4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxx_wJ6FtiN7JIyksl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxArRWSM0DODO8qyjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz6_YnHzUI3Ymn_AUZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwlmfTJIEiR-4gjN3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLx0fduOHFRywsRTd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwoVZ79amJM6FsmEpp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdKERlBuIvnA9zfel4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyoYO7XrjQTZyK2Mgl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```