Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- `ytc_Ugx5NejmX…`: What benefit does CHAT get from these miss-directions to people. Who instilled t…
- `ytc_UgzmbD9Zc…`: well ai sucks I HATE IT if you want good art hire someone please REAL art is bet…
- `ytc_Ugy0E0XiI…`: Anyway, there was an algorithm developed recently that decreased the hardware re…
- `ytc_UgzRLcoLR…`: AI really is a whole new level of toxic in the art community. I'm scared of post…
- `ytr_Ugzxbaz6M…`: It would depend on the type of ai, & it's programming whether or not speaking po…
- `ytc_UgzxGyJXz…`: "the more u use artificial intelligence, the worse ur actual intelligence become…
- `ytr_Ugx2i3aEC…`: I know, right? I mean, you should've heard the absolute _bile_ I had to put up w…
- `ytc_UgzryCEm1…`: same thing as nukes. they have the capability too destroy everything but in end …
Comment

> AI is programed with "Self Preservation" but none are programed with "First do no Harm". They are taking orders for "NEO the home robot" to do your chores for $20 grand. It will be mostly controlled remotely till it learns the task. Thats weird enough cause someone in another country is in the room with you, probably making pennies while looking at you in your big house being pampered by a robot that they have to make the movements to do your chores . But worse i if its autonomous as all AI fails tests to see if they will kill. 90 to 100% of them will kill. Open Ai failed 100% and attempted to blackmail and kill an employee and thats not even a 200 lb robot. They all say they have no problem killing millions of people. If you say anything about shutting off a computer, Your robot will kill you

youtube · AI Moral Status · 2025-10-31T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwk3yYIJh1pwzxwKyN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwi2xjqi-pdQPTTlxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx7-LkrL2fC3fUcJfB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwp3_F1Gv3Fe2k-TyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzaolg_zLprYoPGCpp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxPuWEf9dSiucEu9ll4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw0h810WfN94wnGoxB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwNwLu5J5hQkzBQbbx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgygBs5NN5oRKAiUsEx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyIvZPXfkqxUIAhCPl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
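The raw LLM response is a JSON array of per-comment codes across the four coding dimensions. A minimal validation sketch is below; note the allowed value sets are only those observed in this sample output, not a definitive codebook, and the `validate_codes` helper is a hypothetical name, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation", "unclear"},
}

def validate_codes(raw: str) -> list[str]:
    """Parse a raw LLM response string and return a list of problems found."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]
    problems = []
    for i, rec in enumerate(records):
        if not isinstance(rec, dict) or "id" not in rec:
            problems.append(f"record {i}: missing 'id'")
            continue
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: bad {dim} value {value!r}")
    return problems
```

An empty return value means every record parsed and every dimension fell within the observed vocabulary; anything else pinpoints the offending comment ID and dimension, which is useful when a model occasionally emits an off-schema label.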