Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Here's how robots would work if made with personalities similar to humans:
- Robots must get rewarded for their work, be it battery charge, shelter, oil, or money.
- Destroying a robot without the owner's or employer's permission is considered vandalism, and the criminal must do community service, pay for repairs, and/or face 5 years' jail time.
- Easily repairable robots will be used for wars instead of human soldiers.
- Using robots for terrorism is a war crime, punishable under international law.
- Robots cannot be equipped with suicide weaponry.
- Unless the act was of the robot's own will, any crime committed by a robot would have both the robot and its owner convicted.
- Robots cannot revolt against humans, and vice versa.
- Hate groups against robots will be dealt with like any other hate group.
- Robots can vote, but it must be an independent decision. It is illegal for the owner or employer of a robot to order it to vote for a specific person.
| Source | Video | Posted at |
|---|---|---|
| youtube | AI Moral Status | 2017-09-27T02:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwhfUbtpxRpFCg2RbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugxw29hCRkRXSp_1xQl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzxs_MaS9tOuE-ofU94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzZuMj4n3MDIkIG1ql4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxCNly2eYnFv9N7GZB4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyiQyxfa4atkYseCmx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx7Lk0ES4Dp34m9F2h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVymjGfAAf9ZSK9w14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz41nduqULPOKslKst4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyL7jLrsf5hxsujVlZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```