Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
In my opinion, i'd give robot rights to robots that function beyond simple repetitive mechanichal tasks like common engines.
The robots from Megaman,iRobot,Terminator,Robocop,etc. fit the criteria of having basic functions, but have the capability to make their own decisions without the assistance of their manufacturers or whoever's currently using them, therefore they're allowed to have robot rights.
basically having an Ai, the will/desire to preserve one's own existance & the capability to act independently in the same range as humans do gives them full access to the rights.
still, we *do* need to give them laws so they won't try to dominate us, we *did* create them, after all, if they'd need a place to live, send them to a compatible planet, monitor their activities & keep in touch to prevent secret uprisings.
Source: youtube · Video: "AI Moral Status" · Posted: 2017-02-24T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugiho-tsco0HsHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgihP4M0zuJ2L3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugi6zFhOrnv24HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi5s15A-5P3A3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UggpOpWwDB4wVHgCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgheShPcV_JPhXgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UghNrYEViz_YengCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UghLJVnLMJarmXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgjyEqoBSc9RuHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgiUnh0YmHk5LXgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
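The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of how such a batch might be parsed and sanity-checked before the values are stored (the field names match the response shown; the allowed value sets and the `validate_batch` helper are assumptions inferred from the values visible in this dump, not the project's actual codebook):

```python
import json

# Allowed values per dimension, inferred from the sample batch above.
# Assumption: the real codebook may define categories not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"liability", "ban", "none", "regulate", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown values."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with "ytc_" (comments) or "ytr_" (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugiho-tsco0HsHgCoAEC","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"liability","emotion":"approval"}]')
batch = validate_batch(raw)
print(len(batch))  # 1
```

A check like this catches the common failure modes of structured LLM output (malformed JSON, invented category labels) before the batch reaches the results table.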