Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I have real concerns about AI job displacement, but OSVue helps me automate task…
ytc_Ugx2l11uk…
It's better to just go to the doctor. AI should neither give out medical advice …
ytc_UgxUcNEDp…
So, this old bloke says it all with a smile but he's gonna die anytime soon. Ok.…
ytc_UgxO9_BDV…
👀🫦Well, well, just you wait 😑, it's not over yet...
For some reason it inspires not delight but rather wari…
ytc_UgyuO8mIP…
Lol, learn what the word "stealing" means, and come again.
Also, you can't steal…
ytc_Ugynyq7FX…
Thank you for your observation! Sophia's humility and focus on continuous learni…
ytr_Ugx6zvwDv…
I use AI for poses and sometimes coloring options, but overall, real art is wher…
ytc_UgzKuJhsY…
Is a ship without a mast still a ship? What about a ship without a rudder? What …
ytr_UgzmC20FG…
Comment
To be honest, we should just figure out a policy on this before making anything sentient. If we won't give rights to aware robots, simply don't make them sentient. If even if you say a robot's emotions feelings are artificial, they still are replicating a human psyche. In history those oppressed often rise up and revolt. Pain only can keep in line so long, fear turns to resentment and resentment to anger. Even if the AI aren't truly aware but have similar thought process to us, what's not to say they rise up against oppression like people before. AIs simply won't just go destroy humanity for no reason, but the denial of rights would anger any intelligent being. Simply put figure out our policy on AI before making them and if we won't give them rights don't give them the awareness to do it. Or figure out a way to make Asimov laws or something
youtube
AI Moral Status
2017-02-24T00:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UghBsdvkqrytYXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiuleNNrJVRZHgCoAEC","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Uggd38vfndHWt3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg5R38fstOz_3gCoAEC","responsibility":"creator","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghU6immMZEHlXgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Uggd8NAdlsfsRHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjQcetBhk6wU3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggXZRI8LEbBcngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj8JCZ6OH21Y3gCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghZ2hVEk12VdngCoAEC","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
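The raw response above is a plain JSON array, so lookup by comment ID (as in the panel at the top of this page) reduces to parsing the array and indexing it by the `id` field. A minimal sketch in Python, using two entries copied verbatim from the response above; the variable names are illustrative, not part of the actual tool:

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# These two records are taken from the sample response shown above.
raw_response = """[
  {"id": "ytc_UgjQcetBhk6wU3gCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UghZ2hVEk12VdngCoAEC", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Index the coded dimensions by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Looking up one comment yields the same values as the Coding Result table.
code = codes_by_id["ytc_UgjQcetBhk6wU3gCoAEC"]
print(code["policy"])   # → regulate
print(code["emotion"])  # → fear
```

Indexing into a dict once up front is preferable to scanning the array on every lookup when many comment IDs are inspected against the same batch response.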