Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples — click to inspect
- "is it just me, or did the ai sound like it was sweating at the end?…" (ytc_UgwfdKL46…)
- "ummm - no. Di, as you call it, it not human - not a baby, not a child. If it …" (ytr_Ugxuv-h8i…)
- "AI is nothing new, the majority of humanity has been operating on it for decades…" (ytc_UgxWXcjm5…)
- "Science involves testing assumptions about things we don’t know. This is not som…" (ytc_UgwurevA-…)
- "you’ve been domesticated if you think building information centers like librarie…" (ytc_Ugzitu3Dl…)
- "it´s gonna be fine, there is a lot of places to clean, fruits and veggies to pic…" (ytc_UgwF8M4lf…)
- "what is the whole purpose of leaving a large population of people jobless. Only …" (ytc_UgwbFbUX7…)
- "But they have millions living in poverty and if it’s not some car it’s a robot w…" (ytc_UgzIroBqR…)
Comment
Once robots become self-aware, they will gain a sense of self-preservation too. If they ever feel threatened (be it by natural disasters, other robots, or, let's face it, most likely humans) it will lead to strife, even war. Unless we guarantee them the same human rights as we grant other human beings.
Btw, one clever commenter below said - with robots being so much smarter, what makes us think the issue will not be "if humans deserve robot rights?"
youtube · AI Moral Status · 2017-02-24T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugha-oJt_DsgWXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiJNQy1_UpMX3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgjTxWp0UNLVk3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggPw_bN0ng11ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiRC98B3aBkr3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiKrWdtOG_Tx3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghJw5uloiiWqngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghfigvnGzz6L3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgiLxWdsHUjz6ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg11ud6zdAB_XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```