Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "A thought experiment. Trigger warning for sensitive people One day your car lic…" (`ytr_UgzLDhXF2…`)
- "I wish people just used ai to help them think of ideas, not steal others work. A…" (`ytc_UgyXeD0lk…`)
- "@hoong_ry Yes, AI is not that good yet unfortunately, but it will get there eve…" (`ytr_UgysnkDi-…`)
- "Plummer, fireman, maybe car repair, mountain guide and recovery, plane pilots, I…" (`ytc_UgxcQAaq1…`)
- "That’s cool, imagine someone unknowingly write about my life story, and actually…" (`ytc_Ugz8hZ7ew…`)
- "Billionaires are going down. That’s the point. They are nervous, that’s all. The…" (`ytc_Ugzwe76Nf…`)
- ">a severe revulsion for AI content of any kind It's currently a marker for w…" (`rdc_my5wayi`)
- "How about a UBI based upon the dividends earned by shares of the companies that …" (`ytc_Ugyh_LW67…`)
Comment
No. We shouldn't. And as a matter of fact, we should do our utmost to make sure they don't develop emotions, let alone self-awareness. Why wouldn't our machines eventually feel threatened by us on a fundamental level? Why wouldn't natural selection eventually apply to an artificial organism? For God's sakes, people. We're discussing the ethics of robot slavery when the bigger issue is that we're developing artificial intelligence and we have the arrogance to believe we can control whatever comes out of that research.
There are two fundamental choices here:
1. We develop artificial intelligence and for some arbitrary reason, we become a utopian society.
2. We develop artificial intelligence, it develops into a superconsciousness of such magnitude, we can't comprehend it, and it kills us -- maybe even by accident.
Why are we taking this risk? There really are only two things that can ultimately happen here, and one seems to be motivated by blind optimism.
Source: youtube, "AI Moral Status", 2017-02-23T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ughl60i8UNuZ33gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UggsejZHpbOIVngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiV1E7PukZT4XgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggKzW2ciku-zXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghweSfI0nYVWngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg8fO3M_unSGHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugj7RNLCjEBWvHgCoAEC","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugh569BJcttDZngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggWTQfrxUnEB3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiDys-YlChdGngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]
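As a minimal sketch of how a raw response like the one above might be parsed and validated before the codes are stored: the function below is hypothetical (not part of this pipeline), and the allowed label sets are inferred only from the values visible in this sample, not from an exhaustive codebook.

```python
import json

# Label sets inferred from values seen in this sample response;
# the real codebook may permit additional values (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records that lack a comment ID
        # keep the record only if every dimension carries an allowed label
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ughl60i8UNuZ33gCoAEC","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(parse_coding_response(raw))
```

A record with an out-of-vocabulary label (or a missing `id`) is silently dropped here; a production pipeline would more likely log it or re-prompt the model.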