Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It's getting easier for AI moral dead heads to design robots that appear to be human. A lot of people are so emotionally shutdown, they are already indistinguishable from robots. Just listen to most US military or police spokespeople, or for that matter to most economists, engineers and scientists. They're as creepy as droids. And most TED speakers are droids on happy pills. Hardly a trace of genuine human expression to be found; but plenty of artificial displays of emotional responses exhibiting what they think their audience wants to see.
Why shouldn't robots have the same political and legal rights that these self degraded human beings have?
Those who find companionship possible with the newly manufactured droids may have already been satisfied in 'relationships' with human droids.
The Turing test is obsolete. Machines haven't won; humans have defeated themselves by losing their humanity.
Source: YouTube, "AI Moral Status", 2017-04-29T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgivtIdgVocyFXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgiYKx3M8o_e-XgCoAEC","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiMsYLdAJDJn3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgibSee-lUws8HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UghV74iRtFuu9XgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiZm51uxnz_23gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjpMHMadcrd5XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugi45nUFMM_AvXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghOeUpr1CRDF3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiJ0QWK2DdTGHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
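A raw response like the one above can be parsed and sanity-checked before the coded values are stored. The sketch below is a minimal example, assuming the value sets inferred from the sample records (they are not an official schema): it parses the JSON array, keeps only records whose four dimensions all take known values, and tallies one dimension.

```python
import json
from collections import Counter

# Allowed value sets inferred from the sample responses above (assumption,
# not an official schema for this tool).
ALLOWED = {
    "responsibility": {"developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "virtue", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "mixed", "fear", "indifference", "approval"},
}

# A two-record excerpt of a raw LLM response, for illustration.
raw = '''[
{"id":"ytc_UgivtIdgVocyFXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgiYKx3M8o_e-XgCoAEC","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]'''

def validate(records):
    """Keep only records whose coded values all fall in the allowed sets."""
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

records = validate(json.loads(raw))
policy_counts = Counter(r["policy"] for r in records)
print(len(records), dict(policy_counts))  # 2 valid records; policies: ban, unclear
```

Filtering before tallying means a malformed or off-schema record from the model degrades one data point rather than corrupting the aggregate counts.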