Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I have what could be considered a disability for art, in the form of aphantasia.…" (ytc_UgzAjSRCO…)
- "I actually still like a picture even if it's AI and I don't really see any argum…" (ytc_UgxohREWK…)
- "There are around 1.6 billion human drivers with two eyes and (some) intelligence…" (ytc_Ugz-x6lO6…)
- "the fact that the ai is almost consistent with its animation, which is hard for …" (ytc_UgznthEvf…)
- "I think it’s a dangerous mental illness to ask AI anything considering its prove…" (ytc_UgzxWmjwX…)
- "What a fucked up time we're living in virus to kill off the masses making AI the…" (ytc_UgwXuKmUl…)
- "“Help, I can’t stop making this potentially harmful thing that will surely be mi…" (rdc_jkej7rf)
- "Typing a prompt into AI is equivalent in nature to commissioning art from an art…" (ytc_Ugwp3ikN3…)
Comment
There’s no such thing as verifying the sentience of things that are not you. You can only directly observe your own, AND ONLY YOUR OWN, sensations, full stop.
I’m tired of hearing us talk a big game of how do we prove an AI is sentient when we can’t even do the same thing for ourselves.
Cut to a couple decades from now, and there might just be robots saying “oh humans can’t actually feel things! It’s just a series of physical and chemical reactions facilitated by ion channels and neurotransmitters! There’s no actual understanding going on! It’s just chemistry!”
youtube · AI Moral Status · 2023-08-20T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz4LmweJWCyvg_WLT54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlUAjvg07Gfn40e_94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyE2bKu9n3YwAqQ3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8pVMDZ8MhE1Gyf8Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwE0UsrNw6z2lzTUHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6cwHuglGeZ4GYB-p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMCNAnH3scCR_NkDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw071Ztqhkg0exG7p14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNbshG2oVOuTOZo294AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwsFiHWjq4cPPisKdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
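As a sketch of the "look up by comment ID" step: the raw batch response above is a JSON array of coded records, so it can be parsed and indexed by `id` for direct lookup. This is a minimal Python illustration (not the tool's actual implementation), using a two-record subset of the response shown; the ID queried is the one whose coding matches the result table above.

```python
import json

# Raw LLM response for one batch, as returned by the model
# (two-record subset of the array shown above).
raw_response = """
[
 {"id":"ytc_Ugz6cwHuglGeZ4GYB-p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwsFiHWjq4cPPisKdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
"""

# Index the coded records by comment ID for O(1) lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the coding for the comment inspected above.
record = codings["ytc_Ugz6cwHuglGeZ4GYB-p4AaABAg"]
print(record["emotion"])  # -> outrage
```

In practice the model's output would be validated before indexing (e.g. checking that every record carries all four dimensions and that each value is in the allowed vocabulary), since a malformed batch would otherwise surface only at lookup time.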