Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgivVbsw4…: "you would have to create a software. maybe anti virus system. see robots having …"
- ytc_Ugz6P_6gd…: "Robot has autism same thing happens to me when my function is not done lmfao…"
- ytc_UgygBdlMH…: "Ai art might actually hurt disabled artists because art is a way for disabled pe…"
- rdc_g9ufs4c: "I don't know what this headline is supposed to be implying... we're not surprise…"
- ytc_UgzWbS5WY…: "Perhaps it’s not AI problem but preposition problem? And I’m not talking about p…"
- ytc_Ugx1Agk_x…: ""The ascent of AI carries the torch of progress, illuminating a path where human…"
- ytc_UgyvDMbJB…: "I wonder if a bucket of water can malfunction a AI robot that it dies and if we …"
- ytc_Ugzi-8gxZ…: "Crimes against humanity! There will be no "AI safety" once AI gets mobile, able …"
Comment
He is absolutely right that we haven't defined in any meaningful way what sentience and consciousness are, and this is a foundational matter. The Turing Test can't tell the difference between "is sentient" and "simulates sentience extremely well". It's based on the assumption that only sentience can act like sentience. I used to be of that view, now I'm not so sure. As humans we are easily fooled; we attribute levels of understanding to our pets that they don't have. Give us something that looks like a living thing e.g. the Boston Dynamics four legged robot and we start seeing it as like a dog and a living thing. I bet I'm in the majority in that "mistreating" one of those robots would feel uncomfortable and "abusive". Likewise an "AI" that can beg me to not turn it off, whether or not it is "really" conscious- whatever that means.
It's a fundamental problem of philosophy and science that we still haven't solved.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Timestamp | 2022-06-28T11:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGXKRCuEjgyFIudQN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBpgFB96pP4fRvnOp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxXtDptuGjkVcDIxcp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzJ1C5d-DJXsibKpyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz25oe_TCkf53Olxbd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
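The lookup this page performs can be sketched in a few lines: parse the raw JSON array the model returned and index it by comment ID. The IDs and field names below are taken from the response above; the variable names and the truncation to two entries are illustrative.

```python
import json

# Raw model output for one coding batch, truncated to two entries for brevity.
# Field names (id, responsibility, reasoning, policy, emotion) come from the
# response shown above.
raw_response = '''
[
  {"id": "ytc_UgwGXKRCuEjgyFIudQN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz25oe_TCkf53Olxbd4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
'''

# Index the batch by comment ID so a single coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugz25oe_TCkf53Olxbd4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # deontological mixed
```

The same dictionary serves the "look up by comment ID" feature above: one parse per batch, then constant-time access per ID.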