Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- Real and then the AI starts saying stuff like "you start to feel bad for what yo… (`ytr_UgyKpMVI1…`)
- Yeah, and what you completely overlook is the fact that "web" and "apis" are a f… (`ytc_UgyeyrSR0…`)
- I bet you just didn’t use the right prompt. If you use prompt that simple as sai… (`ytc_Ugzhqjfps…`)
- AI is killing our planet, even if it replaces us our planet and economy will col… (`ytc_UgwIjUaBp…`)
- I thought it was only a joke to use chatgpt as legal counsel... figured at most … (`ytc_UgxIL2h1L…`)
- Vegeta: what is your power level? / AI: 7,000 / Vegeta: then how is your hair gold… (`ytc_Ugz9YHai4…`)
- Somehow Sophia sounds more human than this interviewer. In every case that human… (`ytc_UgxcrsLAY…`)
- Nobel prize awarded to AI? Where do you find this stuff? 🤣🤣 Btw, good luck to an… (`ytc_UgywofBod…`)
Comment
If an AI is self-aware, it deserves human rights. Even if it doesn't feel pain, it can still possess self-actualization that some might want to infringe upon. Moreover, even if it can't feel pain as we know it that doesn't mean it cannot understand the problems that being physically damaged can bring. Pain? Not without the proper sensors, but fear? Any consciousness that can comprehend and actively avoid permanently losing that consciousness will understand it in some way.
As for robot slavery? We're already on the cusp of automating most labor jobs with unaware AI. There's no reason to think conscious AI would be forced to work in the stead of machines that cannot think, feel, or desire. Self-actualization simply gets in the way of industry.
youtube · AI Moral Status · 2017-02-23T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgirlYIHkyXlqngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugi5Ux9W9vMOC3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghDHZY5DgGNmngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgjqN_1LJDWifngCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ughq8hG8S_w3HHgCoAEC","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjtTf3MYBL1g3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UggJYytLH3A09HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgjlwgYTfesng3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UggEAVggTiMizXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UggJn9-jGMEyhHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
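The raw response above is a JSON array, one object per comment ID, with one value for each coding dimension. A minimal sketch of how such a response could be parsed and validated is below; the allowed label sets are an assumption inferred only from the values visible in this response, and the real codebook may define more categories.

```python
import json

# Hypothetical label sets, inferred from the values visible in this
# raw response; the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and index the codings by
    comment ID, rejecting any value outside the allowed label sets."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row[dim]!r}")
        # Keep only the coding dimensions, dropping the ID key itself.
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With a response in the format shown above, `parse_coding(raw)["ytc_UggEAVggTiMizXgCoAEC"]` would return the dimension-to-value mapping rendered in the "Coding Result" table.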