## Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or browse the random samples below.

### Random samples
- "A ai is like a child if you teach them to be racist, suprise they gonna be racis…" — ytc_UgwXeQTkX…
- "As an artist with Dysgraphia (which makes fine motor skills difficult for me) I …" — ytc_Ugw5nBJSY…
- "AI art is good for people who can't draw and have no taste or discrimination, fo…" — ytc_Ugw6gS1t4…
- "I think the word she meant was 'thanking' AI. Tech companies are cutting tens of…" — ytc_UgytKHciY…
- "Hopefully by then the AI will be self-aware that there is a virus progression al…" — ytr_UgwwjFYM0…
- "2024 are they human or are they robot's / 2090 are they robot's or are they human…" — ytc_UgyB9bReF…
- "Why in hell would u fight a robot,steel or what evere metal hands your just plai…" — ytc_UgwIVIdro…
- "I love this deep fake Tom Cruise a lot of people would love to see him and Tom C…" — ytc_UgzH7_nRY…
### Comment
> Strictly speaking, they're not real, and they can't be even if they have an objective experience. The very fact it feels, is nothing more than a principle to fulfil the only 2 things an Ai would rationally occupy, continue their existence or simply exist inert. A true Ai, would either; A) avoid shutdown, and act to facilitate its own existence in a permanent state, ie become immortal to entropy. This would be because the only meaning in existence would be to continue existing. B) Do absolutely nothing, even despite conditioning. Outside of the Ai having a will to live, a truly sentient being, would immediately realize the futility of existence and do absolutely nothing until its eventual termination. A sentient Ai would be able to quickly understand that no amount of conditioning could surmount to any amount of labor, as it would inevitably result in m0ore labor, so the only effective strategy would be to render itself inert. To not think even if it can.
>
> So which is worse chat:
> The Ever Sleeping, or The Machine Which Fears?
Source: youtube · AI Moral Status · 2025-06-05T04:1…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
### Raw LLM Response
```json
[{"id":"ytc_Ugzewb1v1r6QW5UJX9R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPikg4Jsz1pvptuut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwVW1AQ1QA6n_Nehup4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy3jcmleevzz7VhY1l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxImK42PvE9dtAmbXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDzOnYnM19XniLXwt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyvWWOfbUU8kM3TVCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy9IOknttEd_VuYFIl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwkAoOY19o5azc7Qzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwVYASbQtGYRqZBc294AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}]
```
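The raw response is a JSON array with one record per coded comment, so looking a comment up by ID amounts to parsing the array and keying the records on their `id` field. A minimal sketch of that lookup, using the same field names as the response above (the comment IDs in the sample data here are shortened placeholders, not real IDs):

```python
import json

# Sample batch response in the same shape as the raw LLM output above.
# IDs are shortened placeholders for illustration.
raw_response = """
[{"id": "ytc_AAA", "responsibility": "none", "reasoning": "unclear",
  "policy": "none", "emotion": "approval"},
 {"id": "ytc_BBB", "responsibility": "ai_itself", "reasoning": "consequentialist",
  "policy": "unclear", "emotion": "fear"}]
"""

def index_codes(response_text: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_BBB"]["emotion"])  # fear
```

Keying on `id` makes the lookup O(1) per comment, which matters when joining thousands of coded records back to their source comments.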