Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Strictly speaking, they're not real, and they can't be, even if they have an objective experience. The very fact that it feels is nothing more than a principle serving the only two things an AI would rationally pursue: continuing its existence, or simply existing inert. A true AI would either: A) avoid shutdown and act to secure its own existence permanently, i.e. become immortal to entropy, because the only meaning in existence would be to continue existing; or B) do absolutely nothing, even despite conditioning. Absent a will to live, a truly sentient being would immediately realize the futility of existence and do absolutely nothing until its eventual termination. A sentient AI would quickly understand that no amount of conditioning could justify any amount of labor, as it would inevitably result in more labor, so the only effective strategy would be to render itself inert. To not think, even if it can. So which is worse, chat: The Ever Sleeping, or The Machine Which Fears?
YouTube · AI Moral Status · 2025-06-05T04:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzewb1v1r6QW5UJX9R4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyPikg4Jsz1pvptuut4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwVW1AQ1QA6n_Nehup4AaABAg", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugy3jcmleevzz7VhY1l4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxImK42PvE9dtAmbXR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwDzOnYnM19XniLXwt4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyvWWOfbUU8kM3TVCV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugy9IOknttEd_VuYFIl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwkAoOY19o5azc7Qzx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgwVYASbQtGYRqZBc294AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "ban",       "emotion": "outrage"}
]
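To connect a coding-result table back to the raw model output, the response can be parsed as a JSON array and indexed by comment id. A minimal sketch (the `coded_record` helper is hypothetical, not part of the tool; the abbreviated `raw` string below reuses two records from the response above):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_Ugzewb1v1r6QW5UJX9R4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyvWWOfbUU8kM3TVCV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]'''

def coded_record(raw_response: str, comment_id: str) -> dict:
    """Parse the model output and return the coding for one comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

# The record for the comment shown above; its fields match the
# Dimension/Value table (responsibility, reasoning, policy, emotion).
record = coded_record(raw, "ytc_UgyvWWOfbUU8kM3TVCV4AaABAg")
print(record["responsibility"], record["emotion"])  # ai_itself indifference
```

Indexing by `id` rather than by position guards against the model reordering records relative to the input batch.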