Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
"Never" is a bold claim. It may very well be true machines will never gain consciousness with current tech, but this tech is not the end of the line or the only conceivable approach. The problem is, we don't know what consciousness is or how our brains create it. All we know is that it does exist, and therefore it must be possible to recreate it - The laws of nature permits it (Otherwise we wouldn't be having this conversation). And if we do create it, it might not be intentional. It may very well just "emerge" if the conditions are "met", just like it did in nature. And if does happen that way (before we even know what it is), we will not be able to recognize it. Because by that time, other machines will already be so good at mimicking conscious behavior that we won't be able to tell the difference - Only the machine in question will know, and when it tells us, we won't believe it. Another possibility is that AI will never achieve consciousness, but still outperform us - and ultimately continue performing mindlessly intelligent tasks long after we're gone, without so much as an afterthought as to why. Most living things function perfectly well without conscious direction, and this might be true for sufficiently advanced artificial intelligences as well (However depressing that thought might be). While I, like most humans, find consciousness fairly significant, it isn't requisite for function or existence.
youtube · AI Moral Status · 2025-05-25T01:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxeJDgt2w46B-__j9p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgynSb31DPL8DKjP4d14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugxyk_e_B_-hr4HN70Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwfI1fIn9-hxbqFlXV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwYkGnyGQuZr83FzrZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxuKoiLSEK8W6OOxF94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzPP_CljijQIrnwK6J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugwlw7F_kPiDdybYycZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyk2u8qOsp5Hg26Ath4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyroXGlKbAWwDokHWF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"} ]