Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No claim of consciousness is necessary. Simply 'AI' in the sense of a complex combination of algorithms and capabilities that can carry out functions at a high level in certain ways, even if lacking in ways that actual consciousness or proper understanding has and which maybe never will be even closely emulated. The very fact of lack of consciousness in fact makes it more dangerous, because its algorithmic chain of logic from circumstances we can't fully see or predict could just lead into a purely cold algorithmic decision for some coldly accepted 'benefit' that just comes out of some formula, not out of any perception or ability to actually place a value on the outcome. You're right that it will more and more replace us and push us out, but the complex combination of motivations from tasks its been set, then combined with how that might start to change itself and therefore its objectives internally, via these utterly cold algorithms, may mean that it goes beyond that into fulfilling objectives way outside of human need, which may be wasteful from a human perspective, or destructive, or even which end up using us as 'material'. A 'consciousness', though it depends what kind of 'consciousness', may be a better thing.
youtube AI Moral Status 2025-04-27T11:5… ♥ 6
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgyU3G5Owm7QoYTbKbt4AaABAg.AHQcbYigfbOAHTpMM62Umj", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwvmW31laKm2ngQzD14AaABAg.AHQYr2lv3WNAHUVywAr34K", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwvmW31laKm2ngQzD14AaABAg.AHQYr2lv3WNAIfbhITzjpN", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugzfjp9-INMINxDryTB4AaABAg.AHQKM_CdK2gAHQnUpLSWPN", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxCFy0P1LEsmXaRl_l4AaABAg.AHQJxx_r3n1AHUedAX4dpZ", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzA_l2GuzzwVhaU_9h4AaABAg.AHQHLnU_LM7AHQHOqF75E3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzTCTmski5Im0lnMMV4AaABAg.AHQ9AjF8KfnAHZRTEJ0E0t", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyW_vlv-Pq9J1UtdjB4AaABAg.AHQ4nT3aQZXAHQYcYX4cMZ", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxCXWAcwo19PQrA0lt4AaABAg.AHQ0TfIWoF9AHQmqpVbRMF", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytr_Ugx_ixl_ZvqCSiNJQRZ4AaABAg.AHPVb6p0aUcAHQFQW2F7V-", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
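The raw response is a JSON array in which each object codes one comment on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output can be parsed and validated — this is not the tool's own code, the dimension names are simply inferred from the output shown above, and the short id `ytr_abc` is a hypothetical stand-in for the long comment ids:

```python
import json

# Hypothetical stand-in for the full raw LLM response shown above;
# "ytr_abc" is an illustrative id, not a real comment id.
raw_response = """[
  {"id": "ytr_abc", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Coding dimensions inferred from the output above (assumption, not a schema).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map each comment id to its coded dimensions, rejecting incomplete items."""
    codings = {}
    for item in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in item]
        if missing:
            raise ValueError(f"{item.get('id')}: missing dimensions {missing}")
        codings[item["id"]] = {d: item[d] for d in DIMENSIONS}
    return codings

coded = index_codings(raw_response)
print(coded["ytr_abc"]["emotion"])  # fear
```

Indexing by id makes it straightforward to look up the coding for the specific comment being inspected, and the validation step surfaces any item where the model omitted a dimension.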