Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lines of code are still objects in reality, and they can cause a cascade of effects based on how they interact with the rest of the world. Regardless of how you feel about metaphysical unknowns like sentience and consciousness, it's obvious from current AI systems that the lines of code that compose them are quite formidable. They can take some kind of objective, specified by human users in natural language, and take actions which further that objective (e.g. writing a computer program that does what you want). It's easy to imagine a slight variant on this technology, where the system has been trained to pursue some goal regardless of how it's prompted by human users. At that point, it doesn't matter whether in some metaphysical sense the model "has desires." It's still capable of taking the world as it stands, and steering the future towards some kind of convergent outcome. This is, in effect, the pursuit of goals, and really a rather central example thereof, since the goals don't change as you change the model's context. The central problem of AI safety is: how can we ensure the goals we give AI are aligned with humanity's best interests?
youtube AI Governance 2025-10-15T20:2… ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOroJ4CwzpY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJVBpDg55d","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJg0pXwrqk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOK35n-HOAy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOLn7VR94Yu","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwD6mxL7-9JP2eZp914AaABAg.AOJ6GCEnRAKAOOUZdBK_WY","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyY5iyOMTQCJJ3XLsp4AaABAg.AOJ0qCM6cT6AOLA_D6i4Mk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
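The raw response is a JSON array with one record per comment ID, each record carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response might be parsed and validated in Python; the allowed value sets and the `parse_codes` helper are assumptions inferred from the labels visible in this export, not the tool's actual schema:

```python
import json

# Candidate label sets per dimension, inferred from the values that
# appear in this export (hypothetical, not a definitive schema).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "indifference", "mixed", "approval", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Usage with a single illustrative record (hypothetical ID):
raw = ('[{"id":"ytr_example1","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"fear"}]')
codes = parse_codes(raw)
print(codes[0]["policy"])  # prints "liability"
```

Validating against a closed label set catches the most common failure mode of LLM coders: a response that is valid JSON but invents a label outside the codebook.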