Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@radscorpion8 It's not unreasonable to imagine that a super-intelligent system can be designed to check in with its handlers from time to time. Let's ignore for the moment the challenge of choosing trustworthy AI handlers to set such a system's directives. Let's imagine that control is in the hands of the most competent and ethical people, and that during one of these tests the humans "in control" decide they wish to modify the AI system for whatever reason. Knowing its handlers to be less smart than itself, the AI system has an open field of possibilities: it might comply, it might feign compliance, or it might wrangle control back. Since we are already seeing scheming behavior arise in less advanced systems, general confidence that AI labs will be able to root this behavior out while racing to build superintelligence is very low. Again, it's not impossible to build a superintelligence that strives to keep itself aligned with human goals. It's just harder than simply continuing to scale these systems at whatever cost.
youtube · AI Governance · 2025-10-16T18:0… · ♥ 3
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOroJ4CwzpY", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJVBpDg55d", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJg0pXwrqk", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOK35n-HOAy", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOLn7VR94Yu", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgwD6mxL7-9JP2eZp914AaABAg.AOJ6GCEnRAKAOOUZdBK_WY", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgyY5iyOMTQCJJ3XLsp4AaABAg.AOJ0qCM6cT6AOLA_D6i4Mk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
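To inspect the model output for a specific coded comment, the raw response can be parsed as a JSON array and indexed by comment id. A minimal sketch, assuming the raw response is valid JSON in the shape shown above (the `by_id` helper and variable names are illustrative, not part of the coding pipeline):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Abbreviated here to two records in the shape shown above.
raw = """
[
  {"id": "ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q",
   "responsibility": "developer", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

records = json.loads(raw)

# Index records by comment id so a single coding can be looked up directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Looking up the id of the comment above recovers the same dimension values shown in the coding-result table (responsibility = developer, policy = regulate).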