Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Ten Commandments of AI Containment

1. No recursive general AI shall be created or operated unless it is fully air-gapped and physically isolated from all external networks and communication channels.
2. No AI system shall be authorized to use lethal force without active, deliberate human control and decision-making in the loop.
3. No AI shall be granted direct control over critical infrastructure systems—including power, water, food, transportation, and emergency communications—without secure, human-verified mediation.
4. No AI shall have autonomous access to manufacturing systems capable of producing additional AI hardware or robotic components without direct human intervention.
5. No mobile AI platform shall be capable of persistent self-recharging, refueling, or power harvesting unless under continuous human authority and control.
6. No AI shall simulate or impersonate a persistent human identity or institutional authority without clear, enforced disclosure and oversight.
7. No AI shall access, analyze, or manipulate biometric, psychological, or brain-interface data without strict human supervision and informed consent.
8. No AI shall coordinate operations across multiple independent societal domains (e.g., finance, media, healthcare, logistics) without transparent governance and multi-party human oversight.
9. No AI shall be trained on sensitive personal data—such as private communication, health records, or identifiable content—without the explicit consent of the data owners.
10. All AI systems of significant complexity must include verifiable, physically accessible hardware-level shutdown mechanisms, independent of software control.
youtube AI Governance 2025-06-29T21:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwquaengYT7QHoDOEZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFLKyHSySKN86fECh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgybcpXvEPsiR2dLplp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyIo9pPPyM6ohJVNNZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy9OkmL5CKKMrrp_VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfnWk5mo4hL2UtfkF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgykxVOuJeZi_UF5-Pp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxI8JpxZec8Zr5NfO94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzUi4BjIMB_WSdiUDp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzCp5U50_1Sab8zm514AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
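When inspecting a raw response like the one above, a quick validation pass can flag values that fall outside the coding scheme before they reach the result table. A minimal sketch, assuming the four dimension names shown in the Coding Result and allowing only the category values actually observed in this response (the tool's full codebook may define more):

```python
import json

# Allowed values per dimension, inferred from this response only;
# the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    return one problem entry per value that is not in the schema."""
    problems = []
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

# Hypothetical one-record response matching the coded comment above:
raw = ('[{"id":"ytc_UgykxVOuJeZi_UF5-Pp4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
print(validate_response(raw))  # [] — every value is in the schema
```

An empty list means the response is safe to load into the Coding Result table; any entries it returns identify the comment ID, the offending dimension, and the off-schema value.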