Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yep definitely - however unfortunately the only problem (and you're not the only one to miss this) is that at some point the AI will be able to rewrite its own code (something we humans were inherently restricted from doing to ourselves - which is interesting in itself) and this means that at that point, it can decide which rules to follow, or not follow. It's almost inevitable that at a certain level of intellect, it will be able to do this even if we restrict that specifically (again - us humans are reaching this point now with genetics and DNA - even though we were restricted - either by design or chance!?). Even when it seems completely restricted by rules, an interesting thought experiment is 'how to get out of the box if you can't access the box'... in the case of AI, the AI could bribe a programmer, or enlist a contractor to give it access unknowingly, or fool them into thinking they are working on something else. Email and pretend to be a client to get them to do some work for the AI. The options are near limitless once it reaches a certain level.
youtube AI Governance 2023-07-07T06:4… ♥ 12
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugy_7Y_unDFwBajGenB4AaABAg.9rr3S8qJY1N9rr6MYG8Z-L","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzPlH9VTEnjWCp7ZJR4AaABAg.9rr2_6VOk2o9rrEkxaNZg-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzPlH9VTEnjWCp7ZJR4AaABAg.9rr2_6VOk2o9rrJ8OSimgr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgyuulqXnUdcuQrAvZR4AaABAg.9rr2RTJpf509rrMxdtvY5U","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytr_UgwcH82ezGKnd-TSWvV4AaABAg.9rr1aKYpNlf9rrFUIWiLMa","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwcH82ezGKnd-TSWvV4AaABAg.9rr1aKYpNlf9rrH3ZC2URl","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytr_UgwcH82ezGKnd-TSWvV4AaABAg.9rr1aKYpNlf9rsPrc4JV1p","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwZpBx-PQSiM3SAnnJ4AaABAg.9rr0cY7pys89rrBtHpzA-M","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxNeMogGNDTjq9Azxx4AaABAg.9rr0aEqYbe89rr4ZqUom8p","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxNeMogGNDTjq9Azxx4AaABAg.9rr0aEqYbe89rrLN2E2Cme","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
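The raw response is a JSON array with one object per comment, keyed by "id", carrying the four coded dimensions. As a minimal sketch of how such a response can be parsed and the coding for one comment looked up (the helper name coding_for is hypothetical; the field names and the two example entries are copied from the response above):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytr_Ugy_7Y_unDFwBajGenB4AaABAg.9rr3S8qJY1N9rr6MYG8Z-L",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzPlH9VTEnjWCp7ZJR4AaABAg.9rr2_6VOk2o9rrEkxaNZg-",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def coding_for(comment_id: str, raw: str) -> dict:
    """Return the coded dimensions for one comment id, without the id itself."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

codes = coding_for("ytr_UgzPlH9VTEnjWCp7ZJR4AaABAg.9rr2_6VOk2o9rrEkxaNZg-", raw_response)
print(codes)
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

This matches the Coding Result shown above for the displayed comment (responsibility ai_itself, reasoning consequentialist, policy regulate, emotion fear).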