Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I both agree and disagree - it might be a reflection of 'us' - but so is every individual human - they individually are a reflection of the other humans / society around them. In the same way that we still must be concerned with those individuals who can be problematic, we should be concerned the same with AI that possess the ability to 'learn'. I often see people saying it's just getting info from the internet, putting a spin on it or reforming it. What do we think human brains are doing? hint - it's a similar thing. We have inputs and lots of calculations / memory / decisions occur based on that input. No one 'feels' pain directly of others - we only feel it internally. Emotion is simply a problematic or positive stimulation internally that is derived from either fixed / known factors (like physical pain receptors) or learned - and there is no reason AI could not have the same thing (at some point not too far away). A large part of what makes us 'human' from this perspective is the fact we can 'forget' and 'prioritize' the retention of memory subconciously (we don't choose it). This allows us to act in a way that seems 'human' rather than like a 'computer' that is all knowing and unemotional - and this is often overlooked. With that in mind - any emotion AI would feel, may be inherently 'robotic' for this reason - unless the issue of memory is taken into account. For example - we forgive - because we 'get over' things - and part of that, is that we forget some of the intricate detail that annoyed us in the moment (just for instance).
youtube · AI Governance · 2023-07-07T05:2… · ♥ 5
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugy_7Y_unDFwBajGenB4AaABAg.9rr3S8qJY1N9rr6MYG8Z-L","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzPlH9VTEnjWCp7ZJR4AaABAg.9rr2_6VOk2o9rrEkxaNZg-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzPlH9VTEnjWCp7ZJR4AaABAg.9rr2_6VOk2o9rrJ8OSimgr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgyuulqXnUdcuQrAvZR4AaABAg.9rr2RTJpf509rrMxdtvY5U","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytr_UgwcH82ezGKnd-TSWvV4AaABAg.9rr1aKYpNlf9rrFUIWiLMa","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwcH82ezGKnd-TSWvV4AaABAg.9rr1aKYpNlf9rrH3ZC2URl","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytr_UgwcH82ezGKnd-TSWvV4AaABAg.9rr1aKYpNlf9rsPrc4JV1p","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwZpBx-PQSiM3SAnnJ4AaABAg.9rr0cY7pys89rrBtHpzA-M","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxNeMogGNDTjq9Azxx4AaABAg.9rr0aEqYbe89rr4ZqUom8p","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxNeMogGNDTjq9Azxx4AaABAg.9rr0aEqYbe89rrLN2E2Cme","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
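The raw LLM response is a JSON array of per-comment codes, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal parsing-and-validation sketch is below; the dimension names and the value sets are taken from the response above, but the full label vocabulary of the coding scheme is an assumption, so treat `ALLOWED` as illustrative rather than the project's actual codebook.

```python
import json

# Allowed labels per coding dimension. Dimension names come from the raw
# response above; the value sets are only the labels observed there and
# may be incomplete (assumption, not the project's real codebook).
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "user", "none"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation",
                "disapproval", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed, valid records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict with an id, or that uses a
        # label outside the allowed vocabulary for any dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = '[{"id":"ytr_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]'
print(parse_codes(raw))  # keeps the one valid record
```

Validating before storing is worthwhile here because LLM coders occasionally emit labels outside the scheme; silently dropping (or logging) those records keeps the coded dataset consistent with the codebook.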