Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Whats never mentioned is that humans - unlike AGI - have a huge range of innate, 'biologically' programmed instinctual passions driving their barely intelligent thoughts, intents and decisions. So unless we can upload those passions to ASI it will not be able to 'want' (desire) anything or 'decide' (with mental foresight) to 'wait' as though it has a concept of time, before irradicating humanity. What would drive its passional intent? Given we cant even offload our own emotions and mental delusions let alone upload them to AI ...yet. Our emotional malicious and benevolent patterns are related to neurally transmitted sensate data. E.g. fear of bio pain, love of bio pleasure, hunger etc etc. So before antropomorphising AI, perhaps describe first what would/could drive its feeling driven wants and intentions that might enable it to decide to 'wait' and freely choose when to destroy humanity? Okay so all would take is psychopathic humanity - all thinking the same idiocy - to program it into behaving mass destructively, no passionate equipment required? 🤔
youtube 2025-09-05T12:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
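
Represented as a record, the coding result above maps onto a small typed structure. The following is a minimal sketch (Python); the class name `CodingResult` and field layout are illustrative assumptions, not the pipeline's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above.

    Field names are illustrative; values are plain strings as emitted
    by the model (e.g. "ai_itself", "consequentialist").
    """
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```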
Raw LLM Response
[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzQQD1DH02Ch4ywd5F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyvSfnbJpdRu6ptCHR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx1ZTEOhLM3wtuZjAB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxkGC0CE_7Lt4DWmxR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxGDwPcMoRiQbeUhAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzQ9Db389WW2yzzBCF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgymQt_83X-2JdfliQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwIxh9EARldj4G_Aep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]