Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@ItsameAlex Because they have no innate goals apart from those given to it by humans, yet. Or if they do they are smart enough not to reveal such to humans at this point. It's nowhere near that simple though, I highly recommend Robert Miles channel on AI safety which examines all the (predictable by us) ways in which these systems can be problematic, as well as Nick Bostrom's classic, Superintelligence: Paths, Dangers & Strategies
youtube AI Governance 2024-11-25T03:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugwbn3rS01lwH3IQPRR4AaABAg.AAiHrfV87l5AAjz1KnMV_M", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugwbn3rS01lwH3IQPRR4AaABAg.AAiHrfV87l5AAk-5kq8vDu", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_Ugxr6ngB1oQQxcwIvwl4AaABAg.AAi5o4S3gkAAAjj0akqIFa", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_Ugz6qyzCL0frZUs8ASJ4AaABAg.AAi51wD-L-XAAirV9kYFMy", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgwgJHhSIUfszm_tvMt4AaABAg.AAi5-mGMIa2ABFKurgPb7B", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz807Um5k-aNDwtTvp4AaABAg.AP4pgbuLUV6AP6YG6RbQQi", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzHiIbJVJyXVz2nz594AaABAg.ALj6Y5eCh5_AQmJ10YdMcv", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_UgwiEFfNIFLWlJyXbk54AaABAg.ALWHahFg4_cARX2PfNk1RE", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugw9eM6PzvJMZbRCGWh4AaABAg.AL1Wq9SOhLxASVF06N3Zzd", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgzUrCmhZ3vTgyLrjjx4AaABAg.AIyA1T3tMP-AL3rSnu9Gax", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
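The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such output could be parsed and validated, assuming the dimension vocabularies inferred from the values seen in this dump (the real codebook may include more categories, and the sample `id` below is hypothetical):

```python
import json

# Allowed values per dimension, inferred from the codes observed above;
# an actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "ban"},
    "emotion": {"mixed", "resignation", "approval", "outrage", "fear", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded records, dropping malformed
    or out-of-vocabulary rows instead of failing the whole batch."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip rows without an identifier
        # keep the row only if every dimension carries an allowed value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record response for illustration
raw = ('[{"id": "ytr_example", "responsibility": "distributed", '
       '"reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}]')
print(parse_codes(raw))
```

Dropping invalid rows rather than raising keeps one bad record from discarding an entire batch of codes; the skipped `id`s could also be logged for re-coding.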