Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we get AGI tomorrow, within current state of AI control research, its 100% doom. The only question is when we actually get AGI. Based on current capabilities, and how all major AI players (Meta is not major compared to OpenAI, Anthropic and DeepMind) are all freaking out - it might be sooner than in a decade. Sounds pretty risky to me.
youtube AI Governance 2023-06-27T20:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugz8xg_TAUp50sGdgEh4AaABAg.9rPvpEz94vU9rTx70S0Rsz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rTlC_DGHlx","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rU14m4eHLF","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rhCV0ZL18D","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rPp6KkFuGT","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rU0BErm0H6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwMSBDoNzy8g3RLmlt4AaABAg.9rPH0awsbg09rj8XXtugv2","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugwp8jS3Ka-LbhS0UCx4AaABAg.9rPEb_4SgMm9rSm6Y2E2Km","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxB7Y9xAPQXXJSV6m94AaABAg.9rPDxNI2VJc9rQ-8TiYDMl","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwkyKlTs7O7KBb2pCV4AaABAg.9rP5RLTN4nr9rR7sXcgyOH","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
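The raw response above is a JSON array with one coding object per comment, keyed by comment id and carrying the four dimensions shown in the coding table. A minimal Python sketch of how such a response could be parsed and checked for completeness (the two entries are copied from the output above; the lookup id is the first one in the array):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment id.
# (Two entries copied verbatim from the response above.)
raw = """[
  {"id": "ytr_Ugz8xg_TAUp50sGdgEh4AaABAg.9rPvpEz94vU9rTx70S0Rsz",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwkyKlTs7O7KBb2pCV4AaABAg.9rP5RLTN4nr9rR7sXcgyOH",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "indifference"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Index the array by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coding and verify no dimension is missing.
row = codings["ytr_Ugz8xg_TAUp50sGdgEh4AaABAg.9rPvpEz94vU9rTx70S0Rsz"]
missing = [d for d in DIMENSIONS if d not in row]
print(row["emotion"], missing)
```

Indexing by id is what makes the per-comment view above possible: the same batch response serves every coded comment on the page.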