Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@vallab19 We're not "definitely on the threshold of IMMORTALITY". You say "there can be hundreds of thousands of what ifs", and with our current AI safety research, most of those are bad. We have no idea how to permanently control something smarter than us, or if that is even possible (Alan Turing didn't seem to think so). So unless we get that right, then we will achieve immortality alright, just not for humanity.
YouTube · AI Governance · 2023-05-11T05:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzmDQQc44Lj58f4mCN4AaABAg.9pXRwQCL_PU9pYLH6OcpJ_", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzJw0IzJ4H1tkh-tIh4AaABAg.9pXP3OXoQ_A9pXWJwBKLmA", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzJw0IzJ4H1tkh-tIh4AaABAg.9pXP3OXoQ_A9pXYm1vK9Wa", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgzJw0IzJ4H1tkh-tIh4AaABAg.9pXP3OXoQ_A9pZLJNlcoVo", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugx1jZ6EvdgN4_BEhK94AaABAg.9pXOTqsm72a9pXXkjq23yp", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgwmP897iusCok7sm894AaABAg.9pXOHBwQZfb9pYMmnEK_xQ", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwnOErK98mBnSq_CG14AaABAg.9pXLe5SUN8b9pddnBn4TL3", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxPMS3WM6iSFpSDagR4AaABAg.9pXLHovscSd9pYPgxPRsQL", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_UgykUL4pD3StP0Vktdt4AaABAg.9pXKi_NTZXU9q-F4aDrIYw", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxA4t3d2TSnwA0ujiF4AaABAg.9pXBE49xQU_9pXKR3o4naP", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
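The raw response above is a JSON array of coding records, one per comment, each carrying the comment id plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and a single comment's coding result looked up, assuming the response is valid JSON as shown; the helper name `coding_result` is illustrative, not part of any existing tool:

```python
import json

# Raw LLM response: a JSON array of coding records (one record shown here,
# taken verbatim from the response above).
raw = """[
  {"id": "ytr_UgzJw0IzJ4H1tkh-tIh4AaABAg.9pXP3OXoQ_A9pZLJNlcoVo",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Index the records by comment id for fast lookup.
records = {r["id"]: r for r in json.loads(raw)}

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_result(comment_id: str) -> dict:
    """Return the four coded dimensions for one comment id."""
    record = records[comment_id]
    return {dim: record[dim] for dim in DIMENSIONS}

result = coding_result("ytr_UgzJw0IzJ4H1tkh-tIh4AaABAg.9pXP3OXoQ_A9pZLJNlcoVo")
print(result)
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'fear'}
```

This matches the Coding Result table above: the record with that id codes the comment as developer responsibility, consequentialist reasoning, a regulate policy stance, and fear.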