Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe the one sci-fi quote he knows is the one by Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” Microsoft recently announced the goal of doing materials research worth 200 years of human advancement within the next 10-20 years by using AGI. That sure sounds magical; the question is what it will enable us to do. I doubt we end up in a utopia when one company has that much power. Not only did the AI advocates in this discussion make fun of concerns and downplay them (I assume because they fear societies would take away their toys), they also missed the whole point that we need to find solutions not just for immediate, well-known issues we already had and that are amplified by AI, like the manipulation of social media platforms. After the letter came out and Elon Musk was initially against it, he bought a bunch of GPUs to create his own AGI; whether to prove a point or to avoid being out-competed, I don't know. Just a few days back Amazon also invested a hundred million into AI development, and others, I would assume, will do the same as soon as they finally get that they are now in a sort of endgame scenario for global corporate dominance, with AGI being the tool to achieve it. This competition will drive the capabilities of AIs, not ethics.
youtube AI Governance 2023-06-27T18:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugz8xg_TAUp50sGdgEh4AaABAg.9rPvpEz94vU9rTx70S0Rsz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rTlC_DGHlx", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rU14m4eHLF", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rhCV0ZL18D", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rPp6KkFuGT", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rU0BErm0H6", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwMSBDoNzy8g3RLmlt4AaABAg.9rPH0awsbg09rj8XXtugv2", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugwp8jS3Ka-LbhS0UCx4AaABAg.9rPEb_4SgMm9rSm6Y2E2Km", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxB7Y9xAPQXXJSV6m94AaABAg.9rPDxNI2VJc9rQ-8TiYDMl", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwkyKlTs7O7KBb2pCV4AaABAg.9rP5RLTN4nr9rR7sXcgyOH", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]
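The raw response is a JSON array of per-comment codings, each carrying a comment id plus the four dimensions shown in the table above. A minimal sketch of how such a batch response could be parsed and indexed by id (the `index_by_id` helper and the two-entry sample, excerpted from the array above, are illustrative assumptions, not part of the original pipeline):

```python
import json
from collections import Counter

# Two entries excerpted from the raw LLM response above.
raw = '''[
  {"id": "ytr_Ugz8xg_TAUp50sGdgEh4AaABAg.9rPvpEz94vU9rTx70S0Rsz",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rTlC_DGHlx",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "resignation"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and map comment id -> dimension values."""
    rows = json.loads(response_text)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

coded = index_by_id(raw)
# Tally one dimension across all coded comments in the batch.
emotions = Counter(c["emotion"] for c in coded.values())
```

Looking up a single comment id in `coded` recovers exactly the Dimension/Value table shown above for that comment.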