Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@kittywampusdrums The majority of AI engineers at the leading labs say there is a significant chance of human extinction from AI. The Center for AI Safety (CAIS) put out a statement about mitigating the risk of human extinction from AI and it was signed by most of the top AI scientists in the world. Published AI researchers gave an average chance of 1 in 6 that AI would drive humans extinct this century. I also encourage people to actually learn how AI works. Read the actual papers. You'll learn that excepting rare cases where interpretability research gave us a clue, no one on earth understands the internals of modern AI systems. You'll also learn that LLMs contain abstract representations of the world, and they have internally coherent preferences, and that they are becoming more agentic (behaving as if they have goals). You can also learn about the principle of Instrumental Convergence discovered by AI Safety scientists, which argues that almost no matter what goal an agent has, there are specific subgoals it will always have, such as gaining power, self-preserving, gaining resources, and reproducing. (This was later mathematically proven, and then was observed dozens of times in independent experiments with current AI systems). Learn more about AI, and stop believing people when they tell you everything is definitely fine. The 5 most cited computer scientists on the planet say we're in significant danger.
Source: YouTube · AI Moral Status · 2025-04-27T04:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugz1H5JJzdwHQPJYo454AaABAg.AHOdwYbILlUAHOfvgFX6SY", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxK-l2ZP41loCLqNx94AaABAg.AHOcXBLemzKAHP-USNUptv", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxK-l2ZP41loCLqNx94AaABAg.AHOcXBLemzKAHS2RFtuu6R", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgzoRu3_W-UgofvRr5t4AaABAg.AHObNjMEWeOAHOgAuH_kyF", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHP00izhxzr", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHP24vHOMT_", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHPS3ysGTf3", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgylGmOEwcVrQcjo6H54AaABAg.AHOW14QCSVWAHPVIP6jii9", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgylGmOEwcVrQcjo6H54AaABAg.AHOW14QCSVWAHP_cRWJ4Cc", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_UgwM5b-WYKTTDKGSnUN4AaABAg.AHOVxjH4jcAAHRi7lYnI81", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
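The coding result shown above is recovered from the raw response by parsing the JSON array and looking up the record whose id matches the comment. A minimal Python sketch of that lookup, using the id and field names that appear in the response (the array is truncated here to the one matching record; variable names are illustrative, not from the pipeline):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Truncated to the record that matches the comment inspected above.
raw_response = '''[
  {"id": "ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHPS3ysGTf3",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

# Parse the array and index the records by comment id.
records = json.loads(raw_response)
codings = {r["id"]: r for r in records}

# Look up the coding for one comment and print its dimensions.
rec = codings["ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHPS3ysGTf3"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

Printed this way, the record reproduces the Dimension/Value table in the coding result (responsibility: distributed, reasoning: consequentialist, policy: liability, emotion: fear).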