Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You're right that people would have to give it "unregulated agency", in the form of robotic workers to maintain infrastructure, etc. But if you think it's crazy to think people would actually do that ... well, 10 years ago, experts thought it'd be crazy to let a conversational AI interact with every last Joe Schmoe on the planet at the same time. But here we are. There was a report earlier this year, "AI 2027," that laid out a couple hypothetical paths it could follow. They make the point that they can't predict the exact attacks an ASI would make, any more than I could predict the chess moves a grandmaster could make. So it reads a bit like science fiction. But experts all over the industry were singing its praises. The YouTube channel "Species | Documenting AGI" has made half their reputation just rehashing different aspects of that same report. It's even more melodramatic than the actual paper, but it covers the bullet points if you're interested. (I personally think the paper's authors chose 2027 as an end-of-civilization date because in _their_ heads it was the _earliest-possible_ year ASI could become a threat, and people in this field tend to be too bullish on timelines, IMO. And they wanted to scare policymakers straight while there was still time. But IMHO, even the people who wrote the report probably think 2030 is a more realistic year.)
youtube · AI Moral Status · 2025-10-31T00:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
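For anyone consuming these records programmatically, the result above has a fixed shape: four categorical dimensions plus a coding timestamp. A minimal sketch of that record as a Python dataclass (the class name is hypothetical; field names and example labels are inferred from this page, not taken from the project's code):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment. All dimensions are categorical labels."""
    responsibility: str  # who is held responsible: e.g. "user", "ai_itself", "none"
    reasoning: str       # moral frame: e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # policy stance: e.g. "unclear", "none"
    emotion: str         # dominant emotion: e.g. "fear", "approval", "outrage", "mixed"
    coded_at: str        # ISO 8601 timestamp of the coding run
```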
Raw LLM Response
[ {"id":"ytr_Ugyu6z4Pp0svDkQdioV4AaABAg.AOvWlkghdIeAOwHPKKoVXh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgzuZRURQSeeS-QzHsR4AaABAg.AOvWYTnzRcKAOwAgj_NNJj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytr_UgxT7RhFToA3B5KS5el4AaABAg.AOvVT1lAWuUAOvX28fpa8B","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugxm7-V2cw080X9sQZx4AaABAg.AOvVHzWnHuTAOwJ3QLWO2U","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOvlKIM-c07","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOwEBayQVOR","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOwFlVGWy1q","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOw5yAMjmCo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOw9OWiySM3","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOwB90dnOe1","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"} ]