Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's easy to blame the big tech leaders, saying they don't care about safety, blah blah blah. That's not it. Indeed there seems to be a consensus among tech leaders that some kind of constraint is needed. But I think they all kinda get the futility of any such attempt. The genie is already out. If Oppenheimer and everyone on the Manhattan Project had decided the prospect of completion was too horrific to continue, that would not decrease the likelihood of creation of that technology within a decade. It would only change who gets the technology. Same is true now. It's kinda worse. We can say the US better win the race or an evil autocracy will, but wait, the US is currently an evil autocracy. Oops. As an experienced AI architect myself, I don't think it's all that hard to make a safe AI. The prompt "serve the betterment of humanity" would probably be good enough. The sci-fi trope of overly strict adherence to a command (like "end human suffering" -> "kill everyone") isn't the issue. The real issue is that there are hundreds of highly competent players in this race, all with different goals and different levels of constraint, most unrestrained by any law or international treaty. So the goals are mostly in the set: "help|prevent China|US achieve AI|economic|military domination and achieve perfect communism|capitalism|nationalism|Gilead"
youtube · AI Governance · 2025-12-11T11:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzLaYMnzbpQaXPV7g54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz5z_Yg7AsBOZeZhih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzcZEpd0EdO7X2z1114AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgxyD_oaV2YtjV5kvap4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugyp3aTu5sVIOqXCoDN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxgog57TM0Kv23JzU14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugzp8MlnwiJrrQAPdpR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz2WKZcpiCPdDZ5_Ld4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz97yKBhSVU4FsK7Kp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzdS6jG4aqWc9EV03t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"} ]