Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The assessment of AI risk, often conceptualized as the probability of existential catastrophe, or P(doom), is not a simple calculation but a form of subjective Bayesian reasoning. In this framework, belief in AI's riskiness is continuously updated by new evidence, ranging from technological breakthroughs that demonstrate AI's superhuman abilities to the rapid, fluid nature of corporate and governmental responses. So while the risk of a catastrophe is not a formal mathematical absorbing state in the manner of a Markov chain, it is an irreversible outcome, and that irreversibility fundamentally shapes rational decision-making in the face of uncertainty. The crucial insight is that the probability is dynamic, constantly revised by a continuous stream of new information. The challenge of navigating AI risk is best understood as a multidimensional problem in which different actors, from corporations to nations, follow distinct trajectories, some of which could end in an absorbing state. The collective agency and awareness of these actors are the primary drivers of whether the global trajectory can be steered away from catastrophe. In this complex, path-dependent system, advanced AI itself serves as a vital analytical tool. By synthesizing vast amounts of information, identifying subtle patterns, and reasoning probabilistically about different outcomes, a large language model can help illuminate dangerous trajectories and inform the collective action necessary to mitigate risk and secure a positive future.
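The comment's framing of P(doom) as a belief revised by a stream of evidence can be sketched as repeated Bayes updates. The priors, likelihood values, and number of observations below are illustrative assumptions, not figures from the source:

```python
# Illustrative sketch (hypothetical numbers): subjective Bayesian updating of a
# risk estimate, in the spirit of the comment's P(doom) framing.

def bayes_update(prior: float, lik_if_risky: float, lik_if_safe: float) -> float:
    """Posterior P(risky) after observing one piece of evidence.

    lik_if_risky / lik_if_safe: probability of seeing this evidence
    under the "risky" vs. "safe" hypothesis.
    """
    num = prior * lik_if_risky
    return num / (num + (1 - prior) * lik_if_safe)

p = 0.10  # assumed prior belief that AI is dangerously risky
# Three observations (e.g. capability breakthroughs) that are more likely
# under the "risky" hypothesis than under the "safe" one:
for _ in range(3):
    p = bayes_update(p, lik_if_risky=0.8, lik_if_safe=0.3)

print(round(p, 3))  # → 0.678
```

Each observation multiplies the prior odds by the likelihood ratio (0.8/0.3 here), so a handful of risk-consistent observations moves the belief from 10% to roughly two-thirds, illustrating how dynamic such a subjective probability is.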
youtube AI Governance 2025-08-24T12:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxRMFRUam0CA2YoZ1p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxuEcHujOxS0fHOwcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzcI2zdfWJ4NpTHw7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5Lc6KDtmpA0Aijnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwJoahSIZUrX6W7xhx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyA65uBpwx0OPX9NdN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy-K-ZN7YpJCwHbLTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxu6it-4sdAbovVIlh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwnBsozd3f7bk0wQ6J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyDhNYmMTibP_QSiP54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]