Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I’m getting advertised AI generators while watching a video thats explaining to …
ytc_UgzawFXsb…
The problem is going to come with AI agents. Keep an eye and ear out for AI agen…
ytc_UgyLwV9EU…
I don't know about you but I am excited to get extorted and blackmailed by AI…
ytc_UgxA5B7u_…
No I'm not afraid of being lazy, in afraid of kids being taught by an ai and me …
ytc_UgyOLMLkw…
Sounds a bit like a more nuanced and updated version of what Asimov proposed ove…
ytc_UgzZqjoPm…
So the shortened statement based on the hypothetical risk of AI was signed by 3 …
ytc_UgyCNNF24…
The video is well made but it leaves out some relatively obvious stuff that woul…
ytc_Ugis80PWS…
honestly its better to use a specialized coding tool ai rather than use general …
ytr_UgxRWwzae…
Comment
The assessment of AI risk, often conceptualized as the probability of existential catastrophe, or P(doom), is not a simple calculation but an exercise in subjective Bayesian reasoning. On this framing, a belief about AI's riskiness is continuously updated by new evidence, ranging from technological breakthroughs that demonstrate superhuman AI capabilities to the rapid, fluid nature of corporate and governmental responses.
So while catastrophe is not a formal absorbing state in the strict Markov-chain sense, it is an irreversible outcome that fundamentally shapes rational decision-making under uncertainty. The crucial insight is that the probability is dynamic, continuously revised by a stream of new information.
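The updating process described here can be sketched as a one-line application of Bayes' rule for a binary hypothesis. The prior and the likelihoods below are purely illustrative assumptions, not estimates drawn from the comment:

```python
# Minimal sketch of subjective Bayesian updating of a risk estimate.
# All numbers are illustrative assumptions.

def update(prior: float, p_e_given_risky: float, p_e_given_safe: float) -> float:
    """Posterior P(risky) after observing one piece of evidence E."""
    num = p_e_given_risky * prior
    return num / (num + p_e_given_safe * (1.0 - prior))

p = 0.10                 # illustrative prior P(doom)
p = update(p, 0.8, 0.4)  # a capability breakthrough: likelier in the risky world
p = update(p, 0.3, 0.5)  # a strong coordinated response: likelier in the safe world
print(round(p, 3))       # → 0.118
```

Each piece of evidence multiplies the prior odds by a likelihood ratio, so the estimate rises or falls depending on which world the observation better fits.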
The challenge of navigating AI risk is best understood as a multidimensional problem in which different actors, from corporations to nations, follow distinct trajectories, some of which could lead into an absorbing state. The collective agency and awareness of these actors are the primary determinants of whether the global trajectory can be steered away from catastrophe.
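The absorbing-state analogy can be made concrete with a toy three-state chain; the transition probabilities are invented for illustration. The standard fundamental-matrix calculation shows why irreversibility matters: any persistent per-step risk makes eventual absorption certain, so steering the trajectory means driving those transition probabilities toward zero.

```python
import numpy as np

# Toy absorbing Markov chain. States: 0 = "safe development",
# 1 = "unstable race", 2 = "catastrophe" (absorbing).
# Transition probabilities are illustrative assumptions.
P = np.array([
    [0.90, 0.09, 0.01],
    [0.20, 0.70, 0.10],
    [0.00, 0.00, 1.00],  # once entered, never left: the irreversible outcome
])

Q = P[:2, :2]                     # transient-to-transient block
R = P[:2, 2:]                     # transient-to-absorbing block
N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix
B = N @ R                         # absorption probabilities from each transient state
t = N @ np.ones((2, 1))           # expected steps before absorption, ≈ [32.5, 25.0]
print(B.ravel(), t.ravel())
```

Here B comes out as all ones: as long as the per-step catastrophe probabilities stay positive, absorption is eventually certain, and only the expected time to absorption differs between starting states.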
In this complex, path-dependent system, advanced AI itself serves as a vital tool for analysis. By synthesizing vast amounts of information, identifying subtle patterns, and reasoning probabilistically about different outcomes, a large language model can help to illuminate dangerous trajectories and inform the collective action necessary to mitigate risk and secure a positive future.
youtube
AI Governance
2025-08-24T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxRMFRUam0CA2YoZ1p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuEcHujOxS0fHOwcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzcI2zdfWJ4NpTHw7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5Lc6KDtmpA0Aijnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJoahSIZUrX6W7xhx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyA65uBpwx0OPX9NdN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-K-ZN7YpJCwHbLTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxu6it-4sdAbovVIlh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwnBsozd3f7bk0wQ6J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyDhNYmMTibP_QSiP54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
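A raw response like the one above can be machine-checked before its codes are accepted into the results table. The sketch below is a hypothetical validator; the allowed code sets are assumptions inferred from the sample output, not the project's actual codebook:

```python
import json

# Hypothetical allowed values per coding dimension, inferred from the
# sample response above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw coding response and reject malformed rows."""
    rows = json.loads(raw)
    for row in rows:
        comment_id = row.get("id", "")
        # Comment IDs in the samples start with "ytc_" (or "ytr_" for replies).
        if not comment_id.startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad id: {comment_id!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim}={row.get(dim)!r}")
    return rows

sample = '[{"id":"ytc_abc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]'
print(len(validate(sample)))  # → 1
```

Validating at ingest time keeps a single malformed or hallucinated code from an LLM batch out of the coded dataset.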