Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
54:44 Another howler from Joscha. *Every* non-trivial prediction is an out-of-distribution generalization under the naive frequentist view of probability. There is no alternative! Predictable within-distribution generalizations are otherwise known as tautologies. Being out of distribution is nowhere near sufficient grounds for dismissing a prediction! In this second-rate frequentist understanding of epistemology, we need an undisturbed line of historical occurrences of alike catastrophes before we will admit a model of them is accurate, evidenced by historical induction. That is obviously an absurd strategy for avoiding catastrophe, but it is what frequentists demand. Bayesian epistemology is the only way to avoid catastrophe the first time around. Returning to "out-of-distribution": that is out of the distribution of historical induction, NOT out of distribution for evidence-bearing hypotheses. A simple analogy: a primordial black hole has presumably never impacted Earth. But if we detected one in our path, would we need to wait for 3 or 4 impacts before predicting its effects? Hell no. These effects are 99.999% predictable, despite never having occurred once in history. Why? Because we have a rigorous, well-evidenced set of load-bearing hypotheses about the underlying mechanics of gravitation. Similarly, we have ample evidence for the outcomes of adversarial agents where there is a dramatic asymmetry in power. We have near 100% certainty that our safety mechanisms are not sufficiently in place to bear the live stress test of a superintelligence. We have valid reasons for expecting non-aligned instrumental goals to emerge in optimization systems. We have near certainty that AI progress will continue apace. So what is sufficient to dismiss an out-of-distribution prediction? Evidence or argument that shows one of the load-bearing hypotheses of the prediction is inconsistent.
Sigh. The burden of proof for safety is squarely upon the accelerationist community, who implicitly require that, by incredible luck alone, no catastrophic harms occur. And clearly, catastrophic risk has to be shown to have exceptionally low probability if it is to be outweighed by unnecessary gains, however profitable. The subjective reward scale for material abundance is logarithmic in wealth, not linear: losing all your money is -100 points; doubling it is +10 points. Thus, the gains need to be near certain, and massive, to pay for the slightest risk of extinction. And finally, there is being an AGI "expectationalist" to the exclusion of any judgment of its suitability: a bizarre position to take, as it abdicates the responsibility of morality altogether. You expect it to happen, yet care nothing for the unprecedented harm or good it causes? How can one not care about that, unless they have no moral compass at all?
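The log-utility asymmetry the commenter invokes can be made concrete. A minimal sketch (the normalization, function names, and probability value are illustrative assumptions, not the commenter's):

```python
import math

def log_utility(wealth: float) -> float:
    """Logarithmic utility of wealth; diverges to -inf as wealth -> 0."""
    return math.log(wealth) if wealth > 0 else float("-inf")

w = 1.0  # normalized starting wealth (hypothetical)
gain = log_utility(2 * w) - log_utility(w)  # doubling: +ln 2, about +0.693
loss = log_utility(0.0) - log_utility(w)    # total ruin: -inf

def expected_utility(p: float) -> float:
    """Expected log utility of a gamble: probability p of doubling,
    probability (1 - p) of total ruin."""
    return p * log_utility(2 * w) + (1 - p) * log_utility(0.0)

print(gain)                        # ~0.693
print(expected_utility(0.999999))  # -inf: any nonzero ruin chance dominates
```

Under pure log utility, any strictly positive probability of total loss drags the expected utility to negative infinity, which is the formal version of "the gains need to be near certain, and massive."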
youtube AI Governance 2024-12-22T15:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyM7Xgbm37_5AM0hrt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwiWSjUvhMAVniM-nZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwZ9SWxKW-pl9A86N54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyM6_hipuy7cjiAoJt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx7YezauFgDUEJrRN94AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw6Yk7EBtPS0yzC0E94AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw79jog0zeJMBjRM7R4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugydx848OT3e5b2jVoR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-1YyJ1WEmPHZqv454AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwQoPdEBfrvHfAEKrx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
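A raw response like the one above has to be parsed and validated before its codings can be trusted. A minimal sketch of one way to do that; the allowed values are inferred from the JSON shown here, so treat the `ALLOWED` sets (and the function name) as assumptions rather than the tool's actual schema:

```python
import json

# Allowed values per dimension, inferred from the response above (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # drop rows missing the comment id
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]'
print(parse_codings(raw))
```

Rows with an out-of-vocabulary value are dropped rather than coerced, so a malformed model output surfaces as a missing coding instead of a silently wrong one.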