Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
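For scripted access outside the UI, a lookup by comment ID might look like the sketch below. This is a minimal sketch assuming the codings and raw model outputs have been exported to a JSON Lines file; the file name, field names, and export format are illustrative assumptions, not the tool's actual API.

```python
# Hypothetical offline lookup by comment ID.
# Assumes one JSON object per line with at least an "id" field plus the
# stored coding and raw LLM response; file and field names are illustrative.
import json


def lookup_raw_response(path: str, comment_id: str) -> dict | None:
    """Return the stored record (coding + raw model output) for one comment."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None


if __name__ == "__main__":
    record = lookup_raw_response("coded_comments.jsonl", "ytc_UgyM7Xgbm37_5AM0hrt4AaABAg")
    print(json.dumps(record, indent=2) if record else "not found")
```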
Random samples — click to inspect
- ytr_Ugzc7f5C6… · AI wail let the rich get richer and the poor will be poorer. That is what will …
- rdc_mymrkfl · > some rental cards cover what your car insurance does not cover. Unfortu…
- ytc_UgzXVKeBS… · Yes - I agree 100% that AI lacks accountability. I had enterprise ChatGPT misse…
- ytc_UgjqN_1LJ… · If it's okay to make a conscious robot, is it also okay to make a human-animal h…
- ytc_UgxlWemKi… · Maybe an AI is running Google using people as a front and this guy was onto it.…
- ytc_UgzcsaPku… · ImAgine how F’ed we will be when Reparations for AI slavery comes around… humans…
- ytc_UgxmRSQyg… · Don't listen them, they are not true artist if they use AI for draw instead usin…
- ytc_UgydJHcVV… · the thing is most AI "artists" don't even actually know what's going on to produ…
Comment
54:44 another howler from Joscha. *Every* non-trivial prediction is an out-of-distribution generalization, if using the naive frequentist interpretation of probability. There is no alternative! Predictable within-distribution generalizations are otherwise known as tautologies. Being out of distribution is not nearly a sufficient reason for a prediction to be dismissed!
In this second-rate frequentist understanding of epistemology, we need an unbroken line of historical occurrences of similar catastrophes before we will admit a model of them is accurate, evidenced by historical induction. That is obviously an absurd strategy for avoiding catastrophe, but it is what frequentists demand.
Bayesian epistemology is the only way to avoid a catastrophe the first time around.
Returning to "out-of-distribution": that is out of the distribution of historical induction, NOT out of distribution for evidence-bearing hypotheses. A simple analogy: a primordial black hole has presumably never "impacted" Earth. But if we detected one in our path, do we need to wait for 3 or 4 impacts before predicting its effects? Hell no. These are 99.999% predictable, despite never having occurred once in history. Why? Because we have a rigorous, well-evidenced set of load-bearing hypotheses about the underlying mechanics of gravitation.
Similarly, we have ample evidence for the outcomes of adversarial agents where there is a dramatic asymmetry in their power. We have near 100% certainty that our safety mechanisms are not sufficiently in place to bear the live stress test of a superintelligence. We have valid reasons for expecting non-aligned instrumental goals to emerge in optimization systems. We have near certainty that AI progress will continue at pace.
So what is sufficient to dismiss an out-of-distribution prediction? Evidence or argument that proves one of the load-bearing hypotheses of the prediction is inconsistent. Sigh
The burden of proof for safety is squarely upon the accelerationist community who implicitly require that, by incredible luck alone, no catastrophic harms occur. And clearly, catastrophic risk has to be shown to have exceptionally low probability if it is to be balanced by unnecessary gains, however profitable. The subjective reward scale for material abundance is logarithmic with wealth, not linear. Losing all your money is -100 points; doubling it is +10 points. Thus, the gains need to be near certain, and massive, to pay for the slightest risk of extinction.
And finally, there is being an AGI "expectationalist" to the exclusion of any judgment of its suitability. That is a bizarre position to take: it abdicates the responsibility of morality altogether. You expect it to happen, yet care nothing for the unprecedented harm or good it causes? How can one not care about that, unless they have no moral compass at all?
youtube · AI Governance · 2024-12-22T15:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyM7Xgbm37_5AM0hrt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwiWSjUvhMAVniM-nZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZ9SWxKW-pl9A86N54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyM6_hipuy7cjiAoJt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7YezauFgDUEJrRN94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw6Yk7EBtPS0yzC0E94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw79jog0zeJMBjRM7R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugydx848OT3e5b2jVoR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-1YyJ1WEmPHZqv454AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwQoPdEBfrvHfAEKrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
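To consume a raw response like the one above programmatically, the sketch below parses the model's JSON array and indexes the codings by comment ID. The four dimensions are taken from the response itself; the allowed value sets are only those observed in this one sample and are almost certainly incomplete, so treat them as placeholders rather than the project's actual codebook.

```python
# Parse a raw LLM response (a JSON array of per-comment codings) and flag
# values outside the expected sets. The value sets below are inferred from
# the single sample above; they are placeholders, not the real codebook.
import json

EXPECTED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}


def parse_raw_response(raw: str) -> dict[str, dict]:
    """Index codings by comment id, warning on unexpected dimension values."""
    codings = {}
    for item in json.loads(raw):
        comment_id = item["id"]
        for dim, expected in EXPECTED.items():
            if item.get(dim) not in expected:
                print(f"unexpected {dim}={item.get(dim)!r} for {comment_id}")
        codings[comment_id] = {dim: item.get(dim) for dim in EXPECTED}
    return codings
```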