Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I am so glad someone can be conscious of the reality of AI and come up with solu…
ytc_UgwD_wQwq…
People from other countries will buy A.I. production, what happens to workers in…
ytc_UgzlPXonn…
@serval_ssbm Yeah, that IS a key quote! The point of the video is to talk about …
ytr_Ugyrrre54…
AI is Bias so that is a bad idea, no transparency on it's learning, really dumb …
ytc_UgyPa4MeM…
I think you’re right there but i think it all depends on “evolution” self updati…
ytr_Ugz52rg38…
Those protests bespeak of the embedded deep ignorance of the nowadays world. AI …
ytc_Ugw7Qms8o…
If you you AI to rewrite a bullet point list into formal email and second party …
ytc_UgzvihpI8…
This host is definitely AI right? His oldest videos have a different voice entir…
ytc_UgzebyrnE…
Comment
I know Liron will read this, so I find it useful to point out some things that I feel get ignored in the debates on the subject:
1. the concept of the Singularity as a blocking apparatus to attaining knowledge also applies to ASI and AI risk - there is no useful information to extract from beyond the event horizon, because the complex system of ASI is computationally irreducible and things have to actually play out in reality;
2. all the valid logical arguments for existential risk are predicated on the current (highly vulnerable) biological substrate of humanity - if more philosophical and material investment were made in human augmentation, this would materially change the p-doom, because by definition we would not be
a. completely separate from substrate agnostic intelligence (initially ASI implemented as digital)
b. so specialized as biological beings as to be completely vulnerable to environment equilibrium or certain info hazards like molecular systems (bio / chemical /weapons)
3. so many people frame it as a coordination problem (what decisions we make politically will change the outcome), but the true risk comes from the very low capacity of individual biological humans to process new information and adapt to change. Hansen and Yud see this clearly - it is the speed of change that separates us from the ASI "it" and creates the risk - this cannot be solved through coordination, and I see no real effort put into creating a culture of real human accelerated evolution or targeted augmentation.
It's the speed of change that creates the whole basis of existential risk, and this can only be solved by working directly on the way human existence is implemented in the material substrate. Stability in biological implementation and cultural coordination creates more risk - and this is how you get (plausible) arguments of near certain doom. IMO we're not doing ourselves any favors by not reframing the conversation towards the urgent need of human self change.
youtube
AI Governance
2025-08-24T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz71U6yLBz5XwYL5rh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzX2_r_rsgNpz_zYHl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOvtG9hU6Lv4ly_Xh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIv7vGz6fWTfRzXbN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVkZ0rq_8FMqIGaCx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyK9LJ9FXH4SAHsI0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7QodP4StKAADkh2Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw6xYBBLH8IwOI1teF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwnhibF87aJiHMvmSJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzDajMnUCw2UN4CrZ94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
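The response above is a JSON array with one coding record per comment ID, which is what makes the "Look up by comment ID" box possible. A minimal sketch of that lookup, assuming the response parses as shown (the `lookup` helper and the two-record `raw_response` excerpt here are illustrative, not the tool's actual code):

```python
import json

# Excerpt of the raw LLM batch response: one coding record per comment ID
# (values copied from the response above).
raw_response = """[
  {"id":"ytc_Ugz71U6yLBz5XwYL5rh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyOvtG9hU6Lv4ly_Xh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""

# Index the records by comment ID so a single coded comment can be
# fetched directly instead of scanning the whole array.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if unknown."""
    return records[comment_id]

print(lookup("ytc_UgyOvtG9hU6Lv4ly_Xh4AaABAg")["emotion"])  # indifference
```

Keying on `id` assumes each comment appears at most once per batch; if the model ever repeats an ID, the later record silently wins in the dict comprehension.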