Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgwJKht_f… : "So this made me remember when YouTube put a Ai description on a video about wher…"
- ytc_UgzSCnWx1… : "Very interesting that he didn't want to think about how AI & Technology would be…"
- rdc_cthq57j : ">2015 Have you reserved your copy of Windows 10 yet? Hahaha. I immediately t…"
- ytr_UgyeyE6o3… : "Sounds like AI caused a lot of Xtra work I love it let these greedy bastards fal…"
- ytc_UgxI4QS6C… : "Then they will be forced to open an OF since AI will take a lot of comfortable o…"
- ytc_UgwA_AqZf… : "No divorce settlement, no child support, no losing 50% of what you worked for, n…"
- ytc_UgxXA3t6K… : "as soon as i knew about ai i knew there would be a risk of it wiping us out, as …"
- ytc_Ugzq7xQCG… : "If your young and using AI, then how can your brain develop when there is no fri…"
Comment
The video is interesting but, as someone who is working on developing AI tools, there is a massive chasm between the AIs of today and an ASI that has the means to kill us. First of all, AIs do not "think". AIs have no idea of what a human or an AI is. All they have is a vector map of tokens/concepts that "human"/"AI" is related to.
When posed with these fictional scenarios, you gave in the video, it can be argued that AI engages with agentic misalignment simply because it is fed a ton of human data, with examples of humans exhibiting self preservation or killing others when aligned with previous human "goals" or "aims". Admittedly, this needs more research, but it's a leap of logic to claim on the back of this that AIs can "reason" and "think". I know that this is never explicitly stated, but you use language that insinuates as such.
The reason it is so important to remember that AIs cannot "think" is because, if all AIs do is adjust weights and probabilities based on human generated data, then there is a solid argument that AIs can never become more "intelligent" than humans. This is because AI will never be exposed to data generated by a superhuman intelligence (SI), so how could it possibly produce any output based on this fictional SI?
Overall, I think this topic of discussion is very worthwhile and sorely needed in a non-doomer manner. AI companies do need proper regulating, but I think the danger that AI will become superhumanly intelligent, thus potentially triggering the exponential ASI boom, is massively overblown. In my opinion, issues such as companies attempting to lay off everyone to replace them with AI, copyright laws/content theft and people giving up critical thinking skills to AI (which remember cannnot think) are far more salient than the danger of ASI, at least in the short term.
youtube · AI Governance · 2025-08-26T15:2… · ♥ 108
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxB-7V4zW9pint2Llx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzayTktYQLHMCwZLVp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxtkYL7gALyAeBzomF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1onCVK2MzAw1C2u54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwI9p6oBenlaFOZWxJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
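The raw response above is a JSON array in which every record carries a comment `id` plus the four coded dimensions, which is what makes the "look up by comment ID" step possible. A minimal sketch of that lookup, assuming only the field names visible in the output (the allowed value sets are guesses inferred from the coding table, not the project's real codebook, and `index_codings` is a hypothetical helper):

```python
import json

# Allowed values per dimension: an assumption inferred from the values
# seen in the displayed codings; the actual codebook may define more.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist"},
    "policy": {"none"},
    "emotion": {"fear", "indifference"},
}

# Two records copied from the raw response above.
raw = '''[
{"id":"ytc_UgxB-7V4zW9pint2Llx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1onCVK2MzAw1C2u54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

def index_codings(payload: str) -> dict:
    """Parse a raw LLM response and key each coding by comment id,
    dropping any record whose values fall outside the allowed sets."""
    out = {}
    for rec in json.loads(payload):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[rec["id"]] = rec
    return out

codings = index_codings(raw)
print(codings["ytc_Ugy1onCVK2MzAw1C2u54AaABAg"]["emotion"])  # indifference
```

Validating each record before indexing means a malformed or hallucinated coding is silently excluded rather than surfacing later as a bad lookup result.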