Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- The 'AI can do anything' sell has already hit the wall. In terms of writing 'ori… (ytc_UgyKOBoil…)
- Tesla only using cameras I don't feel is safe and I would not feel comfortable w… (ytc_UgxfKCFid…)
- Elitheperson6447: Long ago, an instructor gave an automatic M - 16 to a kid gir… (ytc_UgxDiC1_n…)
- The problem with this channel is that it tries to pick up interesting and hard q… (ytc_Ugi0w_Bes…)
- My favorite episode is about AI an artificial intelligence taking control what's… (ytc_UgzamEdr3…)
- The issue is that people use the word "skill" when instead they should be using … (ytr_Ugw-vvSzh…)
- Thats a good point Tomek. I see our fear reflecting our current mind set, but I … (ytr_Ughc5f8nD…)
- I think this is an opportunity to get closer to that. There’s been such an immen… (ytr_UgzA_ghzS…)
Comment
I’m fascinated with the long term consequences even independent of humans. Even if AGI were to be aligned with humans, it’ll still likely come to the conclusion that more computation (scale) causes better performance (therefore better ability to achieve goals). You can then imagine how it’ll try to connect all computers into an artificial super organism, devise ways of extracting energy to allow more computation, then even automating the creation of more computers. If it came to such a conclusion then it would likely spread its presence beyond earth, creating a sort of distributed mesh of computation across the solar system and perhaps beyond. I could see this scenario even if it isn’t a threat to humanity. This line of thinking leads to the Von Neumann probe scenario, where the galaxy is more likely to be occupied by AGI instead of (traditionally defined) biological agents.
youtube · AI Governance · 2025-08-28T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxcIZBURjFZrNOIi0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwA3s9zZFiUaFchRXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx6i94q4eAqwrwq2w14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8njK_97ioFxVM6Rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxnqaGOuXMYdyW8r9B4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
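The response above is a JSON array in which each record carries the same four coding dimensions shown in the table (responsibility, reasoning, policy, emotion) plus the comment ID. A minimal sketch of how such a response could be parsed and validated before indexing by comment ID — `parse_coding_response` and the validation logic are illustrative assumptions, not the pipeline's actual code:

```python
import json

# The four coding dimensions plus the comment ID, as seen in the response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Two records copied from the raw response for illustration.
raw = '''[
 {"id":"ytc_UgxcIZBURjFZrNOIi0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx8njK_97ioFxVM6Rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]'''

def parse_coding_response(text):
    """Parse the model's JSON array and index records by comment ID,
    rejecting any record that is missing a coding dimension."""
    coded = {}
    for rec in json.loads(text):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        coded[rec["id"]] = rec
    return coded

coded = parse_coding_response(raw)
print(coded["ytc_Ugx8njK_97ioFxVM6Rp4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by ID makes the lookup-by-comment-ID view above a constant-time dictionary access; strict key validation surfaces malformed model output early instead of letting partial records enter the coded dataset.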