Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
- ytc_UgwcE63j_… : This is exactly what the 1% want to be happening right now. There is already so …
- ytc_Ugx5CqGel… : I hope truck drivers dont lose their jobs so soon. They are legends travelling t…
- ytc_UgyXgexlI… : The AI one is too erratic and smooth-looking to be an actual professional lookin…
- rdc_ljbbths : So if you have worked for a huge company in the west can you go over there and a…
- ytc_UgxvIl8M_… : I'm of both views, that LLMs cannot get us to AGI and laughably ASI. But LLMs as…
- ytc_UgzSGaxZF… : This is why beauty is only skin deep😮😮😮😮😮😮😮 this is why we got real human monste…
- ytc_UgyUvCrlG… : What did you expect, it is AI developed for Florida. The AI was doing what it …
- ytc_UgwlzTpZc… : Interesting interview! It's fascinating to see how AI responds in these situatio…
Comment
If you believe that experience is irreducible to objective processes, then no simulation can create it, and no creator engineer can create or simulate it. Also, with subjectivity being an intrinsic part of intelligence, you realize that there has to be something that it’s like to be super intelligent (insofar that’s possible). Then, if the objective physical world doesn’t produce experience, and therefore, intelligence or super intelligence, then for intelligent agents to interact (as they clearly do), there needs to be a common non objective non physical substrate that facilitates these interactions, through which the advantages of super intelligence over human intelligence don’t really matter. In other words, super intelligence is limited by our ability to perceive it (because beyond which it’s non experiential), which means the singularity can’t happen. AI will become more intelligent than us, but would remain at a level where theoretically we could still understand its brilliance. In other words, there is nothing beyond the event horizon. This doesn’t do much to protect us from AI, unemployment, etc, but philosophically, it limits the kind of problems we need to consider.
youtube · AI Governance · 2025-09-06T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxHkm2PDEr6PpUnNv14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzgiHSSF3M1sz-j93N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_OawBxUNFNfTfq594AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzTRzT2PdkDOvYgRnt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwrGtO-2_p-mJqKGC94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKkYe594k_EfIPtkB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzHo-BCUiRaUzMyD-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy2nY-Foli2iYF8vxt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugypzl06dnsVP7Ni1wZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwsCtt7Z78oyV70mex4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
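The raw response is a batch: one JSON object per coded comment, and the Coding Result table above is the record whose `id` matches the inspected comment. A minimal sketch of that lookup step might look like the following (the function name, the `DIMENSIONS` tuple, and the two-record sample payload are illustrative assumptions, not the tool's actual code):

```python
import json

# Illustrative subset of a raw batch response like the one shown above.
raw_response = """
[
  {"id":"ytc_UgxHkm2PDEr6PpUnNv14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_OawBxUNFNfTfq594AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(response_text: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID from a batch response."""
    for record in json.loads(response_text):
        if record.get("id") == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(f"no coding found for {comment_id}")

# Pull out the record that would populate the table above.
coding = lookup_coding(raw_response, "ytc_Ugz_OawBxUNFNfTfq594AaABAg")
print(coding)
```

In practice the response would first need validation (well-formed JSON, known label values, one record per submitted comment) before its rows are stored with a "Coded at" timestamp.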