Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- AI: "I see you're trying to bypass my security protocols again, Psychoanalytix."… (rdc_nufyscs)
- ChatGPT is just useless information. Humans need to know the identity of the in… (ytr_Ugwisk3hy…)
- @eddyrhinehardt2080 No they did not. Industrial automation is the main reason w… (ytr_UgxYmsA1x…)
- Yes. Very simple solution. Only runs on a generator and has no wireless capabi… (ytr_UgyZYLQlf…)
- It is nefarious. Google (Youtube) is all in on the surveillance state agenda. Th… (ytc_UgzvU20ta…)
- Nonetheless maintaining ai is way too expensive using such a large amount of res… (ytc_UgwVRz05g…)
- Hello artist who spent years refining their craft. I can type "big booba girl" i… (ytc_UgzsfmmaL…)
- He says it and then you don’t really respond, but then you ask how he thinks it … (ytc_UgytnQVm9…)
Comment
Honestly, I think we're exaggerating the intelligence aspect of current AI architectures. We are not seeing an exponential, or even linear, improvement any longer, and even the best models struggle considerably with large contexts and a large number of tools. What this means, to me, is that just because we have specialized expert models that are very capable, we can't necessarily combine them into a super-"intelligence". Even if we could, there are so many hurdles ahead that I can confidently say that the time scales mentioned are fantasy. That doesn't mean it can never happen, but not in 2027 or 2030, and most likely not with current technology.
On the other hand, a super-unintelligence may be worse.
youtube · AI Governance · 2025-09-04T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxzvnWv9jEI7F9X5014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOmppeFaCHuNEtfvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwwIsH_UwJpOmaH64Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzs-Q25fuST-zUhehp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxy8PH3ErsDo9qWhL14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwMJmT5OdzzkSSaRl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyUdzh2HlLXgqO1CSZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzKu6rvKRc3oC8WvBh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzNWZBXZM5WOCZWT_Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzgIma0s0w1NvQ_2XF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
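A raw response like the one above can be parsed into a per-comment lookup table, which is what the comment-ID lookup implies. The sketch below is illustrative, not the tool's actual code: `parse_response` is a hypothetical helper, and the allowed value sets are assumptions inferred from the values visible in the records above.

```python
import json

# Allowed values per dimension — an assumption inferred from the records
# shown above, not a confirmed schema.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: coded fields},
    rejecting any record whose values fall outside the expected sets."""
    coded = {}
    for rec in json.loads(raw):
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {field}={rec.get(field)!r}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Usage with a single illustrative record (hypothetical ID):
raw = ('[{"id":"ytc_abc","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_abc"]["emotion"])  # fear
```

Keying the records by `id` makes the "look up by comment ID" step a plain dictionary access, and the validation pass surfaces any record where the model drifted outside the coding scheme.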