Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Wait i just realized... if an ai was to draw random scribbles with an idea of a …
ytc_Ugw1cB5iH…
Ohhh wow 😂😂😂😂😂 never commented before Ur just like me 😂😂😂😂who unplugged u 😂😂😂😂 I…
ytc_UgygPeauP…
AI is satanic, the Book of Revelation literally warns about it. True followers o…
ytr_UgxKffCCA…
My mind: It's okay, click on new comments, you won't see any morons acting like …
ytr_UgwOwc43a…
I have taken dozens of rides in Waymos all over the covered area in LA and I can…
ytc_UgwnCJe3z…
the narcissism in the left wing with their wannabe art degrees is the root of th…
ytc_Ugz-8XHX0…
One problem I have with this. (Beyond skepticism of how fast AI actually progres…
ytc_UgyLC5BUR…
1. AI isn’t creative and will never be so it cannot innovate.
2. Someone has t…
ytc_UgxqOiptq…
Comment
In 2023 it would all happen in one year unless we "paused" AI. Now it is 2027 all of a sudden. Are we really buying any of this? Super intelligence is pseudoscience. AI (deep learning) models themselves improve linearly, not exponentially, and then typically show diminishing returns until someone comes with something like self-attention transformers. And then, again, you run into diminishing returns eventually. Scaling only gets you so far. This is expected on a mathematical level. There is no recursive self-improvement and there is no reason to think that it will somehow appear. There are many reasons to maintain the contrary: that it will not happen with how these models work on a fundamental level.
youtube
AI Jobs
2025-11-18T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
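The coding result assigns one value per dimension. A minimal validation sketch, assuming a schema whose allowed values are inferred only from the codings visible on this page (the real codebook may include values not shown here):

```python
# Hypothetical allowed values per coding dimension, inferred from the
# samples on this page; the actual codebook may be larger.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage"},
}

def validate(row: dict) -> list[str]:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if row.get(dim) not in allowed]

# The coding shown in the table above, as a record (ID shortened here).
row = {"id": "ytc_...", "responsibility": "none",
       "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
print(validate(row))  # [] — every value is in the inferred schema
```

A non-empty return value flags a coding the LLM produced outside the expected label set, which is worth surfacing before the record is stored.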
Raw LLM Response
[
{"id":"ytc_UgwdcAUbnUKDgmHVft94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXYXrc3XATrUJEXjd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNv4cecQvpXZlpc4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfUHsbo9ysAeo-yYt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgNTXbYfYuKgNQEeR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDspDXpjMz33HU2HJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVXj_MsdLSD3it0Kx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzm8F22H7rsDh7w_Dp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyspw7vxtTOev8fU7B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw_AM12WPZWioauFTN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
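The raw response is a JSON array of per-comment codings. A minimal sketch of parsing it into a dictionary keyed by comment ID, which is all the "look up by comment ID" view needs (the two records are copied verbatim from the response above; nothing else is assumed):

```python
import json

# Two codings copied from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgwdcAUbnUKDgmHVft94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxgNTXbYfYuKgNQEeR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

record = codings["ytc_UgxgNTXbYfYuKgNQEeR4AaABAg"]
print(record["responsibility"], record["emotion"])  # company outrage
```

Because each array element carries its own `id`, the batch response can be joined back to the original comments without relying on ordering.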