Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @chaleirafuracao2000 Honestly, I only have guesses. My entire division at a medi… (`ytr_UgznwH3L_…`)
- The one thing that makes it seem legitimately unconscious is that it was unable … (`ytc_UgxRbTPJ2…`)
- 6:25 that comment really does make it clear why they want to do it, because huma… (`ytc_UgzVelG03…`)
- As a visual designer, I've jumped on the generative AI image trend, but it was g… (`ytc_UgyISE7l_…`)
- Damn. You got some deep Robot issues. This title can't just pop up to a normal h… (`ytc_UgyZlp4vU…`)
- AI's in life or medicine are a VERY bad idea. They do not help us. They make us… (`ytc_UgxcmFB4Z…`)
- More anti-china propaganda by "throwaway" reaching multiple thousands votes on t… (`rdc_gx722zs`)
- Elons also making a chip to go in people's brains that will directly connect the… (`ytc_Ugx--zHpn…`)
Comment
I'm a dev, wanted to work in AI as a kid before it blew up and ended up being everything wrong a few of us nerds 10 years ago warned about.
We don't KNOW for sure how humans learn from nothing. We have no precise, definitive, universal and verified answer that we can put in a single simple sentence.
If we knew, we would have made a REAL AI or created human brains from nothing just to study them.
There are even people out there trying to replicate sort of brains to see if we could use them in computing, especially due to how fast our brain processes things.
That's why calling it "AI learning from humans" is wrong. It's not learning and it's not "artificial intelligence". It's fancy algo and a whole ton of computing power and data.
youtube · Viral AI Reaction · 2025-10-20T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgykO9NhoY3DOPmGpgh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCgG2DQ8VttZ9asE54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyDVPhyD7GW8_BM_yp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxc4OutHd9FUOd03ld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxD0KDwW5bS18RhxUB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzaU465lFz17Ymvutx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxXTro0tkne3J4tRYt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxuE4pdHuKWtzbmDNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwTKLhDZcpAbLUlCsZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzAg0Aw46_zzu-OaZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
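The raw response is a JSON array of coding records, one object per comment, keyed by `id`. A minimal sketch of the lookup-by-ID step (the function name `lookup` and the inline `raw` string are illustrative assumptions, not part of the actual pipeline):

```python
import json

# Illustrative raw model output: a JSON array of coding records.
# Only two records are shown here for brevity.
raw = """
[
 {"id":"ytc_UgzaU465lFz17Ymvutx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
 {"id":"ytc_UgykO9NhoY3DOPmGpgh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

def lookup(raw_json: str, comment_id: str):
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    return by_id.get(comment_id)

row = lookup(raw, "ytc_UgzaU465lFz17Ymvutx4AaABAg")
print(row["responsibility"], row["policy"], row["emotion"])
# → developer regulate resignation
```

Building the `by_id` dictionary once makes repeated lookups cheap when inspecting many comments from the same batch.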