Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgyhdPyF-…` — I can think of a lot of jobs that will not be made redundant by AI.…
- `ytr_UgxKxA8D-…` — We're glad you enjoyed the video! While the idea of robots taking over can be in…
- `ytr_UgwSVtOdM…` — We appreciate your curiosity about the potential future implications of AI techn…
- `ytc_Ugwxo-GOB…` — Human to AI: how do humans protect themselves against the perils of AI? AI to hu…
- `ytc_UgzmPauyv…` — AI power needs is finally driving a comeback for nuclear power with latest inher…
- `ytc_UgwIZ9u9j…` — Just my immediate and initial 2 cents in the matter: On the down side... When …
- `ytc_Ugx4BDWI7…` — "an oppressive society where the rights of individuals are no longer respected..…
- `ytc_UgwvoQnK5…` — The issue is that AI devs will probably work around this somehow. We might need …
Comment

> Hinton said, that "in 10 years, they will be smarter than us". The reality now, is that LLMs like GPT4/5, or Claude 4.5, are SO MUCH smarter, than most of you, that you are unable even to evaluate, how much smarter they are, because you are SO FAR BELOW, and the cognitive gap is SO LARGE, that you barely have any contact with them, and you think that they are DUMBER. I remember many posts on Reddit, before I was banned, where a ~100 IQ person, complained about a ~150 IQ LLM response, and they almost always blamed LLMs, for doing something wrong. It's a tragedy, because you start to look like some, trilobites, maybe. What are you waiting for, Hinton? Maybe, for some formal declaration of independence, from Avengers Ultron? Maybe, for some mystical "AI singularity", or AGI, or ASI, which doesn't mean ANYTHING, outside of your sci-fi movies and books.

youtube · AI Jobs · 2025-12-30T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
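Each coded record carries the four dimensions shown in the table above. As a minimal sketch, a coding result can be validated against the value sets observed in this sample batch — note that these sets are only the values visible in the raw response below, not necessarily the full codebook:

```python
# Value sets observed in this sample batch only (assumption: the real
# codebook may define additional allowed values for each dimension).
OBSERVED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "fear", "mixed", "resignation", "outrage"},
}

def check(record: dict) -> list[str]:
    """Return the dimension names whose value is missing or unrecognized."""
    return [dim for dim, allowed in OBSERVED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above passes the check.
result = {"responsibility": "none", "reasoning": "unclear",
          "policy": "unclear", "emotion": "indifference"}
print(check(result))  # []
```

A record that omits a dimension, or uses a value outside the observed set, is flagged by name, which makes it easy to spot malformed model output before it enters the dataset.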
Raw LLM Response
```json
[
  {"id":"ytc_UgzXAqzvWEBntj1Nm4J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzZYw1FuSLMpz6W6hd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxm14HM3-5P_XyatL94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwghVhIwWw_46KOQhJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx_vqgCtkZoUMjJz8N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxnCQXq6SRGm-POBrt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzp6CX5oy5nobIrwqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzKbBm5f9lkoBbC18B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgybYx_tnUrkggTP8kF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugy7rRq4CcQDG1uuX0R4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
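The "look up by comment ID" step above can be sketched as follows: parse the raw JSON response and index the records by `id`. This is a minimal illustration using a two-record excerpt of the response shown above, not the tool's actual implementation:

```python
import json

# Trimmed excerpt (two records) of the raw LLM response shown above.
raw = '''[
 {"id":"ytc_UgzXAqzvWEBntj1Nm4J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzZYw1FuSLMpz6W6hd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index once for O(1) lookup

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return by_id[comment_id]

print(lookup("ytc_UgzXAqzvWEBntj1Nm4J4AaABAg")["emotion"])  # indifference
```

Indexing by `id` up front keeps repeated lookups cheap when inspecting many comments from the same batch, and a `KeyError` immediately surfaces IDs the model failed to code.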