Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples
- ytc_UgxeDRliX…: "The videos from Tesla fan boys are as reliable as company commercials - ie not a…"
- ytr_UgxaZT3do…: "@TopMusicAttorney I posted this in the main comments before seeing your respons…"
- rdc_ohz7b9k: "Shoutout to the AI for 'freed the slaves now free the lobby.' I didn't realize L…"
- ytc_UgyF2rgsS…: "This is amazing. Every artist should be doing this. The art theft people are usi…"
- ytc_Ugz8hjuSz…: "Much of the fear over AI reminds me of the Y2K hysteria, it is different but not…"
- ytc_UgyRc39go…: "Like I've said before AI is very dangerous too much information ! Like opening P…"
- ytc_UgziL2rFY…: "All I know is that the answers to my questions given by AI are the best and fast…"
- ytc_UgzTn25eE…: "Something that stood out to me since last year is that shad hearts comments that…"
Comment
He's hyped up on kool aid. As someone who has programmed computers all his life, many years professionally and always been into AI before the hype - I can see flaws here in his reasoning. AI is unlike anything else in tech when it comes to scaling. There is no clear path. AI has always faced a diminishing gains issue. What he says IS possible, but it's likely to take longer than what he thinks. I note he has no inside inside info in secret models. Just the idea that it will be "self improving". Marcus, Lecun and others point out to big issues in the current AI tech stack. AI is littered with researchers who thought AGI would be soon. History is repeating. They got hyped on GPT-3, but GPT-5 was a let down.
youtube
AI Jobs
2025-11-19T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwmj-8tu2gRNopmnRl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwfaULnguTbF0ndU8p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgygKOFHdzkmkF0a7qh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwl3LO7ftJjNiPCjOt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz54N610Emb9XiAEeJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugys6HoH8jRW89St3M54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxViIMSEgLKgMuzaiZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugx_9wy2TwERUJ4cofx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxO9DAjxpmJMIYKlBF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIVIYcpdZS8sapa7F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
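A minimal sketch of the lookup this page performs: parse a raw batch response (a JSON array of coding objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown above) and index it by comment ID. The function name and the two abbreviated sample rows here are illustrative, not part of the actual pipeline.

```python
import json

# Illustrative raw batch response in the same shape as the array above
# (trimmed to two rows; real responses carry one object per comment).
RAW_RESPONSE = """[
  {"id": "ytc_Ugwmj-8tu2gRNopmnRl4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwfaULnguTbF0ndU8p4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_by_comment_id(RAW_RESPONSE)
coding = codings["ytc_Ugwmj-8tu2gRNopmnRl4AaABAg"]
print(coding["policy"], coding["emotion"])  # → regulate fear
```

In a real pipeline the parse step would also need to handle malformed model output (for example, a `try`/`except json.JSONDecodeError` with a retry), since raw LLM responses are not guaranteed to be valid JSON.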