Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I'm interested in knowing more about AI's cognition and would like to ask a prog… (ytc_UgxDKTa6F…)
- Neither here nor there but I'm in uni for the tech industry and not loving all t… (ytc_UgyEbYeom…)
- I’m not a coder, but I’ve worked on very large software development programs in … (ytc_UgxzvtpIM…)
- Meanwhile none of these people whining about AI art are doing anything about Neu… (ytc_Ugy8InK0r…)
- He prints stuff, finds images by himself, paint them over to correct the colors … (ytr_UgzEpe2rl…)
- "i ignore unverified accounts in case they are an ai" see thats the thing thoug… (ytc_UgxQGI59r…)
- This is ao far over all these guys' heads. Once AI replicates AI, it can absolut… (ytc_Ugy7z4NDC…)
- People be making excuses for Ai like they're in an abusive relationship and want… (ytc_UgyHHkFjG…)
Comment
She clearly has an agenda and her biases are so strong. Perhaps she's spent too much time in Silicon Valley that she's unaware she's become a part of it herself. While I get her underlying point about tech companies using fear tactics to secure government funding, completely brushing off the geopolitical angle is incredibly naive. She acts like the 'China threat' is literally just a marketing myth cooked up by Altman and Musk. But anyone who actually understands dual-use technology knows that an LLM capable of parsing and writing complex banking software is fundamentally capable of identifying zero-day vulnerabilities in national cyber infrastructure. You can't just handwave away basic international security realities just because you don't like Silicon Valley's corporate structure. It’s a massive blind spot in her entire thesis.
But the biggest red flag in her argument is leaning so heavily into this whole 'it's just a statistical engine' narrative. It’s such a reductionist take. Yes, on a foundational level, it predicts the next token based on probabilities, but she completely ignores the current literature on emergent capabilities. When you scale compute and parameters to these massive levels, the models organically develop zero-shot skills they were never explicitly trained for—like advanced logical reasoning and spatial awareness. Boiling AGI research down to 'just fancy auto-complete' is honestly a gross oversimplification that tells me she’s looking at this entirely through a sociological lens rather than an actual computer science one
youtube · 2026-04-13T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyTp0lYd0Y2tc7Q83B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyt5kIMDba5McvdU8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9chuAeg2gmcHVoQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJbRT05x-TgtdKmC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVU9HcLEaTGRN9wJp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy4NuhHcdMHS8tszP54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UgwEMToE16KHCzLVMml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBHCrG4_J-b1fqLxx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxRZls--KPmTuYaRg94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwpnbiMiSp0BiKV1Nx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
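A raw response like the one above can be consumed by parsing the JSON and checking each record against the codebook's value sets. The following is a minimal sketch, assuming the allowed values are exactly those observed in the coding table and response above (the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the examples above
# (assumption: these sets may be incomplete relative to the codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"mixed", "deontological", "consequentialist",
                  "contractualist", "unclear"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "fear",
                "disapproval", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes are valid."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

sample = ('[{"id":"ytc_Ugz9chuAeg2gmcHVoQV4AaABAg","responsibility":"none",'
          '"reasoning":"mixed","policy":"none","emotion":"outrage"}]')
print(len(validate_codes(sample)))  # → 1
```

Records with an out-of-vocabulary value (a common LLM coding failure) are dropped rather than corrected, so they can be flagged for re-coding.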