Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I don't think the AI folks will ever learn. They just refuse to accept anything …" (ytc_Ugy7tYT7P…)
- "An LLM is about as close to being an AGI as a wooden wheel is to being an F22 ra…" (ytc_UgxbLknEC…)
- "It's hard to disagree with Mr. Wang or the fact that AI will result in many jobs…" (ytc_UgzYDA2vD…)
- "No, chess computers aren't better than humans. They just have the advantage that…" (ytc_Ugy227rKl…)
- "AI will never be sentian (and that is a good thing), but it doesn't matter, it w…" (ytc_UgzEo498p…)
- "@jagger1008 just from what u said,it's pretty obvious that your not an artist,ju…" (ytr_Ugz3DrZUG…)
- "Folks gonna be cheating they way through school etc now. Ai gonna breed a new ge…" (ytc_UgzoXdfU1…)
- "I'm the furthest thing from a conspiracy guy. Don't believe anything until I've …" (ytc_Ugx-ILJUZ…)
Comment
Level 68 equivalent Microsoft AI engineering director here - Yes, it actually does take a LOT for your story to become reality... it is completely far-fetched. You loosely base your story on a couple things:
1) AI will continue to evolve and surpass human intelligence. That requires *more than creative uses of sentence-completion methodologies*; it requires cognition, which some of us are working on but are a very long way from obtaining.
2) Running an AI will be cheaper than employing a human for the same job. This is also not correct for current techniques; LLMs are charged per token, and it takes a TON of them to do anything meaningful. An example: we see that a developer having an AI write code for them can rack up over $200 per day in token usage; let's take the lower number and say $200. That's $216k per year, and for the fairly near future you'll still need a human around to validate results. Since scripting languages such as Perl, Python, Ruby, Scala, etc. are tailored to the power user and not real engineers, we can hire them cheaply: $100k or below in a DevOps organization. Mind you, you'll still need humans to validate as (remember?) cognition is nowhere in sight; we're stuck with LLMs for quite a while.
3) Media and the government will stifle ideas they see as destabilizing - yes, but you're completely ignoring the fact that some doesn't mean *all* to make your story sound more dire. Many people (not for the better) get a lot of their information from the very same outlets you're quoting as foundation for your claim.
youtube
Viral AI Reaction
2025-11-29T23:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxSqdBUrDocME4z3m94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwq0hBKHx7UN9-xWDt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLEIsTLPONHURNOCN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwNNZ6e9S_2amHPQdN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzl9NOqnMYe2nl_fBt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxf3d68JYvkHbxb1Eh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz2d4MlVzn3_O07dQ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwvNcD4I66G763qbjx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwC2oh34F4zi0Ep9714AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8PyfD8voFM9PehO94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
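The comment-ID lookup above presumably works by parsing a batch payload like this one and indexing each coding record by its `id`. A minimal sketch of that step in Python, assuming the payload is valid JSON with the five fields shown; the helper name `index_by_id` is hypothetical, not part of the tool:

```python
import json

# Raw batch response, reproduced from the payload above (first two records shown).
raw = """[
  {"id": "ytc_UgxSqdBUrDocME4z3m94AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwq0hBKHx7UN9-xWDt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions from the Coding Result table, plus the comment ID.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse a raw batch response and index records by comment ID,
    dropping any record that is missing a coding dimension."""
    records = json.loads(payload)
    return {r["id"]: r for r in records if REQUIRED_KEYS <= r.keys()}

codes = index_by_id(raw)
print(codes["ytc_UgxSqdBUrDocME4z3m94AaABAg"]["emotion"])  # approval
```

Indexing by ID makes the "Look up by comment ID" view a single dictionary access, and the `REQUIRED_KEYS` filter guards against partially formed records that an LLM batch response can occasionally emit.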