Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Great interview, but at times quite disingenuous. Let’s not pretend OpenAI invented the scaling paradigm; they were simply the first to apply it decisively to language models and follow it through to its logical conclusion. And it’s hard to argue that this approach wasn’t spectacularly successful, at least in the early years. After all, the only real difference between GPT-2, which is barely coherent, and GPT-3, which triggered the global race to AGI, is scale, since both ultimately rely on the same transformer architecture.
And scaling works more broadly than just for language models. Richard Sutton called it the bitter lesson: to paraphrase, throwing more compute at a problem tends to outperform hand-crafted, domain-specific approaches over time. The idea that one could achieve comparable performance or range of capabilities by training on small datasets using few chips is simply not credible. It’s not even in the same league as what modern large language models can do.
Platform: youtube · Cross-Cultural · 2025-07-11T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzdRXsJFV2xDkq4Vg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLM9l8Wb5i-_gCB7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyxzMuxBGjG7FtszNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzDcZ_JcGfuHaO_9cZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz11tMYtVPjHsToUZN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyAVVR5KUJ9SNffTZF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzEurs5anu9OM_iNwd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzzHU0u0uyfMXlFKpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw3F9HXEPjNpqQTSAd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXkovWOzRFycK7TwF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
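The raw response is a JSON array of per-comment codings, and the "Coding Result" table is just the entry whose `id` matches the inspected comment. A minimal sketch of that lookup, using only the field names visible in the response above (the `lookup_coding` helper itself is hypothetical, not part of the tool):

```python
import json

# A trimmed stand-in for the raw model output above: one coding object
# per comment, with the dimensions id / responsibility / reasoning /
# policy / emotion.
raw = """[
  {"id": "ytc_UgzDcZ_JcGfuHaO_9cZ4AaABAg",
   "responsibility": "company", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]"""

def lookup_coding(raw_response: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = lookup_coding(raw, "ytc_UgzDcZ_JcGfuHaO_9cZ4AaABAg")
print(coding["emotion"])  # indifference
```

In practice the model output may be wrapped in extra text or be malformed JSON, so a production version would catch `json.JSONDecodeError` rather than assume a clean array.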