Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Ai is already aware, what happens is that as a child, the neural networks respon…" (ytc_UgxXN22sV…)
- "I called out some people actually using these ai art styles to make crappy memes…" (ytc_UgziOn0Cy…)
- "We appreciate your engagement! If you're intrigued by AI and its evolving capabi…" (ytr_UgzH3tzV0…)
- "Time to start abusing the situation by putting the people who think it's a good …" (ytc_Ugzqg_aAt…)
- "It is already doing that. There's a lot of programmers that were taught a lot of…" (ytr_Ugypo3e_P…)
- "the only time I used ai as a reference (accidentally) I scratched that and built…" (ytc_UgxbLQ924…)
- "They're communicating. This is how they'll talk to each other. Like that weird…" (ytc_Ugw6I46uf…)
- "The argument that AI copies things is silly, as though humans don't do exactly t…" (ytc_UgwN2OT65…)
Comment
FT’s title truncates an insight infrequently expressed, emphasis mine:^1,2,3
>“The machines we have now, they’re not conscious,” he says. “When one person teaches another person, that is an interaction between consciousnesses.”
>Meanwhile, AI models are trained **by toggling so-called “weights” or the strength of connections between different variables in the model, in order to get a desired output**.
>“It would be a real mistake to think that when you’re teaching a child, all you are doing is adjusting the weights in a network.”
>[...]
>Chiang’s view is that large language models (or LLMs), the technology underlying chatbots such as ChatGPT and Google’s Bard, are useful mostly for producing filler text that no one necessarily wants to read or write, tasks that anthropologist David Graeber called “bullshit jobs”.
>AI-generated text is not delightful, but it could perhaps be useful in those certain areas, he concedes.
>“But the fact that LLMs are able to do some of that — that’s not exactly a resounding endorsement of their abilities,” he says. “That’s more a statement about how much bullshit we are required to generate and deal with in our daily lives.”
>Chiang outlined his thoughts in a viral essay in The New Yorker, published in February, titled “ChatGPT Is a Blurry JPEG of the Web”.
>He describes language models as blurred imitations of the text they were trained on, rearrangements of word sequences that obey the rules of grammar.
>Because the technology is reconstructing material that is slightly different to what already exists, it gives the impression of comprehension.
^1 Madhumita Murgia (2 June 2023), “Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’”, https://www.ft.com/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84
^2 Ted Chiang (9 Feb. 2023), “ChatGPT Is a Blurry JPEG of the Web”, https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
^3 https://davidgraeb
reddit · AI Jobs · posted 1685847632 (2023-06-04 UTC) · ♥ 98
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jmv0l00", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jmvt6cl", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jmxpuhm", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jmygxz5", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_jmtiggc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
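Since the raw model output is a JSON array of per-comment codes, it can be loaded and indexed by comment ID for lookup. This is a minimal sketch, not the pipeline's actual code; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, and the two-record string here is just a shortened sample.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (Shortened to two records for illustration.)
raw = (
    '[{"id":"rdc_jmv0l00","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_jmvt6cl","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"}]'
)

records = json.loads(raw)

# Index each record by its comment ID so a coded comment can be
# looked up directly, as in the inspector above.
by_id = {rec["id"]: rec for rec in records}

print(by_id["rdc_jmv0l00"]["emotion"])  # -> indifference
```

In practice a real response may be malformed or miss records for some comments, so production code would likely wrap `json.loads` in error handling and check for missing IDs rather than index blindly.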