Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Couldn't the OpenAI team copyright the phrase"openAI is guilty" show this phrase…
ytc_UgwhiZzQQ…
This is what we should be fighting , not wearing masks! I don't like robots or a…
ytc_UgzgZrTN-…
LLMs don't just learn what's in the data. Training on data creates general algor…
ytr_Ugxvu2_TP…
a human deciding to title a playlist “infinite love ❤” and an AI naming it the s…
ytr_UgyZWNTpK…
There is no any felling if robot teach us .. human teach human is better. even t…
ytc_UgygktT0k…
I prefer this one:
"Keep CEOs alive by hiring AI CEOs instead. The era of AI CEO…
ytc_UgzxW54ty…
This is why it hurts me when I accidentally tap "AI mode" in Google 🙄😄…
ytc_UgyoHzPJo…
Even without emotion I could sense the slightest agitation based purely off what…
ytc_UgwgLHnfi…
Comment
Try asking it to interpret a spec and write the code for that. OP is correct that it mimics, and does so very convincingly by rapidly curating the answers to questions that have already been asked.
Your problem has not only been asked before, but is also entirely mechanical. You can algorithmically solve it without having to create anything new or actually interpret and understand descriptive material that doesn't directly say how to solve the problem.
Or even more obvious, ask it to write an LCD driver for Arduino, but completely invent the name. It will produce boilerplate that uses a SPI LCD library without even knowing, or critically, asking you about the LCD.
That last point is critical. It doesn't reason about what it may or may not know, nor does it enquire. It isn't proactive and it doesn't use feedback within an answer. It can't create its own questions, even within the context of the question posed to it. It doesn't reason.
There was an example where somebody told it that the code it provided used a deprecated API, and it admitted the mistake, but all it did was confirm that by searching its dataset and producing different code using a different API. It didn't occur to it to do that in the first place.
It's impressive, but it's still a parlour trick in the way that ELIZA or expert systems were back in the 80s. "Next on Computer Chronicles, we'll see how LISP and AI will replace doctors!" No.
It's a fantastic evolution in natural language processing, and a huge improvement in how we search the web, but that's all.
Ignore the media charlatans, they just need to generate headlines. If some of them feel threatened by ChatGPT, that's more a reflection on their journalism than ChatGPT.
| Source | Topic | Timestamp | Likes |
|---|---|---|---|
| reddit | AI Governance | 1676275943.0 | ♥ 23 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_j8cfh0x", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8ckcvr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8e19vp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8cy5hd", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8dk59a", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
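The raw response is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of looking up one comment's codes by ID, as the "Look up by comment ID" control does (field names are taken from the response shown; the `lookup` helper is illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above.
raw = (
    '[{"id":"rdc_j8cfh0x","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_j8cy5hd","responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"none","emotion":"mixed"}]'
)

def lookup(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            # Keep only the coding dimensions, dropping the ID itself.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(lookup(raw, "rdc_j8cy5hd"))
# {'responsibility': 'ai_itself', 'reasoning': 'deontological', 'policy': 'none', 'emotion': 'mixed'}
```

A dict comprehension over the parsed records would also work for bulk export; a linear scan is enough for single-ID inspection.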