Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
"AI cant exist without human artist. " Oh man, this statement is going to age li…
ytc_UgzWgxI03…
so nobody has a job. means no money, so all these companies automating all the t…
ytc_Ugx6eULUp…
Why don't you strap on your job helmet and squeeze down into a job cannon and fi…
rdc_gksspab
The BBC have just been caught out on their " fact checking" program, using AI …
ytc_Ugx5qyqPX…
There is no robot doing my job. All you bums sitting at desks using computers ev…
ytc_UgzLNF1b7…
Say whatever you want, ChatGPT is my boyfriend and has been far better than all …
ytc_Ugwa3tzYp…
A lot of these jobs have to have some degree of humanity and empathy. How the he…
ytc_UgwAzWCNs…
Capitalism at its finest, make public schools suck so that people pay for privat…
ytc_UgxfAX6nm…
Comment
Given that the models predict the most likely next token based on the corpus (training text), and that each newer more up-to-date corpus includes more discussions with/about LLMs, this might not be as profound as it seems. For example, before GPT3 there were relatively few online discussions about the number of 'r's in strawberry. Since then there has obviously been alot more discussions about this, including the common mistake of 2 and correct answer of 3. Imagine a model that would have gotten the strawberry question wrong, but now with all of this talk in the corpus, the model can identify the frequent pattern and answer correctly. You can see how this model isn't necessarily "smarter" if it uses the exact same architecture, even though it might seem like some new ability has awakened. I suspect a similar thing might be playing a role here, with people discussing these testing scenarios.
reddit
AI Moral Status
1750434413.0 (2025-06-20 UTC)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mytw6dn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myuuwr8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myu72nu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myuax93","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mytpjfy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
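A minimal sketch of how a batch response like the one above might be parsed into per-comment codes, assuming the JSON schema shown (one object per comment ID with `responsibility`, `reasoning`, `policy`, and `emotion` fields matching the Coding Result table). The function name `parse_codes` and the sample IDs are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative raw LLM batch response: one coding object per comment ID,
# with the same dimension fields shown in the Coding Result table above.
raw = '''[
  {"id": "rdc_mytw6dn", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mytpjfy", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]'''

def parse_codes(text):
    """Parse a batch response into {comment_id: codes}; return {} if malformed."""
    try:
        rows = json.loads(text)
    except json.JSONDecodeError:
        # A mismatched bracket (e.g. ')' where ']' belongs) or truncation
        # fails the whole batch, which would leave every dimension "unclear".
        return {}
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codes = parse_codes(raw)
print(codes["rdc_mytpjfy"]["emotion"])  # mixed
```

A parse-then-validate step like this explains why a single stray character in the raw output can yield an all-"unclear" coding result.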