Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Ex Machina coming true. There better be a on/off switch to get to in case of agg…
ytc_Ugw0zNbHW…
My mom's job is basically just cutting wood to the right size, and they've tried…
ytc_UgwK7F5hl…
As it is at the moment, there's not much to worry about. It takes weeks of work …
ytr_UgxZsGy9E…
7:21 I think the main place AI beats Humans is in its ability to Flawlessly reca…
ytc_UgwNOCnr0…
I disagree that pay should be open to the entire company. Determining pay rates …
rdc_l2m5gcp
There are actually more well-known people who warn against AI. I think the poten…
rdc_jif8upk
And he will join a new AI company to compete with Google. Good move, old man.…
ytc_UgyTC_D6k…
There is no effective difference between someone using "AI" tools to create art …
rdc_jwyomod
Comment
I read the article you showed, and that article is literally a fork found in the kitchen. The participants had the AI write the essay, and then the judges asked questions about the essay. How the hell are they supposed to know if they haven't read the essay?
They claim in the abstract that the time period mattered: “The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months…”
But on page 23 it says “The study took place over a period of 4 months, due to the scheduling and availability of the participants.” So the time period did not have any significance.
Clearly you haven't read the article closely, because in session 2 they asked them to write an essay again, and guess what happened? The LLM group was able to quote. Do better.
In fact, on page 46 the article found that the LLM and Brain-only groups had about the same word distance. Here is the quote: “The averaged distance showed that essays generated with the help of Search Engine showed the most distance, while the essays generated by LLM and Brain-only had about the same averaged distance.”
Anyway, in my opinion, the article is biased and kind of empty; not good enough to cite as research. And the crazy part? The article was not even published. It has value, but it is not peer reviewed and was not enough to enter a journal. Still, the article is new; maybe it will be.
By the way, only 18 participants were there for the 4th session, which is a quite significant drop, so saying it was the same 18 participants is not enough.
Also, they did not measure the participants' skill at writing an essay.
They also only measured one kind of cognitive signal, not other cognitive stuff. Cognitive measures vary, so weaker connectivity does not mean worse cognition; they noted this too, by the way.
206 pages and 90-ish figures of tons of analysis, but no clear explanation; yet somehow the conclusions are presented as almost guaranteed.
By the way, the abstract lists no limitations.
They also randomly decided to do a side quest and ask participants whether they “own the essay”.
Oh, and the researchers' jobs? It goes like this:
Eugene Hauptman: “Eugene is a faith-centric technologist, a serial entrepreneur, angel investor, advisor, and mentor.” I took this from about.me.
Ye Tong Yuang: math and neuroscience student at Wellesley
Jessica: designer
Nataliya Kosmyna: AI researcher
Xia Hao Lia: designer
Iris: data scientist
Pattie: Media Arts and Sciences professor at MIT
Ashley Vivan: I can't find anything about her
What a terrible video, do better. At least actually read the article, not just the abstract.
youtube · 2025-10-24T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzEZxg3VsjNhx1fr6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyEM5UovISn5iIU95l4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrDUY5LVE6L12wfYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1GFvPPlFPfjGCEfh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzHEK-gw8WqSvSC3ql4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7HN_wzRokp4nJux94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqUvPCNk8ZGsI5CO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwoeyALO1fRlSWGrVx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxh3juXUez34d-iDhV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyq0VIhvyOLVmRt07x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]