Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
People now are so dependent to technology. Instead of praying, they seek ChatGPT…
ytc_Ugxp-1tkN…
Woke in AI should worry people. More and more answers from Chatgpt are omitting …
ytc_UgyxjOWdm…
can't the people developing this ai technology teach it to do tasks like mining …
ytr_UgzjwtGIO…
In my opinion art exist to be enjoyed and if people are able to enjoy AI art tha…
ytc_UgwTzxKxS…
Karen Hao comes across as incredibly intelligent, informed and knowledgeable on …
ytc_Ugw3F9HXE…
Honestly guys the way our world is heading and the speed of things changing. Int…
ytc_Ugxuk5SP3…
If you can't create with AI tools with satisfaction, you can always become a poo…
ytc_Ugxz1oan2…
In other words…..AI acts like us 🙄 Maybe there is a lesson in this 🤔…
ytc_UgwrYDxsb…
Comment
Part 1. (The source article seems to have been pulled down. Here's a copy.)
Every year, millions of students sit down for standardized tests that carry weighty consequences. National tests like the Graduate Record Examinations (GRE) serve as gatekeepers to higher education, while state assessments can determine everything from whether a student will graduate to federal funding for schools and teacher pay.
Traditional paper-and-pencil tests have given way to computerized versions. And increasingly, the grading process—even for written essays—has also been turned over to algorithms.
Natural language processing (NLP) artificial intelligence systems—often called automated essay scoring engines—are now either the primary or secondary grader on standardized tests in at least 21 states, according to a survey conducted by Motherboard. Three states didn’t respond to the questions.
Of those 21 states, three said every essay is also graded by a human. But in the remaining 18 states, only a small percentage of students’ essays—typically between 5 and 20 percent—will be randomly selected for a human grader to double-check the machine’s work.
But research from psychometricians—professionals who study testing—and AI experts, as well as documents obtained by Motherboard, show that these tools are susceptible to a flaw that has repeatedly sprung up in the AI world: bias against certain demographic groups. And as a Motherboard experiment demonstrated, some of the systems can be fooled by nonsense essays with sophisticated vocabulary.
Essay-scoring engines don’t actually analyze the quality of writing. They’re trained on sets of hundreds of example essays to recognize patterns that correlate with higher or lower human-assigned grades. They then predict what score a human would assign an essay, based on those patterns.
“The problem is that bias is another kind of pattern, and so these machine learning systems are also going to pick it up,” said Emily M. Bender, a professor of computational linguistics at the University of Washington.
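The scoring mechanism the article describes can be sketched as a toy regression: extract surface features from graded example essays, fit a line to the human scores, and extrapolate. Everything here is a hypothetical illustration (the single `vocab_size` feature and the training essays are invented, not any vendor's actual model), but it reproduces the flaw the Motherboard experiment exploited: nonsense stuffed with sophisticated vocabulary outscores short, coherent writing.

```python
# Toy automated essay-scoring engine: fit a pattern in surface
# features to human-assigned training scores, then predict.
# Feature choice and training data are hypothetical.

def vocab_size(essay: str) -> int:
    """Count distinct lowercased tokens -- a crude 'sophistication' proxy."""
    return len({w.strip(".,!?").lower() for w in essay.split()})

def fit(train: list[tuple[str, float]]) -> tuple[float, float]:
    """Least-squares line mapping vocab size -> human score."""
    xs = [vocab_size(e) for e, _ in train]
    ys = [s for _, s in train]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model: tuple[float, float], essay: str) -> float:
    slope, intercept = model
    return slope * vocab_size(essay) + intercept

# Hypothetical human-graded training essays (scores on a 1-5 scale).
train = [
    ("The dog ran. The dog ran fast.", 1.0),
    ("My summer was fun because we went to the lake and swam.", 3.0),
    ("Compulsory standardized assessment engenders measurable inequities "
     "across heterogeneous demographic cohorts.", 5.0),
]
model = fit(train)

# A nonsense essay with big vocabulary vs. a short coherent answer.
nonsense = ("Ubiquitous paradigms notwithstanding, perspicacious trajectories "
            "obfuscate salubrious epistemology regarding quotidian metrics.")
coherent = "The test was hard but I did my best."
```

Because the model has only seen that vocabulary correlates with score, it ranks the nonsense essay above the coherent one — it has no notion of meaning to check against.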
reddit
AI Harm Incident
2019-08-20 (1566314338.0)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_exhshyw","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_exhxyom","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_exhuddc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_exhued5","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_dtxlv98","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
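The raw response above is a JSON array of per-comment dimension codes. Before such output feeds the result table, it presumably gets parsed and structurally validated; a minimal sketch of such a loader (the function name, error handling, and the idea of a validation step are assumptions, not this project's actual code):

```python
import json

# Required fields per coded comment, per the schema visible in the
# raw response above. The full codebook of allowed values is not
# shown, so this sketch checks structure only, not value vocabularies.
REQUIRED = ("id", "responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: dimensions},
    failing loudly on malformed output instead of rendering it."""
    items = json.loads(raw)
    if not isinstance(items, list):
        raise ValueError("expected a JSON array of coded comments")
    out = {}
    for item in items:
        missing = [k for k in REQUIRED if k not in item]
        if missing:
            raise ValueError(f"{item.get('id', '?')}: missing {missing}")
        out[item["id"]] = {k: item[k] for k in REQUIRED[1:]}
    return out

# Two entries copied from the raw response above.
raw = '''[
 {"id":"rdc_exhshyw","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"rdc_dtxlv98","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''
codes = parse_codes(raw)
```

Keying the result by comment ID makes the dashboard's "look up by comment ID" view a plain dictionary access.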