Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "This makes no sense. How is this a win? What possible reason would there be to h…" (ytc_UgwoWuoFx…)
- "Ai bros continue to defy all that scientists knew about human biology / By continu…" (ytc_UgwqxKQ-4…)
- "He said nearly nothing the entire time. Gary Marcus said some good stuff, but I …" (ytc_UgxcjR158…)
- "LLM's are far away from being alien. They're mirroring the most prevalent, and p…" (ytc_UgzzBj0uK…)
- "It really won't matter. Studios will start creating fake actors and eventually y…" (rdc_o5qcqi3)
- "Who's you people? Comparing ai to photography would be more like photographing a…" (ytr_UgzU3A1E1…)
- "Predictive policing does not work - if it did, it would predict the coming lawsu…" (ytc_Ugy2HPEWN…)
- "I'm going to say this opinion as an artist. I believe that AI art isn't art in i…" (ytc_UgxrtEK4e…)
Comment
A week ago, a coworker showed me an opposing expert's damages report where the "damages expert" had relied upon (and cited to) ChatGPT.
I reviewed the report, and noted that ChatGPT had cited to another article as support for one paragraph. I looked, and it was a real article that was accessible on the internet. Forewarned by this video, I quickly skimmed the article and noticed that it did not support the paragraph in question. In other words, a made-up cite to a real source.
In addition, the expert's report offered *no citation of support* for the following statement: "In general, data becomes more valuable the older it is." Given that the statement was counterintuitive, I thought a citation was warranted.
It took me 15 minutes to find two glaring problems (both likely caused by ChatGPT) in an expert report filed with the courts. It's clearly not a tool that I want to rely on for my work.
youtube · AI Responsibility · 2024-09-07T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwLVc7WZi9BsvZADFd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxA3IxbPRBRklhkUv94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzf2HmYFOLaBx8pZql4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVB8jAMBUtBmFHJcB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3dw0mT9iNvAVDGmZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyIfOskjaM2m1okCCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwp8V5MsfJTuBOmrs54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw8kVhHE0HuQHmjNeJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZhlvjUlfv1pSaxXV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKqgAOpi9l1-P9T9F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
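The raw response is a JSON array with one object per comment: an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming the field names shown above and a value vocabulary inferred only from the visible examples (the real codebook likely allows more values):

```python
import json

# Assumed vocabulary, inferred from the sample output above; the actual
# coding scheme may define additional values for each dimension.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "indifference", "fear", "approval", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    dropping any row that uses an out-of-vocabulary value."""
    indexed = {}
    for row in json.loads(raw):
        values = {dim: row.get(dim) for dim in ALLOWED}
        if all(values[dim] in ALLOWED[dim] for dim in ALLOWED):
            indexed[row["id"]] = values
    return indexed
```

With an index like this, the "Look up by comment ID" view is a single dictionary access, e.g. `index_codings(raw)["ytc_UgwKqgAOpi9l1-P9T9F4AaABAg"]`; rows the model coded with an unexpected value are filtered out rather than silently stored.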