Raw LLM Responses
Inspect the exact model output for any coded comment. Responses can be looked up by comment ID, or opened from the random samples below (comment text is shown verbatim, typos included); a minimal lookup sketch follows the samples.
- "Management being handled by ai is a terrible idea. A goid manager is all about i…" (ytc_UgyzLdMfo…)
- "AI generators hide who inspired generated images because idiots like you thinks …" (ytc_UgwS9NSzE…)
- "So this mat be another clue to my theory what if there is a secret developer try…" (ytc_UgwxqOWZW…)
- "basically he got sent to prison because he was found in possession of a pornogra…" (ytc_UgxcR4GjD…)
- "Part of the reason anyone thinks art is impressive is due to the artist's abilit…" (ytc_UgzJ_E8fn…)
- "Given that commercial "AI" tech is still in its nascency, the fact that there ar…" (ytc_UgxHeQ7wt…)
- "im so confuse when they compare tools in software to ai... like do you think whe…" (ytc_UgyhNW8d0…)
- "I believe that issues like hallucinations, errors, trouble with abstract thinkin…" (ytc_Ugzc0eBBv…)
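A lookup like the one above only needs the stored batches to be keyed by comment ID. A minimal sketch, assuming the raw responses are kept as one JSON array per line in a hypothetical `raw_llm_responses.jsonl` file (the actual storage layout is not shown on this page):

```python
import json

def lookup_raw_response(comment_id, path="raw_llm_responses.jsonl"):
    """Return the coding object the model emitted for comment_id, or None."""
    with open(path, encoding="utf-8") as f:
        for line in f:  # one stored batch (a JSON array of coding objects) per line
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# e.g. lookup_raw_response("ytc_UgywobXaVLQz0ORDzAZ4AaABAg") would return the
# first object in the batch shown at the bottom of this page.
```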
Comment
> Mistaking ChatGPT for a search engine is fair - if you ask it for a case, it will give you something that resembles a real case. It feels like you are searching in a database, unless you know better. It's a new technology, and people are not yet used to computers being able to generate text that looks legitimate but is completely made up.
>
> After all, Google is not much different. You search for a case, and you expect the Google result to be that case. You don't go and verify whether the case exists in the printed book as well, you trust Google that the text is not made up. The difference is that you can't trust ChatGPT, but it's very possible to not understand that difference in 2024.
>
> Generating false cases after being asked to provide them, on the other hand, there's no way to interpret that as a fair mistake. At that point you should suspect the cases don't exist, you can't find them, and then you ask a LLM to cover it up generating fake cases, which you then submit. That's the real problem, they should have come clean at that point and I bet the judge would be more understanding.
youtube · AI Responsibility · 2024-10-25T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
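The four coded dimensions suggest a fixed codebook. A minimal validation sketch; the allowed values below are inferred from the sample batch in the next section and are an assumption, not necessarily the project's full codebook:

```python
# Allowed values inferred from the sample batch below -- an assumption,
# not necessarily the complete codebook used by the pipeline.
CODEBOOK = {
    "responsibility": {"user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage", "unclear"},
}

def off_codebook(record):
    """Names of dimensions whose value falls outside the (assumed) codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]
```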
Raw LLM Response
[{"id":"ytc_UgywobXaVLQz0ORDzAZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwLPJNMia34y80Fx3F4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgybH_PSyZjlHA5U46V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxUG7dv4w2bzYcKHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyRbEaz9-JUs9fUTRd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxI4hpFNY0ugOsmnCp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2gQQMPl-t6FQxth94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzrW04_3Z7euxmz6rZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzkvu62FmnYR6zYAnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQYvcTZSrxFmJninB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"})