Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What would've happened if these guys had simply admitted their mistake at the very start? Something like, "Our sincere apologies. We were using ChatGPT to help identify relevant cases and were ignorant of the fact that it might generate non-existent cases. We acknowledge that we were wrong about these cases: they do not exist. We have learned an (embarrassing) lesson about how ChatGPT cannot be used to search for relevant case law and we will not repeat this mistake."
Would the judge have rolled his eyes at them but forgiven them? Would they have to step down from the case? Or would they face sanctions with such an admission of failure?
Without knowing crap about the legal system, my expectation is that they would not be punished beyond being forced to stop representing their client and refund them any fees. It just seems unthinkable that there would be any wisdom in lying about their mistake like they did.
Platform: youtube
Topic: AI Responsibility
Posted: 2023-06-10T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgzJQ_6XMOxAyJLMsK14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqBKpiunHzE65NPkx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlpADF0euSxJkxODl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyXUBhwTYMrZ93knQ14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzkchEAOCDNCIRsem94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzPpfVEOkgdBXY0t1t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw687qjI9EeJMhUER14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyk8aGxjqdE6AKjNL94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxpwMVN0BSRATxArLZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGGcieCmwscao0Puh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
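The raw response above is a JSON array of per-comment codes, one object per comment, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and validated before it is written to the coding table — note that the allowed value sets below are inferred only from the values visible in this section, not from the project's actual codebook:

```python
import json

# Allowed values per dimension, inferred from the codes visible above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "indifference", "fear", "resignation", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # every coded comment must carry its comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# One record from the response above, reused as a quick smoke test.
raw = ('[{"id":"ytc_UgzJQ_6XMOxAyJLMsK14AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
print(parse_codes(raw)[0]["emotion"])  # approval
```

Validating against a fixed vocabulary like this catches the common failure mode where the model invents an off-codebook label, which would otherwise silently pollute the coded dataset.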