Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What would've happened if these guys had simply admitted their mistake at the very start? Something like, "Our sincere apologies. We were using ChatGPT to help identify relevant cases and were ignorant of the fact that it might generate non-existent cases. We acknowledge that we were wrong about these cases: they do not exist. We have learned an (embarrassing) lesson about how ChatGPT cannot be used to search for relevant case law and we will not repeat this mistake." Would the judge have rolled his eyes at them but forgiven them? Would they have to step down from the case? Or would they face sanctions with such an admission of failure? Without knowing crap about the legal system, my expectation is that they would not be punished beyond being forced to stop representing their client and refund them any fees. It just seems unthinkable that there would be any wisdom in lying about their mistake like they did.
youtube AI Responsibility 2023-06-10T17:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzJQ_6XMOxAyJLMsK14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyqBKpiunHzE65NPkx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxlpADF0euSxJkxODl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyXUBhwTYMrZ93knQ14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzkchEAOCDNCIRsem94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzPpfVEOkgdBXY0t1t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw687qjI9EeJMhUER14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyk8aGxjqdE6AKjNL94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxpwMVN0BSRATxArLZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxGGcieCmwscao0Puh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
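The raw response above is a JSON array in which each record carries a comment id plus the four coded dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked before display — the function name `parse_raw_response` is hypothetical, and the field names are taken from the dump itself:

```python
import json

# The four coded dimensions observed in the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    verify every record has an id and all four dimensions."""
    records = json.loads(raw)
    for rec in records:
        missing = [k for k in ("id", *DIMENSIONS) if k not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
    return records


# One record from the dump above, matching the coding-result table.
raw = (
    '[{"id":"ytc_Ugyk8aGxjqdE6AKjNL94AaABAg","responsibility":"user",'
    '"reasoning":"virtue","policy":"none","emotion":"resignation"}]'
)
rows = parse_raw_response(raw)
print(rows[0]["reasoning"])  # virtue
```

Validating each record up front means a malformed or truncated model response fails loudly at parse time rather than rendering a partial table.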