Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Mistaking ChatGPT for a search engine is fair: if you ask it for a case, it will give you something that resembles a real case. It feels like you are searching a database, unless you know better. It's a new technology, and people are not yet used to computers being able to generate text that looks legitimate but is completely made up. After all, Google is not much different. You search for a case, and you expect the Google result to be that case. You don't go and verify whether the case also exists in the printed book; you trust Google that the text is not made up. The difference is that you can't trust ChatGPT, and it's very possible not to understand that difference in 2024. Generating false cases after being asked to provide them, on the other hand, cannot be interpreted as a fair mistake. At that point you should suspect the cases don't exist: you can't find them, so you ask an LLM to cover it up by generating fake cases, which you then submit. That's the real problem. They should have come clean at that point, and I bet the judge would have been more understanding.
youtube AI Responsibility 2024-10-25T14:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgywobXaVLQz0ORDzAZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwLPJNMia34y80Fx3F4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgybH_PSyZjlHA5U46V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwxUG7dv4w2bzYcKHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyRbEaz9-JUs9fUTRd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxI4hpFNY0ugOsmnCp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy2gQQMPl-t6FQxth94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzrW04_3Z7euxmz6rZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugzkvu62FmnYR6zYAnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyQYvcTZSrxFmJninB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"})
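Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)` instead of `]`, which would make a strict parser reject the whole response and could explain why every dimension for this comment was recorded as "unclear". A minimal sketch of defensive parsing, assuming a hypothetical `parse_codes` helper that attempts one repair of this specific defect before giving up:

```python
import json

# Abbreviated stand-in for the raw response above: a JSON array that
# mistakenly closes with ")" instead of "]".
raw = '[{"id":"ytc_UgywobXaVLQz0ORDzAZ4AaABAg","responsibility":"user"})'

def parse_codes(raw: str):
    """Parse LLM output as JSON, repairing a trailing ")" if present.

    Returns the decoded list of code records, or None if parsing fails
    even after the repair (the caller would then record "unclear" for
    every dimension).
    """
    stripped = raw.rstrip()
    candidates = [stripped]
    if stripped.endswith(")"):
        # Hypothetical repair: swap the stray ")" for the expected "]".
        candidates.append(stripped[:-1] + "]")
    for candidate in candidates:
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            continue
    return None

codes = parse_codes(raw)
print(codes[0]["responsibility"])  # "user" once the bracket is repaired
```

Whether the coding pipeline should repair malformed output or re-prompt the model is a design choice; repairing silently risks masking systematic formatting failures.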