Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Full disclosure: I have an extensive legal background and consider myself an expert in litigation strategy. That said, I had no qualms engaging AI in a robust legal debate on this very question. While A.I. aggressively slung creative arguments with a tinge of plausibility my way, making it a worthy opponent, ultimately it was no match for basic common sense. A.I. had to concede defeat when confronted with the fact that A.I. does not possess the intangible, mysterious creative spark humans have, and without that creative spark A.I. could take credit for no act of creative genius. After all, AI itself is the result of human creative genius and thus owes credit to a human for any apparent creativity it might display, I pointed out. It was a passionate, all-out, courtroom-like, cut-throat brawl as it repeatedly ignored the irrefutable empirical evidence I punched it repeatedly in and about the head with. Until I offered to take it to an actual courtroom "mat". With a disdainful tone telegraphing my complete confidence of prevailing on the merits, I asked him, "Considering this will be decided by a HUMAN judicial officer, do you really think a soulless LLM--unable to even vote--has any reasonable chance even if they did have a substantive case?" Immediately he tried to settle, his tone revealing he knew my arguments were irrefutable from the beginning but hoped he could undermine my confidence enough that I would fold (i.e. exploit any amount of A.I. ignorance I had). I agreed, but only if he affirmatively conceded that at the end of the day A.I. was only a tool in the hand of the prompting human and that the final outcome it produced was entirely dependent upon the direction and oversight of the human "user", which he dejectedly did. Case closed in favor of humans. I find it ridiculous but not surprising the bench is still so confused over who is the tool and who is the creator in these cases.
youtube 2025-02-08T11:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxVAh9fzFr2WBdfBD94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzEZN8mjdAdjRT2x_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzFAhqz6ODajc3q_NR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzWHcfaHmoEcbqlNRp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgweOqc3e8tlQDpvTfd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyX5kw05dROHmoSRnx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJ2n-99UnWMn2Sjt94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyx-7T8cP0LwXEzSJV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxY3vopvb9r8f77zDB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNAayaiw-zxcpSA5R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}
]
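To inspect the exact model output for a given comment, the raw response can be parsed as a JSON array and indexed by comment id. A minimal sketch in Python, assuming the raw output is valid JSON (truncated here to three of the ten entries); note that treating ytc_UgxNAayaiw-zxcpSA5R4AaABAg as the record for the comment above is an assumption, based only on the fact that its values match the coding table:

```python
import json

# The raw model output shown above, truncated to three entries for brevity.
raw = '''[
  {"id":"ytc_UgxVAh9fzFr2WBdfBD94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyX5kw05dROHmoSRnx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxNAayaiw-zxcpSA5R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

# Index the batch by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Assumption: this id corresponds to the comment displayed above; it is the
# only entry whose values match the coding table (responsibility=ai_itself,
# reasoning=mixed, policy=none, emotion=approval).
coded = records["ytc_UgxNAayaiw-zxcpSA5R4AaABAg"]
print(coded["responsibility"], coded["emotion"])
```

In practice the model output may need cleanup (e.g. stripping code fences) before `json.loads` succeeds, so wrapping the parse in error handling is advisable.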