Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't believe the infringement was direct, based on the timeline of events. The monetary basis for AI training data licensing has only emerged recently, as a result of previous failures in court based on inputs and fair use. Those cases essentially settled before ever going to a jury, but it was clear that the judges' leanings in those cases were that training an AI — the act itself — is fair use until there are infringing outputs. This is what caused Disney to backpedal, settle, and then license. To then go backwards legally and say, hey, there is now a market for AI training data, and have that apply value retroactively — I don't think that is in the best interests of the general public, at this stage. Now, going forward, could we establish that there is value in training data, that it's not fair use, and that both inputs and outputs have independent values? Sure. I could see the arguments for that, just not going backwards and trying to say that Studio Ghibli had inherent value at that exact time. If they had gone to OpenAI initially, before all these court cases, and tried to establish a licensing deal (which they never would have done in the first place, just to be clear), there is no way they could have placed any reasonable value on their content at that date. Nor did OpenAI have any intent to pay licensing fees at that time. Alternate timelines are interesting to explore, but we live in this one.
youtube 2026-01-17T01:4… ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw5iUqVlvcT2BUE40B4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxRIC3ZdQ22G7kB0R14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzuuG5bpXhLLUVS8DF4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyU9fPrNuKROl1H_at4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugyw1E87fJPKcqX85-x4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyUmsYj-YiCjJRA5Nh4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy6e0VgW_0nJ70b2oB4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugyylcy9q69jehy3vEl4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxCUAMcQRExoOAzNht4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzN9z7z0GN9s7Y2fR54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"}
]