Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Publicly available information shouldn’t get someone sued. Even the verbatim statements of NYT get copied on google searches - they don’t sue Google, they actually pay Google to promote their own content getting stolen by Google for the previews under their links. It’s nonsensical to say that Google can “steal” a 1:1 verbatim copy of information *and get paid for it* but that an AI company can’t even sneeze in its direction.

Same goes to art on the internet. Things are put on the internet with the understanding that people can “copy+paste” right into their saved pictures all the time. They aren’t posting the art “for the sake of the art,” (I.e., they aren’t posting it because it’s a beautiful piece that people could look at for an hour and then try to write some kind of reflection on it or something) more or less they are gathering a following and so they want their art to be seen and be more popular; I nevertheless understand their outrage when their art is used to train AI but like I said anyone can save their art at the click of 2 or more buttons.

The AI issue has been incredibly unequally applied. Ban it outright or let it see its potential. Otherwise, don’t try to thread this impossible legal needle because it’ll be the single most arbitrary series of decisions we will ever see and there are already too many arbitrary laws that have no reflection on any sort of morality whatsoever that act solely as a gov’t ( and sometimes private company) revenue source.
youtube AI Responsibility 2026-04-11T18:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzTxTkBU_9qWAF956l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyt3NsVGWv2_R1PeV54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxkaz38EyNuxuxOUWB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyS5FXtBQv_V-pJGD94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwZ4et40pdS6j6uYId4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwoMRM3cifFAukQ_SJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxtUS6avwrf0-xBCIh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgyiEvy9U2o92Zww7s54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx_s3REVcDRhsVvcz54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwgg-n0Bn1pWLh9ndx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
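How a per-comment coding result is recovered from the raw batch response can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw response is valid JSON of the shape shown above, and the helper name `coding_for` is hypothetical. The array is truncated here to two entries for brevity.

```python
import json

# Two entries copied from the raw LLM response above (full array has ten).
raw = '''[
  {"id": "ytc_UgzTxTkBU_9qWAF956l4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx_s3REVcDRhsVvcz54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id: str, raw_json: str) -> dict:
    """Return the coded dimension values for one comment id (hypothetical helper)."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            return {dim: entry[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for("ytc_Ugx_s3REVcDRhsVvcz54AaABAg", raw))
```

For the id above this yields the same values shown in the Coding Result table (responsibility none, reasoning consequentialist, policy none, emotion outrage); the "Coded at" timestamp comes from the tool, not from the model output.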