Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"AI datasets being truly illegal or not really depends on how the images were collected. If there were a bunch of hackers going around stealing images from your computer and dumping them into a dataset than yeah you have a pretty strong case for the illegality of the dataset. However, if the images were collected from people who clicked "I agree" to use the most popular and common online services like Google, Apple, or Microsoft, it's gonna be much harder to prove these images were collected illegally or without consent. Also assuming that you could prove that the dataset WAS illegal you're just moving the goal post, you're not winning the game. Let's pretend the dataset was in fact proved to be illegal; all Google has to do is change their terms of service and tell the world if you would like to continue to use our services you must agree to allow us to use the images stored with our services to train an AI. It would probably lose a few million users but the vast majority of average users would still click "I accept" and continue to go about their daily lives and the very next day they would have millions of all the images they need to train the AI all over again." Regardless the outcome of all of this, the tech will still exist and will still be better than humans. I think THAT is what those being so vocally opposed to this actually are against.
Source: YouTube · Viral AI Reaction · 2022-12-26T22:5…
Coding Result
Dimension      | Value
Responsibility | company
Reasoning      | deontological
Policy         | liability
Emotion        | fear
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugy6ySxFtodiIx8a7FZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwnC8TSJ64MQ_wK0sp4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzt6xDbz3N5z9laIz54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugw7mmxfu2aStWqg2q94AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyYzLNHpUAjAwy4pcZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwccQxhgolGSlQNrRR4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwZBLj3E8S0zgS1Xvx4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgzTynUHyeUwXtbv4qV4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugz3b6-DbA9fXxH7yzx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyZgwrl5hmIgzgbGmx4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"}
]
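A raw response like the one above can be parsed and sanity-checked before use. The sketch below is a minimal example, not part of the coding pipeline itself; the sets of allowed values are inferred only from the labels visible in this response (the full codebook may include more), and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the raw response above
# (assumption: the actual codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed",
                "resignation"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it has an "id" and every dimension carries a
    value from the allowed set; malformed records are silently dropped.
    """
    records = json.loads(raw)
    return [
        rec for rec in records
        if "id" in rec
        and all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]


raw = ('[{"id": "ytc_UgwnC8TSJ64MQ_wK0sp4AaABAg",'
       ' "responsibility": "company", "reasoning": "deontological",'
       ' "policy": "liability", "emotion": "fear"},'
       ' {"id": "bad_record", "responsibility": "alien",'
       ' "reasoning": "deontological", "policy": "none",'
       ' "emotion": "fear"}]')

codes = parse_coding_response(raw)
print(len(codes))             # the record with an unknown label is dropped
print(codes[0]["emotion"])    # fear
```

Validating against a fixed label set catches the common failure mode where the model invents a category outside the codebook; dropped records can instead be logged and re-coded if loss is unacceptable.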