Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
(First of all, sorry for my english, I'm a french person and just on my way to learn english) I have a question. If there is a system which manage to find a percentage of copyright (like if the IA does something that, in the end, is just 5% copyright because to much references where used and this is a 'unique' piece) would it be less uneticall ? Like, if only the pieces with less than 5% copyright could be published and if it was not usurping the identity of someone, would it be better ? Maybe, honestly I don't know and I am curious about it. I know this kind of system exist for texts so why not for art ? And I also think that if an artist doesn't wants his art to be a part of this process, he could have the possibility to not be. For me, it is not an excuse that 'if people can steal then AI could do so'. Stealing is not something right and obviously we can't control what all people do. That would be horrible. But the fact is, AI is not a person with human rights. So if we can make it possible to stop the AI to rob art, it is not perfect but it is still better I think. Well, I hope I am understandable and clear. If you have more informations, please enlighten me. I really am curious about it an don't really know this subject. (And if I made some big mistakes in english, I am curious about it to.) Thank you for reading
YouTube · Viral AI Reaction · 2023-05-26T17:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       contractualist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzCN-c2856tMsQ-rE94AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugy1hc6SgK_KAzM-v2l4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyFm7LCaHRVzZQnL_N4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgxCyf6yaqYi1-62Dqp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgxqmwvaA4G_JZUSP1F4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyA0LjQO8VXQAzvS1F4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugysn9m-lvKc7yD0LX14AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgznY3hiBz4C78qqMGl4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyFLNK1YOH_j1pWsRx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxickaPXMiDNB70a6N4AaABAg", "responsibility": "unclear",   "reasoning": "contractualist",   "policy": "unclear",  "emotion": "mixed"}
]
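The coded dimensions shown for this comment come from one record in the batch response above. A minimal sketch (assuming Python and the standard `json` module; the truncated two-record payload below stands in for the full ten-record response) of looking up a comment's coding by its id:

```python
import json

# Two records copied from the raw batch response above, for illustration;
# the real response contains ten.
raw = (
    '[{"id":"ytc_UgzCN-c2856tMsQ-rE94AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgxickaPXMiDNB70a6N4AaABAg","responsibility":"unclear",'
    '"reasoning":"contractualist","policy":"unclear","emotion":"mixed"}]'
)

records = json.loads(raw)

# Find the coding for this page's comment by its id.
target = "ytc_UgxickaPXMiDNB70a6N4AaABAg"
coding = next(r for r in records if r["id"] == target)
print(coding["reasoning"])  # contractualist
```

The printed value matches the Reasoning row of the coding table for this comment.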