Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI has no 'intent'. It does not want anything. It cannot draw even a single line by itself because it cannot intentionally choose a direction, unlike humans with consciousness. It doesn't learn from others' artwork like we do. For it they're just a bunch of data it copies a part from, adds other data from its archives, randomizes, and puts out a bunch of iterations according to the user's requirements. As for whether the data is copyrighted or ethical, that has nothing to do with it. That's the responsibility of the ones who feed its ever expanding data bank. At least that's how I see it from what I know. I might be wrong as I'm no expert in the matter, but I don't understand what other way a thing with no conscious intent could 'create' anything.
youtube 2024-01-01T04:1… ♥ 4
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyzU9KwNK5cTv-dQVB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxEVQ2dn_jJt-tXSD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwdhWjKw7IPiNyY1cJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwDzOr2Kh_9XnkLqOR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7LhylY6MfS0EIeIt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx5V1Bl2TZrQbYUHeN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyS63THOHVfM0hJq0B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw1ZTCwVBIH-nuPMVF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwRXft2tiGseUlsKQR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwOU1ObDJeUCsVrZIF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
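The raw response above is a JSON array of per-comment codes with the fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such output could be parsed and validated before it is stored; the allowed values are an assumption inferred from the values visible in this batch, not a confirmed codebook:

```python
import json
from collections import Counter

# Assumed value sets for each dimension, inferred from the batch above.
ALLOWED = {
    "responsibility": {"none", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, rejecting records with out-of-set values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
codes = parse_codes(raw)
emotion_tally = Counter(r["emotion"] for r in codes)
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the coding scheme, which would otherwise silently pollute downstream tallies.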