Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When a human copies art styles, they on some level understand what they are doing. They can point out the colours, shapes, arrangements and many more art terms I am not educated enough to know. There is actual thought behind it. When an AI does art it has no understanding of what it does, no insights, no comprehension. It follows an algorithm, a complex one I'll give you that, but we know there is no understanding - only processing. We see that with ChatGPT and its failures when it makes stuff up ("hallucinates"), and we've recently seen it with the AI that beat a master at the game Go, when a beginner beat it with a strategy that tests fundamental aspects of the game. When something only processes a thing it cannot create an original thing; it's always a remix (like in music). The AI was trained. If you ask it to paint a train, the AI needs to have been shown many pictures of trains. If it weren't, you'd have no way of describing a train to the AI in sufficient detail that it could actually draw you a train. Hence, any work that goes into training the AI is fundamentally part of any work it produces. It's just gone through the food processor so much you can't draw a simple line any more - just like you can't draw a pig from having seen minced meat, but the pig is still there and the vegetarian knows that.
youtube AI Responsibility 2023-06-13T22:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwqkUifUKROcJyt8IZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw6j-OhdcSnVcw_MFl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyFiD2nquVi-MNyeQR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxT7TJwp56WcWoQgrR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz3UyAmzU8jRgpitTl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzwutzq6iolP6Gd5sZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxhs6wzVmSWxvQ6F2d4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_Ugxfh72tjmSKrsXDYM94AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyeIQAPBK_VM7d8JBx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwjhnUaB75sSNU9aZB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
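The raw response is a JSON array of per-comment codings, one object per comment id with the four coding dimensions. A minimal Python sketch for loading such a response and looking up one comment's coding by id (the shape is taken from the response shown above; the variable names are illustrative):

```python
import json

# A single entry copied verbatim from the raw LLM response above;
# in practice raw_response would hold the full array.
raw_response = '''[
  {"id": "ytc_UgxT7TJwp56WcWoQgrR4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "unclear", "emotion": "outrage"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_UgxT7TJwp56WcWoQgrR4AaABAg"]
print(row["responsibility"], row["reasoning"])  # ai_itself deontological
```

Each dimension is then a plain string field, so tallying label counts across all ten entries is a one-liner with `collections.Counter`.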