Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a programmer who is a bit sceptical of AI while trying to see its uses, this is my take: I agree AI is similar to human learning in taking multiple sources and mashing them together to form an intelligence which can generate art, but that is where the similarities end. Humans capture so much from learning from others and incorporate their vast knowledge unconsciously, as you said, into the product, where even staring at a red square makes you wonder what the artist meant. With AI, it's designed to disguise its inability to understand the complexity of certain scenes and to point the viewer's attention to areas it can do perfectly. From a scientific standpoint, areas of low entropy like trees, rivers, sky, swaths of forest, or mountains are easy, but areas of high entropy (which need a lot of background information) are awful. Think of fingers, complex machinery, buildings, or how muscle and bone structure works. AI won't do the research of knowing why a truss beam or a belt buckle needs to look a certain way to look functional; it just has to look "sort of right if you are not paying attention". This is all just to make the image feel "real"; how can you expect AI to add little subtle details of personality in areas it doesn't need to? Its focus is to fool humans into liking it, not to imbue it with quirks. All in all, AI is soulless (as of yet) and limited (as of yet). It cannot do what we do, and what it does cannot be seen as art, as there is no thought besides its directive of "do the minimum to make it look passable to an average dumb human observer".
YouTube · Viral AI Reaction · 2025-04-01T18:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy5a2XN6AuMhBXs77x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwc_9l6ytNGHj295M14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxhO3FESLNYgI_meIF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCcOOizy9rI2PbFAp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwoHFOW2wvrzbs1DZp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy_DX5Uq1YOxX8V7Vl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzglXJG0g6joNYE90p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxeZSYcl7-QQhBsg2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzbEELt7rEyze1atJx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzhR7idu270ytvYtVF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
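The raw response above is a JSON array with one object per coded comment, each carrying the same five fields shown in the coding table (id, responsibility, reasoning, policy, emotion). A minimal Python sketch of how such output might be parsed and tallied; the field names come from the output itself, but the validation logic and helper names are illustrative assumptions, not part of the tool:

```python
import json
from collections import Counter

# A truncated sample of the raw LLM response above (two entries kept
# for brevity); field names match the actual output.
raw = '''
[
  {"id":"ytc_Ugy5a2XN6AuMhBXs77x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy_DX5Uq1YOxX8V7Vl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
'''

# Every coded entry is expected to carry these keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM coding response, dropping malformed entries."""
    rows = json.loads(text)
    # Keep only dict entries that contain all required keys.
    return [r for r in rows if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

codings = parse_codings(raw)
# Tally the distribution of one dimension, e.g. emotion.
emotions = Counter(r["emotion"] for r in codings)
print(len(codings), dict(emotions))
```

Validating each entry against the required keys before tallying guards against the common failure mode where the model emits a partially formed JSON object.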