Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I remember a little story my physics teacher told me about 10 years ago. At the time, aside from regular physics classes, he also held an optional "supplementary" class once a week where students could come and ask for help with theory or exercises (there's an official term for these classes, but I'm not sure what it is in English). Anyway, he told me that one student would turn up every week to ask for help with calculation exercises. Apparently, instead of trying to learn the underlying physical principles and how they translate into calculation, she would try to literally memorize a large number of exercises and their solutions. For context, we had three large books full of exercises with mostly publicly available solutions, so she basically went through as many of them as she could, memorizing everything. He told me that sufficiently rephrasing or changing an exercise would cause her to completely freeze, since it deviated too much from what she had memorized. At the time, both the teacher and I were really confused, as memorizing that much material seemed like far more work than just learning the basics. It was also much less fun and rewarding.

Ever since I started reading about how AI transformers work, this story has struck me as eerily familiar. The AI image generator, for example, doesn't "construct" the image the way a human would: it doesn't really understand the fundamental elements of the image or the intent behind them. It just has a ton of "memorized" images that it tries to patchwork together as best it can (I'm really simplifying here; by "memorizing" I just mean that each image you feed it influences some abstract parameters that it can later use to give you the result). There's nothing magical or alien behind it; it's an algorithm like any other, just with a fuckton of memory and processing power backing it up.
I don't think it will ever be able to replace real artists, in the same way that it will never replace real scientists, or any other profession that requires ingenuity and creativity. But just to be safe, KEEP POISONING!
Source: youtube · Viral AI Reaction · 2024-12-30T23:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxC2ZhsIzVKpsDjIYl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyuJ0LhT34FC_fGVXd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy_kIADgMEc62BQ9Bp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw0YJKIAX3TS0B_krF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzNedrPhzJ_m1JK9cJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyDNtE_vDPQl1N86rh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyLafQ1q_mJ6Rfcb_N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzdSxKYzIKU4eSs0Ft4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwqVmENcKZJZnBeJ4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz4lhCgNkf5dhcpeZR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"} ]
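The raw response above is a JSON array of per-comment records, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how such a response could be parsed back into a lookup table is shown below; the function name and the "default missing fields to unclear" behavior are illustrative assumptions, not part of the original coding pipeline. The sample uses two records copied from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW_RESPONSE = """[
 {"id": "ytc_UgxC2ZhsIzVKpsDjIYl4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgzNedrPhzJ_m1JK9cJ4AaABAg", "responsibility": "unclear",
  "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Map each comment id to its coded dimensions.

    Hypothetical helper: fields the model omits are defaulted to
    'unclear', mirroring how the table above falls back when the
    model gives no usable code.
    """
    records = json.loads(raw)
    return {
        r["id"]: {dim: r.get(dim, "unclear") for dim in DIMENSIONS}
        for r in records
    }

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_UgzNedrPhzJ_m1JK9cJ4AaABAg"]["emotion"])  # indifference
```

The fifth record in the response (`ytc_UgzNedrPhzJ_m1JK9cJ4AaABAg`, all dimensions "unclear") is the one reflected in the coding result table above.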