Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Yeah people are overexaggerating. This technology has been around for a long time now, google, nearly every large platform you can think of has used machine learning to organize data, and serve consumers in some way. It's only now, that the models have become more broad and general, which in itself has a major downside, its scalability. The size required for these models, compared to a more specific application of machine learning, is astronomical. With the limitations of chip manufacture, and a number of other things, its stupid, and naive to think this so called "AI" will ever surpass the complex and spatially dense nature of the human brain. It would not only require an insane, building-sized computer to do anything anywhere near what the human mind is capable of, and the massive usage of power, and restrictions of calculation/processing speed. All of these self purported AI geniuses/ceos are just saying/doing anything they can to keep the stocks value of AI from decreasing, as its becoming a dead end. Don't get me wrong, I still think the models we come up with have use, and aren't entirely garbage, but still comes no where close to human comprehension and the energy-to-computation efficiency of our brains. As someone who has a pretty good understanding of these models, If you have a good amount of knowledge of general differential equations as well as PDE (partial differentials), you'd know that its nothing to worry about, atleast not for the next 50 years, when we can all come back to this moment and see if it still holds true.
youtube AI Responsibility 2025-07-27T14:2…
Coding Result
Dimension       Value
--------------  ---------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgzkuAL2JGLQZHn2zB14AaABAg.AL6Z8z_DyT3AL86R7JBFL0","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwJ67j24jtfSJroWSN4AaABAg.AL5IwURnXBLAL737WIViGn","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwJ67j24jtfSJroWSN4AaABAg.AL5IwURnXBLAL8PwWQLSfp","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxkCr0czNEBg4Sfb4N4AaABAg.AL4kw9ZAS1ZAL4q9YIFENP","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzpRrWtS-WswnuD3dV4AaABAg.AL4iWKzP8s0AMEIqMbPm8l","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyKjMmZ3pi4vkfn0ut4AaABAg.AL2gf7xqRHeAL36d2pM76T","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgynHlwPT63gJ188I6R4AaABAg.AL06vnO5XwuAL089U6v5Ov","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyL3ptjdkV5G9T5D954AaABAg.AL-woDHo8mmAL6JaOGBCn_","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgyL3ptjdkV5G9T5D954AaABAg.AL-woDHo8mmAL6U0iwkT8F","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgwS2OyrfZz-3pOlYJ14AaABAg.AL-LESACuRWAL72NIha0L1","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
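A raw response like the one above can be parsed and sanity-checked before the labels are stored. The sketch below is a minimal illustration, not part of the original pipeline: the allowed values per dimension are inferred only from the labels visible in this dump (the real codebook may define more categories), and the `ytr_` id-prefix check is likewise an assumption based on the ids shown here.

```python
import json

# Allowed labels per coding dimension, inferred from this dump
# (assumption: the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "none", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "none", "unclear"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that fit the schema."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        ok = rec.get("id", "").startswith("ytr_") and all(
            rec.get(dim) in allowed for dim, allowed in SCHEMA.items()
        )
        if ok:
            valid.append(rec)
    return valid

# Hypothetical single-record payload in the same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(len(validate_records(raw)))  # 1
```

Records that fail the check (an unknown label, or a malformed id) are dropped rather than coerced, so downstream tallies only ever see labels from the codebook.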