Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
imagine wasting trillions of $ into the A.I nonsense but also you know it's very likely that it will never even come near being somewhat interesting or fun after 2-3 months and slowly but steadily everyone else starts to move on to the next trendy thing so you pour even more billions into create a whole new subgenres about how scary the A.I is and how at this point it's guaranteed that in just a few years the random words appearing on my screen is about to end humanity and no one will even notice something is wrong untill the robots we have been working for start to discriminate humans or whatever bullshit scenario is being fantasized.. but not now. Now is very dumb and boring.. The biggest red flag is the every famous person who is in a way invested in A.I including all the "Godfathers , Fathers uncles , pretty much any one related to any of the 1000s LLM is sounding an alarm and trying to warn us how it's too dangerous and we will end up enslaved or whatever but it's not possible to stop and turn back at this point because when everyone else is doing it then the biggest A.I investors stopping to invest into A.I won't matter so they are forced to keep investigating. also all these data centers already underway need to pay for themselves. But also every government just accepts it and does nothing while a couple techntos race towards inventing something that could potentially end humanity. Like Osama bin Laden and the other Terrorist CEOs competing who will make the most deadly plan and the governments are helpless.
youtube AI Moral Status 2025-12-11T09:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxXUVLZSA7I5FUNBMx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwkn8pC2NtiOjyEMXN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz0K5g_VLFDh69l6Kl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxW_6SrBAYKNhkzX5t4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxc6QvjP1td3T81WUh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRkm5X6tqalybk_e54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxb9WwXiGKP7C5ovCZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxAqfM7oqdryJHBsj54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz92IypTGzAS3oZoaR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxPoJ0LJ06UtwX9jcd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
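A raw response like the one above can be parsed and matched back to a specific comment with a short script. The following is a minimal sketch: the two sample entries are copied from the batch above, and the allowed label sets are inferred from the values seen in this batch (the real code book may define more categories).

```python
import json

# Two entries copied from the raw LLM response above (batch truncated for brevity).
raw_response = """
[
  {"id": "ytc_UgxXUVLZSA7I5FUNBMx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxAqfM7oqdryJHBsj54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

# Allowed values per dimension, inferred from the labels in this batch;
# an assumption, not the authoritative code book.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_codings(text: str) -> dict:
    """Parse the model output, keep only rows whose labels are all known,
    and index the result by comment id."""
    rows = json.loads(text)
    valid = [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
    return {row["id"]: row for row in valid}

codings = parse_codings(raw_response)
print(codings["ytc_UgxAqfM7oqdryJHBsj54AaABAg"]["emotion"])  # resignation
```

Validating against an explicit label set catches the common failure mode where the model invents an out-of-vocabulary category, so bad rows are dropped instead of silently entering the coded table.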