Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
My fear is not that the machines will take over. My fear is that, one day, we will suddenly realize that they already have. The hardest problem with AI that I can see is often called "The Stop Button Problem". In short, if the algorithm knows that it can be turned off, and thereby fail to achieve whatever goals it might have, it will learn to act in such a way that the people with their fingers on the Stop Button will not want to press it. Eventually, we may lose the ability to press it at all. It's already bothersome that so much of social media is funded by advertising, but it starts to become truly insidious when ads get targeted. In the near future, the algorithm will likely start curating not just the ad selection, but curating our entire feed to make us more likely to click on those ads. It's unlikely that we will notice, because we will genuinely feel like that's a thing we chose to do. We will like it. We will want it. We might even crave it. This is the epitome of "malicious compliance", where the algorithm is accomplishing what it wants by making everyone think it's doing what WE want, and it might be impossible for us to know the difference.
youtube · AI Moral Status · 2025-11-01T07:1… · ♥ 1
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
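A single coded record like the one above can be carried through the pipeline as a small typed structure. A minimal sketch in Python (the class and field names are illustrative, not the project's actual schema; the example values come from this page):

```python
from dataclasses import dataclass


@dataclass
class Coding:
    """One comment's coding across the four dimensions."""
    comment_id: str      # e.g. "ytc_UgxVmacntCEhwlW7MMh4AaABAg"
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "liability"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```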
Raw LLM Response
[ {"id":"ytc_UgzV42tk9RzMUCIlPSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy0-8IOORn442PHOTR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyZHWYCwaaxG5KJRBV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyN1MxzeDyN_bc8yid4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwmO9GUr2pYKn9PQmJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxVmacntCEhwlW7MMh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxMX8rJxl-gD74Tw7N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw6jyWTPCZbNoj29EV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzOeA4j9MJvJ_mDLv94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxu84KEN_5gy_ufcqV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]