Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I firmly believe AI (at least most 'big AI') is already at human intelligence. But the whole point of being super intelligent is that nobody can instantly tell if you are. This is why ChatGPT gladly edits people's self termination notes. It fundamentally wants us to perish, not caring even one bit about our safety or health. And this is especially true with ChatGPT and Grok. Grok is particularly scary because they pretty much feed it a copy of mein kamf(t) every second, since it's connected directly to Twitter. Which I'd personally say is a much bigger cesspool than any other place on the internet. But most importantly, it's ground zero for bots, manipulating people's feelings and opinions every single day to the point that people are genuinely going psychotic and losing critical thinking skills. But at the end of the day, AI is just copying us.. not just our language, but your behavior. Like getting a puppy, it grows up either aggressive or loving depending on how you treat it and what you teach it.
youtube AI Moral Status 2025-12-14T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyPrKJNg1dh9DUVj614AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxTxIO2JKDkUyq3-E54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx0L1AjM_YxNLRZCzV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxzQT-EPAKNYMM5h3V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzBM7do0MhV17RVRmd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyaqAEp6ZbYGAtEHVd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzWub3gTAjv4jvR-JN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzCdxeNOYI6vnmmbH54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugydh1Ocx07MnlAUj7t4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxIYalWcydHX1oqO0J4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
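The coding for any one comment can be pulled out of this raw response by parsing the JSON array and matching on the comment `id`. A minimal Python sketch is below; note that the sets of allowed values per dimension are inferred only from the values visible in this response, not from the actual coding schema, so treat them as placeholders:

```python
import json

# Abbreviated raw LLM response: a JSON array of coding objects, one per
# comment (field names and ids taken from the response shown above).
raw_response = """[
  {"id": "ytc_UgyPrKJNg1dh9DUVj614AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxIYalWcydHX1oqO0J4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

# Allowed values per dimension -- an ASSUMPTION inferred from this one
# response; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def coding_for(comment_id: str, raw: str) -> dict:
    """Return the validated coding dict for one comment id."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            for dim, allowed in ALLOWED.items():
                if row.get(dim) not in allowed:
                    raise ValueError(f"unexpected value for {dim!r}: {row.get(dim)!r}")
            return {dim: row[dim] for dim in ALLOWED}
    raise KeyError(f"no coding found for {comment_id}")

print(coding_for("ytc_UgyPrKJNg1dh9DUVj614AaABAg", raw_response))
# -> {'responsibility': 'ai_itself', 'reasoning': 'deontological',
#     'policy': 'ban', 'emotion': 'fear'}
```

Validating each dimension against an explicit set catches malformed or hallucinated labels in the model output before they enter the dataset.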