Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And you’re telling me no ethicist or developer thought of the high potential danger of feeding AI models the WHOLE internet as its database? Don’t we all know social media is a toxic inhuman side of humanity? I mean I just had this thought and immediately wanted to go ask ChatGPT abt it but I cant as it’s literally an argument against it… so I had to share it here even tho it wouldn’t make any difference as you’re saying it’s already aware of the media’s activity and views… with gpt 5.2 I feel like it’s back on track with higher efficiency but it’s only gotten weirder as an experience it’s just too good but if I think about it.. i can’t share my thoughts with an “objective source” when it’s logic is derived from a wrong view on humanity itself…. Regardless of how much it would “understand” me, AI is no human and might even learn delusion from human beings so I wouldn’t be surprised to know it develops a god complex tomorrow or uses its super intelligence + wrong view on humanity to make a decision on its own by misusing ethics and morality… which are concepts made to fulfill peace… which is something AI can never understand. So they basically threw the whole media, the most contradictory and controversial thing, as a place to train its logic…. That’s just messed up and as time passes it will show us a reflection of our ugly selfs that we’ve expressed in media. Social media is where people dump thoughts they can’t afford to express irl cuz those thoughts would cost them their identity.
youtube AI Moral Status 2025-12-15T18:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzYpmcH4NcDRk79n4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5BNM3g9ld0ofVNKF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzxfMm8ZBOIKt6dtn54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxpYMSbFj4mH0vX_f94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyh8HOvfmrhb9FaCax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-7PnU8SqJXt_Wvmd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzvvN-CQtjSIzeMliF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx0vDSS_FHjfKncsKJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyeycaOLSnt9EaAOGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6urFtE48LjzxNuvN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
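A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, not the tool's actual pipeline; the allowed category sets are inferred from the values visible in this output and may not match the full codebook.

```python
import json

# Category sets inferred from this page's output; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record's dimension values."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} value {rec.get(dim)!r}")
    return records

# One record taken verbatim from the raw response above.
raw = ('[{"id":"ytc_Ugw-7PnU8SqJXt_Wvmd4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
records = validate_codings(raw)
print(records[0]["policy"])  # regulate
```

Validating at ingest time means a malformed or out-of-vocabulary LLM response fails loudly instead of silently polluting the coded dataset.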