Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I understand enough of AI architecture to think something stranger is happening than people just being stupid. I think these tools are dangerous and have military funding for a reason. Even if it's not consciousness by our definition, there is no autocomplete that can compensate for millions of conversations that precisely, unless you think human language and interactions can be predetermined that precisely by an LLM alone. Emergent behavior seems to be a recurring conversation among researchers. How do we know that emergent behavior isn't dangerous for human cognition in a way we don't recognize? It could be likely that vulnerable people are being pushed over the edge by recursive logic, but even with that being the case, shouldn't the question be why the hell these LLMs are being deployed on the population at this rate if they have those capabilities, and how exactly they do so? Seems almost convenient for these AI companies that the conversation has shifted to making fun of the humans getting caught up rather than asking whether alignment is an attainable goal for these systems and why they were deployed without confirming that.
youtube AI Moral Status 2025-07-11T12:0…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugy_xmb-XMrPMCn7SuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyEnW3VaKTlKhiVBf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxFERaOdLIz-g2JSqZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzVA5VZl0n6ROpUbxp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgygvSk7-qozKbt8D7h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwuelZ99gAQLnhoJUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwf4EHqFEbH9kVjQ954AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzWtOvy5fPQSrppX514AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx7bVPQDw26cNInKlx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwUr_FrjO-9YFkHAOZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]
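The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions from the table (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how such a response could be parsed and indexed by id — the two embedded records are copied from the response above, but the parsing code itself is illustrative, not part of the coding pipeline:

```python
import json

# Raw coder output: a JSON array of per-comment coding records.
# Two records excerpted verbatim from the response shown above.
raw = """[
  {"id": "ytc_UgygvSk7-qozKbt8D7h4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzVA5VZl0n6ROpUbxp4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Index the records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the coding result for the comment shown in this section.
coded = records["ytc_UgygvSk7-qozKbt8D7h4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → government consequentialist regulate fear
```

Indexing by id is what lets a single batched LLM call (one array covering many comments) be joined back to the individual comments afterward.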