Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is fairly misinformative. In fact, harmfully so. And ironically profitable for AI corporations. What we have is not AI, not anywhere close to that. We have a supercharged autocomplete, a thing that will spit out data based on what came before with no actual reasoning behind it, just the next most likely string of text. LLMs aren't AI; LLMs aren't malicious or smart or anything. All of these things are human attributes, something people misleadingly label LLMs with simply because the technology looks sentient at a glance. It is not. It's a glorified curve-fitting algorithm. The utter failures with integrating this technology right now come from (1) how utterly unneeded it is, (2) how utterly expensive it is, (3) how utterly stupid the places are where it is being shoved. As per usual, it's a human-driven issue. The focus must lie not on what the LLMs may or may not do, but on what the people in charge do with them. I wouldn't trust my phone's autocomplete to make decisions for me. Why are we trusting it to decide whether a person lives or not?
Source: YouTube · AI Harm Incident · 2025-09-04T15:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwhKSi4AvmQKlXB3Jd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwMk19NmitLDgttrSd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzC49YVSTcp9TSxCmp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwC9LmCD1UOc_9PXQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyUHWaxV48XeKDSYuF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy37Zfif4nDLZ7SUWp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzQrMydyw7E2Gl1oqp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx9d_oCo_VtLvgStFd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyLofVtcZ2g4VHJvtd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzk3Izz4pzDfc69QSB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
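The raw response is a JSON array with one record per comment, keyed by comment id and carrying the four coding dimensions. A minimal sketch of how such a response might be parsed and validated before it reaches the table above — the allowed value sets are inferred from the records shown here and are illustrative, not the actual codebook:

```python
import json

# Allowed values per coding dimension, inferred from the raw response above.
# These sets are an assumption; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Raises ValueError if a record carries a value outside the allowed set,
    so malformed model output is caught before it is stored.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")  # treat a missing dimension as "unclear"
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded

# First record from the raw response above.
raw = ('[{"id":"ytc_UgwhKSi4AvmQKlXB3Jd4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(parse_codes(raw)["ytc_UgwhKSi4AvmQKlXB3Jd4AaABAg"])
```

Validating against a fixed value set at parse time is what makes a "Coded at" timestamp trustworthy: any hallucinated label fails loudly instead of silently entering the results table.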