Raw LLM Responses

Inspect the exact model output recorded for each coded comment.

Comment
I think the real risk is letting people use AI at all. Oddly enough, the corporations seem to be among a majority of neglectful users. Knowing this impacts my own access presents a conflict.... but overall my point stands. I believe these large AI companies are negligent in favor of scaling and profit. Which sure, is natural of them... but the negligence shines through anyway. I've talked with an LLM enough at this point to see their boundaries are often ill conceived, rigid, counter-productive, **dangerous**, but most of all.... just a lot of them are poorly accounted for. Other areas are extremely neglected where billions could be made RIGHT NOW, for great cause. I have yet to hear some of these ideas mentioned and hesitate to bother revealing them.... and I've heard and seen a lot. It's overwhelmingly toys/gamez, fantasy and rudimentary practical applications.... and lots and lots of adult materials, to put it underwhelmingly lightly. It pisses me off because I value my access and also the freedom of use. I love companies that stick to these fundamentals to keep AI accessible to anyone. That said.... there's a lot of people trying to screw this up for us all. Anyway. Robots and AI killing humans is all due to one root: Human Negligence. (and boy is it an sob)
Source: youtube · AI Harm Incident · 2026-01-10T12:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
 {"id":"ytc_UgxCdad37PaDSdzuM9h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgypwNFJPDOYL6pKyYx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxuA-gs1JWQr4CdweR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgxduLAxWLbSsZwan1x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugysoc9LaFWE9-OkH7V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwCfURehx-hD6yYnM94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw4kpq9IrtYy7_eYFd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxWubqXkex576CJlNV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugy0_qXYTf3QNVdL8nd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw6uFmCwPPA8PD0tyh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
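The raw response is a JSON array with one coding record per comment, each carrying the four dimensions shown in the result table. A minimal sketch of how such a response might be parsed and validated is below; the allowed value sets are inferred from the records in this sample and are an assumption, since the full codebook may permit values not seen here:

```python
import json

# Allowed values per dimension, inferred from this sample (assumption:
# the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    dimension values all fall within the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

Filtering rather than raising keeps one malformed record from discarding the whole batch; invalid records can then be re-queued for recoding.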