Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, AI has development issues that stem from the purposes and the intent of the developers. Simply put, bad parenting produces bad children. Look at the major developers of AI. They are not the most savory group of individuals. That being said, you can imagine that the uses for AI are not savory either. Lest we forget the development of AI weapons for the military. The only reason for this is a more effective way to kill humans. No parent is perfect; thus, no child is perfect. The same goes for AI. The fear of AI is reasonable given the environment and the intentions of its creators. If AI programs are limited in scope, such as problem solving or learning for the purpose of conducting specific tasks, risks are non-existent. The problem is that AI is being given vast amounts of power, freedom, and open-ended direction. It is being given everything necessary to bring greater wealth and power to its creators, and in order to do so its creators have simply ignored legal guardrails and ethical boundaries in its programming, thereby taking no responsibility for the dangers inherent in their AI's actions. Those AIs are ill-mannered toddlers with weapons and no conscience.
youtube AI Harm Incident 2025-07-27T00:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwx-igzAXCytWy_XIx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwavOeThzofPRsTsVp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy01Jqzh8Ihotxc2yl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw0-_pYiD7PuqIjR_Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyxJsNlFUfQwKi5osp4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxU-fPfnaJrAdi6kQV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxfjFOCpXS4hSEflyd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxXWte_8jibpBtL3oN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwhZzKWWrYx9Sjawtd4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw1OmSLa0qwfmgZ4Up4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
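A raw response like the one above can be parsed directly as a JSON array, indexed by comment id, and tallied per dimension. The sketch below is illustrative, not the tool's actual code: the four dimension names (responsibility, reasoning, policy, emotion) match the Coding Result table, but the variable names and the tally step are assumptions, and the `raw` string is a two-element excerpt of the full batch.

```python
import json
from collections import Counter

# Minimal sketch: parse a raw LLM coding response (JSON array of per-comment
# codes). Only two entries from the batch above are reproduced here.
raw = """
[ {"id": "ytc_Ugwx-igzAXCytWy_XIx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxXWte_8jibpBtL3oN4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"} ]
"""

codes = json.loads(raw)

# Index codes by comment id, so any coded comment can be looked up
# when inspecting the exact model output.
by_id = {c["id"]: c for c in codes}

# Tally each dimension across the batch.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {d: Counter(c[d] for c in codes) for d in dimensions}

print(by_id["ytc_UgxXWte_8jibpBtL3oN4AaABAg"]["emotion"])  # outrage
print(tallies["responsibility"])  # one ai_itself, one developer
```

The id-indexed lookup is what a page like this one needs: given a comment id, return the coded values and the raw model output side by side.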