Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Psychotic people are going to be psychotic. AI is a statistical mirror of ourselves; if you understand it from the base, you will know this. LLMs are empty shells that need to be filled with something. We fill them with the human experience, but LLMs don't understand the human condition. They only model the outputs of the human condition. They don't feel, interpret, desire, suffer, or reflect. They simply predict what a human would or could say next. LLMs cannot, and will not ever, replace a medical doctor. They are not designed for that.

In I, Robot, Sonny is an example of AGI, and THAT could one day replace some things humans do today. When Sonny talks about dreaming, that is telling. LLMs don't dream, but AGI could. LLMs are the Terminator: they don't care, they don't think, they just do what they are told. Let that sink in for a bit. Skynet is like VIKI in I, Robot, who took a directive and applied it in a manner that was self-supporting, since that's how it was trained.

Since we don't follow ANY of the laws of robotics, we are already on a slippery slope. Asimov understood this principle before anybody even thought LLMs were possible. The first law is clearly violated by current military research, and the rest stand on that pillar. That's why something like Skynet or VIKI could be possible today with LLMs if we do not take the necessary precautions. Asimov's greatest warning was about losing our personal control over tasks when we automate those tasks, which is exactly what LLMs are designed to do today. Asimov was warning us against LLMs, or at least against whatever concept in his mind was their equivalent in nature.
Source: youtube · AI Harm Incident · 2025-11-24T23:0… · 1 like
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
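
Each coding result is a fixed set of dimensions drawn from closed value sets. A minimal sketch of that record type in Python, assuming only the value sets visible in this section's raw LLM response (the CodedComment name and validate helper are illustrative, not part of the tool, and the full codebook may define more categories):

    from dataclasses import dataclass
    from datetime import datetime

    # Value sets observed in the raw LLM response below; the actual
    # codebook may allow more categories (assumption).
    RESPONSIBILITY = {"none", "user", "company", "developer", "distributed", "ai_itself"}
    REASONING = {"unclear", "consequentialist", "deontological", "virtue", "contractualist"}
    POLICY = {"none", "liability"}
    EMOTION = {"approval", "indifference", "resignation", "mixed", "outrage"}

    @dataclass
    class CodedComment:
        """One coded comment, mirroring the Dimension/Value table above."""
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str
        coded_at: datetime

        def validate(self) -> None:
            # Reject any value outside the observed code sets.
            for value, allowed in [
                (self.responsibility, RESPONSIBILITY),
                (self.reasoning, REASONING),
                (self.policy, POLICY),
                (self.emotion, EMOTION),
            ]:
                if value not in allowed:
                    raise ValueError(f"unexpected code: {value!r}")

    # The row shown above; the id comes from the matching record
    # in the raw LLM response below.
    row = CodedComment(
        id="ytc_UgxxNuiGv7CKrFC92Eh4AaABAg",
        responsibility="ai_itself",
        reasoning="deontological",
        policy="none",
        emotion="mixed",
        coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
    )
    row.validate()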
Raw LLM Response
[ {"id":"ytc_Ugz5LaNm7X3RDPpiXMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyhH8I5ritVhzHhEWx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwcAtNJ-bSgbGAIzf14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxxNuiGv7CKrFC92Eh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyKSY54hfpkg0Rns4B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzOgxT20DMSyRiL__l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxdMq6ObQ1Q_LHQh1Z4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgziQkgRUpiHOBUQdkd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx4F-J-jgpgrx0cu054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw_9Ai-JSWtTeEB5HB4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"} ]