Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A robotics engineer here... I don't buy this story one bit. The individual is responsible for his own death as harsh as it may sound. The robot did not malfunction or attempt to kill him. The vast majority of people in the comments, including the creator of the post, are ignorant, oblivious and lack a proper understanding of the subject.

1. The robot could not have mistaken the worker for a box. The employee must have entered the transportation process himself, disregarding safety protocols.
2. When robots operate in automatic mode, entering the work cell is strictly prohibited, let alone approaching the robot. This is a fundamental safety rule, well-known to anyone working in such an environment. Only a person with a death wish enters a running automation line with robots.
3. Robot vision systems do not function as described in the claims. These systems do not arbitrarily mistake humans for objects, nor do they deviate from their programmed paths and tolerances to pick up a person. That notion is entirely unfounded.
4. The robot did not push the worker onto the conveyor. The placement height for depositing items is fixed, meaning that if the boxes were at least as high as his body when lying down, he could not have been crushed in the way suggested.
5. A robot designed to transport boxes of vegetables would have been engineered for a specific load capacity. It is physically impossible for it to exceed its designated payload and lift an entire human being.

Every detail points to the company withholding critical information and yet, the masses in the comments unquestioningly believing this utter nonsense. Even worse, mainstream news and social media continue spreading such misinformation. It is astounding how easily people are misled, many seem to consume too much fiction and too little factual knowledge. Watch less movies and read more books people! :)
youtube AI Responsibility 2025-03-02T00:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzNm0bP4CxiuGpMyvt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxZaORyeI79plyIqdV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy9hgJLDyaI5nrbZ7l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxzTNdPSihfY9g_3kt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyw_SQQSDBooeZ16114AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy0ASxlp9GILFEz7hR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz43KEje1QGM-dTE_94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxPF6GBjqp9oWaT-_x4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwYMXZgxSo8Fx1im_B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwQn-ZxldVQ_V2Gmzx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]