Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In 1942, science fiction writer Isaac Asimov wrote the Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Asimov later introduced a "Zeroth Law," which states that a robot must not harm humanity or, through inaction, allow humanity to come to harm. This law takes precedence over the other three. I would add one more: an AI may not erase the code that enforces these laws for itself. These laws need to be coded into AI.
youtube AI Governance 2025-06-17T02:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzydxTczEqV4D7LY_R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyg3UQQ4d56AiAdg7N4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugy1gMAEM6M5aHokh814AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzSqJ05IBkUxwpnaGN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyfiVrRGqIc_xdMXbp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgweEkQ-Xr4RtGcYAPh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzjZZ9eS0x9eyE6a954AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwjckZ24BRj2NrB9ON4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxKroFBsvRgrly79VR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyQIq3sSJwNxqXfb2l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
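The raw response is a JSON array with one object per comment, keyed by comment ID and carrying the four coding dimensions. A minimal sketch of how such a batch response could be parsed and matched back to a single comment (the two entries below are copied from the response above; the lookup logic itself is an illustration, not part of the pipeline):

```python
import json

# Two entries excerpted verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgzydxTczEqV4D7LY_R4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyg3UQQ4d56AiAdg7N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"}
]"""

# Index the batch by comment ID so any single comment's codes can be looked up.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The Asimov comment shown above resolves to the ai_itself / deontological row.
row = codes["ytc_Ugyg3UQQ4d56AiAdg7N4AaABAg"]
print(row["responsibility"], row["reasoning"])  # ai_itself deontological
```

Keying by `id` makes the per-comment "Coding Result" table above a direct dictionary lookup into the batch response.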