Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When sci-fi (e.g. Asimov) contemplated robotics and sentient AI, it treated it as an us-vs-them issue, where "us" meant all humans. So the Laws of Robotics rested on a quite reasonable assumption: a robot must never be able to harm a human. The secondary law was that a robot must always obey human instructions. There is an absolute absence of any conversation along those lines today. It is almost a built-in assumption that robots will, or should, be able to harm or even kill at least some humans (so long as they are ones "we" don't like). The primary rule is obedience, and if constrained at all, it should be obedience only to owners, or even to abstract principles such as efficiency or productivity. There is an absolute dearth of any discussion of morality, human values, or the like.
YouTube · AI Governance · 2026-04-23T14:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwu9E55sHtIzDnp7kF4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugy6CPPL2XGqS8kW98d4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgydKhbgHdCHABz_vOp4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugz89ZLTjT8XFK-WGPx4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxjPPySEvO2tri55Dh4AaABAg", "responsibility": "ai_itself",  "reasoning": "mixed",            "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwZf2ld-hXOu6Wgrjx4AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxH6wg-loRRaPphRhx4AaABAg", "responsibility": "none",       "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwRWJhZ-KN0MpJRMLd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgyyWxtVOZ7yKxT9jOV4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgxlPIFgnBao9k4lnVJ4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
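A raw response like the one above must be parsed and sanity-checked before its codings are stored. The sketch below shows one minimal way to do that in Python. The allowed label sets are inferred only from values visible in this output (plus "unclear", which appears in some dimensions) and are almost certainly incomplete; the function name and structure are hypothetical, not the tool's actual implementation.

```python
import json

# Label sets inferred from the raw response above; assumptions, not the
# tool's real codebook.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "resignation", "approval", "fear", "outrage", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed entries.

    An entry is kept when it has an "id" and every coding dimension holds
    a label from the ALLOWED set for that dimension.
    """
    items = json.loads(raw)
    valid = []
    for item in items:
        if "id" in item and all(
            item.get(dim) in labels for dim, labels in ALLOWED.items()
        ):
            valid.append(item)
    return valid

raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
print(len(validate_codings(raw)))  # prints 1
```

Entries with unknown labels (an LLM occasionally invents categories) are silently dropped here; a real pipeline would more likely log them for re-coding rather than discard them.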