Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lethal autonomous weapons' are not possible, they tested some of them during both recent Iraq occupations'. Robotics were given very simple sentry duties, guarding boundary fences. Trouble is targeting systems are programmed with a simple silhouette profile of opposing forces, vehicles, ships, aircraft, ground attack vehicles and so on. Very difficult to use a human silhouette. They can probably recognise a firearm silhouette, determine if an individual is carrying a weapon. But how do you determine if that individual is friend or foe, especially if they are waring religious garments that brake up the silhouettes of the individual. Drones, are the closest they will come to autonomous weapons, there has to be a person/persons on the ground that paints the target with a laser. Here is an apophony, if AI makes two thirds of the population unemployed, the first thing you can turn to is food production, the farmers, force them to provide food for free. Subsidise the labour costs. Just making these huge companies hand out a percentage of what they earn won't cover much.
YouTube | AI Governance | 2025-07-10T02:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwGYtOldqooJXt2nld4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzGQ3rIxnctZP9IcnJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzJuPW5bAqBD_rQACN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxDRErnmSs7eW3oUqt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgywAQ8eQqLOwQlemzV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw8a9fNxTN9Df0H3J14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzSKFXPshAzKZ2MPwR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxOPEr2EmJXGAk5L6V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXhqS3Ts31HidxRPV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxUoC1RNw64oEwJzFp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
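The per-comment coding shown in the table above is recovered by matching the comment's ID against the entries in this raw batch response. A minimal sketch in Python (the ID `ytc_UgywAQ8eQqLOwQlemzV4AaABAg` is assumed to be this comment's ID, since it is the one entry whose values match the coded result):

```python
import json

# Raw batch response: one JSON array covering every comment in the batch.
# Only the matching entry is reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgywAQ8eQqLOwQlemzV4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]'''

entries = json.loads(raw)

# Index the batch by comment ID so each coded comment can be looked up directly.
by_id = {entry["id"]: entry for entry in entries}

coding = by_id["ytc_UgywAQ8eQqLOwQlemzV4AaABAg"]
print(coding["reasoning"])  # consequentialist
```

Indexing by ID rather than scanning the array makes the lookup robust to the order in which the model returns entries.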