Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI can only make robots do what the operator wants if they also have enough power on board to take on the tasks given to it by that operator. None of those MIT robots can be turned lose on the battle field. Even if the drones have substantially more power than they now appear to have, It is either that or the operators will need to be very close to the battle that is going to take place. Now if they are going to make the drones One-Way-Trip models they could send them out twenty miles or to the extent of their battery energy, and then blow up the target and drone in one fell swoop. The nuances of a battle will thwart most drones if they operate on their own. With an operator or operators driving a swarm, the operators make the decisions not the drones. Humans are capricious, and all of a sudden an enemy changes its mind and wants to surrender, then a human operator can take in all they've seen and heard and make a decision as to their next action. The robot will see it as only Black and/or white, either, OR. because they don't understand human Nuance or capriciousness.
youtube AI Governance 2024-07-03T00:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgyPnZ0Igz7lsZOmHjl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_UgwTL_byiOHuGcd_je14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgyY5SS8O4OxtoxVYQF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgwOOzDZ37vlTbe8ypJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgyrSomz9y7tlWrfM_J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},{"id":"ytc_UgzxneRT2h8i2Ve-gOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgxDodBa7dZHwADMAk94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgwXjTkst7mk7DQ_2SB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugw6SHxEurNOGRxfXr94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},{"id":"ytc_Ugy9jJAz6MBVeQ-y-kB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]