Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm only half an hour in, so maybe they get to this later, but it seems to me that the obvious weakness of AI is its lack of physical form. So I think the most logical safeguard to set up first is just making sure any AI controlled robots are on a closed network, and the robots themselves don't have the dexterity to do more tasks than they're designed to do. So then if we had a problem occur and a rogue AI got in control of some weapons platforms, the weapons platforms would eventually run out of ammo/fuel/lubrication/etc. and would break down, because we didn't design the robots with the dexterity to repair/maintain/rearm/refuel themselves. It's one of the many reasons I find projects like Neuralink concerning. I don't want a future iteration of that sort of project to be a chip that can control a human's mind that is accessible by the internet. If that mistake is made, suddenly AI could be controlling people directly, and all of the physical limitations of AI will be overcome (unless other people notice what's going on and physically stop the AI controlled person).
youtube AI Governance 2025-07-24T08:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxeClrt5U8PeiE6Gs54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyf4yKfztVx7Rexw5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxyLiSnR6UX3gguDUx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgytPL-6qLIiM4Ziwnl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx2v-NxMGnwa7Ry1Fp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxMwZYfnyLKK6rKofx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugzj_TqMdhBYyVIy4cN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyDG4FT_icHor2azMR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyjUiaQrVwJu0GVccl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzl3ucU_FnHSgaLK-Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
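The coding result above is recovered from this raw response by matching on the comment's `id` field. A minimal sketch of that lookup in Python, using one record from the response above (the variable and field names beyond those in the JSON are illustrative assumptions, not the pipeline's actual code):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Shown here truncated to the record for the comment on this page.
raw_response = """
[
  {"id": "ytc_UgxMwZYfnyLKK6rKofx4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "indifference"}
]
"""

# Index the parsed records by comment id so each comment
# can be joined back to its coded dimensions.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_UgxMwZYfnyLKK6rKofx4AaABAg"]
print(record["responsibility"], record["policy"])  # developer liability
```

The same lookup applied to the full ten-record response yields one coded row per comment, which is how the Dimension/Value table for each comment is populated.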