Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In this discussion it sounds as though you are making the AI the problem - directly. I think the Godfather is leaving out a critical point. The AI is programmed by "humans" and if AI becomes conscious, or autonomous, they are still a programmed software/machine. They can be programmed for war or they can be programmed to support "human flourishing". If they are programmed for war, they are more dangerous than our standard weapons, but they are still programmed by humans. It is the humans who control the AI that are the threat. They are the ones that need regulating. But, the governments that do the regulating are the same bodies creating the AI autonomous weapons. What we need is for countries to stop invading one another, killing citizens, and come to a global "arms agreement" about the use of AI in warfare. Good luck with that - but the primary point is that AI is controlled by humans. It's the same old thing: Guns don't kill people, people kill people using guns.
youtube AI Governance 2025-07-20T20:3…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | deontological
Policy         | regulate
Emotion        | mixed
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugz-PHat6WdA82I3fll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwShqmGArkBDKjanGZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx84QSTjZxwS1vw0Vx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC72JUNJpzR6Qyjnx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugz8CQcIgC8fk06zBcp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwuZTd8cZqTDmFEWVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzd029lCbCTnlNBIZt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwoPyuiixH7Lb5Y9gB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyIW95_zGm9w17Rj9t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzp32OH0MlOLh7WlOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
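The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions. A minimal Python sketch of how such a response could be parsed and checked against the category sets visible in this batch (the full codebook may define additional categories, so the `SCHEMA` values here are an assumption inferred from the output above):

```python
import json

# Allowed category values per dimension, inferred from the batch above
# (assumption: the actual codebook may list more categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "industry_self", "liability"},
    "emotion": {"approval", "fear", "outrage", "mixed"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM response and reject any out-of-schema category value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Example: the record for the comment shown on this page.
raw = (
    '[{"id":"ytc_Ugz8CQcIgC8fk06zBcp4AaABAg",'
    '"responsibility":"developer","reasoning":"deontological",'
    '"policy":"regulate","emotion":"mixed"}]'
)
records = validate_response(raw)
print(records[0]["policy"])  # → regulate
```

Validating each record before storing it catches the common failure mode of LLM coders inventing labels outside the codebook.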