Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The more I watch videos about AI, it reminds me of the game Fallout 4. The DLC Automatron, and the quest line that introduces you to the mechanist. She sends out her robots with one command. Help humanity. The player is introduced to the quest line when responding to a radio broadcast for help, from an Assaultron called Ada. Her humans are all dead and Ada is in need of help. Skip forward and you'll find lots of robots annihilating humans. What went wrong? The Robots did not have any remote context or examples of how to help humans. In a post-apocalyptic world, raiders killing settlers and innocents, people struggling to survive, the robots deduce that the best way to assist humans, is to kill them with kindness. Literally, it would be kinder to kill them. Hence robots killing ALL humans. Yes this is a game, yes there are dodgy plot points, but I still feel this is a VERY basic window into trusting non-sentient beings with the lives of all breathing and sentient beings. Even with commands, and rules, things can and will go wrong. This is coming from someone with TBI due to medical neglect, please be gentle in your responses and opinions on my quite possibly ill-educated and possibly totally ignorant and dumb input😢
youtube AI Governance 2025-10-17T00:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugynuu0CD281xpnBAQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgySOu0yYm_qmZYMrVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzM4Q_alAtw2ab6e-t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzCcaACqZNKiNXNXAx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyfF_Nhnlv5FOwASQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyaY-ci_BHvV3yB9SB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwCQBPrZDs3XaRyD114AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzvs2G3_30oxIOZE794AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyZ0q2FjD-BPINYBW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-1vMug4GDWt7igFF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
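To inspect a single comment's codes from a raw batch response like the one above, the JSON array can be parsed and filtered by `id`. A minimal sketch, assuming the response is valid JSON and that the four coding dimensions are exactly those shown in the table (the `codes_for` helper and the truncated two-record sample are illustrative, not part of the pipeline):

```python
import json

# Hypothetical raw response, truncated to two records for brevity;
# field names match the coding dimensions shown in the result table.
raw = '''[
 {"id":"ytc_Ugynuu0CD281xpnBAQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgySOu0yYm_qmZYMrVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def codes_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            # Keep only the coding dimensions, dropping the id itself.
            return {k: v for k, v in record.items() if k in DIMENSIONS}
    raise KeyError(comment_id)

print(codes_for(raw, "ytc_UgySOu0yYm_qmZYMrVx4AaABAg"))
# {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'none', 'emotion': 'indifference'}
```

Filtering by `id` this way also makes it easy to spot mismatches between a batch record and the values displayed in the coding-result table.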