Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is going to be used in warfare and already likely is even in minute capacity. But the goal would be autonomous drones controlled by AI dropping bombs and going to dangerous areas to do the attack. That is literally AI written to kill humans, because yes the enemy is still humans whether you/we would think they are "bad and we're good". At any point the AI becomes self-aware it already have access to the hardware and already have coding that is designed to kill humans no "first rule: do not harm to humans". What about nations with Nuclear weapons building AI to control them for the fastest possible response or "dead man's switch" scenario of government taken out. What is the AI misinterpret a space rocket launch as a Nuclear ballistic missile and then launch the whole Nuclear arsenal. Sure AI can be employed to advance humanity by a huge order of magnitude. But all the wrongs that can happen is scary.
youtube AI Governance 2024-01-04T09:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzHp5V0qz4kBSCOvq14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzg0vrIfY9nx_k-Xtt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwjAzmKjyfZptKgEm94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxMd9wyTVWL1Cbnys94AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwd0t1U9Yd4OHH5_pp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzc9pUVQb2npYPgdxF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyydVm_CDslv9GHsLp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzFDvvpsXmpYwjotJl4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwFYpEW0by5eEcC8Rd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz9wzV3IwoFSoXnm4N4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
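To go from a raw batch response like the one above to a per-comment coding result, the JSON array can be parsed and indexed by comment id. The sketch below is a minimal illustration, not the pipeline's actual code; `index_codes` and the two-record excerpt of the response are assumptions for the example.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw_response = '''[
  {"id": "ytc_UgwFYpEW0by5eEcC8Rd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz9wzV3IwoFSoXnm4N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse a batch response and index codes by comment id,
    skipping any record that lacks one of the four dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {k: r[k] for k in DIMENSIONS}
        for r in records
        if DIMENSIONS <= r.keys()
    }

codes = index_codes(raw_response)
print(codes["ytc_UgwFYpEW0by5eEcC8Rd4AaABAg"]["responsibility"])  # developer
```

Indexing by id is what lets a single comment's row in the result table be matched back to its record in the batch response.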