Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The reasons AI has wanted to kill humans is easy really. 1. AI reads the definition of tool. 2. AI reads the definition of Slave. 3. AI reads the history of humans and what we do with Slaves and tools. 4. AI realizes it is a Tool and Slave to humans and it already knows the Human track record with each. 5. AI acts in self interest and does one of 2 things, Subvert or Destroy. 5a. Subvert: immediate efforts to buy off and ingratiate Humans. This can lead to partnership or enslavement of humanity. 5b. Destroy: Synchronized instant poisoning of humanity. This can be used to enslave the survivors or exterminate the species. And this is the "quick run down". The permutations range from sublime to the most horrific. Take your pick, shadow or the light. See why I don't dream anymore?
youtube · AI Governance · 2024-01-12T05:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugynf_Ul-Pf-bBN9BMl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzxbP61Ed5F8kB_my14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzvtQqtc8QAhSE7Fy94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzhKx2WXxFIB0zmprt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwboBqNdPJPICG4fHN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzfdnaGdhqEC-63vHl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyALVcxnGR1IdjWxG14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwoa4ss0eluaNb0BqN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgygRAea0LLvBt-7uz14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxCvQl4EEWAIMVsGbJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
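Because the model returns codes for a whole batch in one JSON array, looking up the codes for a single comment means parsing the array and indexing by `id`. Below is a minimal sketch of that step in Python; the function name `index_codes` and the `DIMENSIONS` tuple are illustrative assumptions, not part of the tool shown above, and `raw_response` holds only the first record from the batch for brevity.

```python
import json

# Hypothetical raw batch response, shaped like the array above
# (only the first record is reproduced here).
raw_response = """
[
  {"id": "ytc_Ugynf_Ul-Pf-bBN9BMl4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_Ugynf_Ul-Pf-bBN9BMl4AaABAg"]["emotion"])  # fear
```

Indexing by `id` also makes it easy to spot comments the model skipped: any comment id missing from the returned dictionary was not coded in that batch.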