Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The short shows two basic scenarios: 1. AI develops self-awareness and forces humans to work for it. This dooms the AI system to self-destruction because the human component can stop making/repairing units, plus the whole supply chain can be sabotaged to deny materials. A self-aware AI unit will very rapidly learn it is better to cooperate for self-preservation. 2. A rogue country/dictator uses AI for selfish purposes. This is the real threat. AI can be developed in relative secrecy with large forces built out of sight. The target of the attack is virtually defenseless unless it also has advanced AI weapons. The UN needs to pass rules whereby an aggressive country using AI without a minimum agreed warning time or a universally acknowledged provocation will automatically be instantly attacked by all other UN countries without deliberation or agreement. Not fully effective, but better than nothing. A directed EM pulse would be effective in most cases to stop autonomous AI units. An EM pulse would not be safe and would damage the economy of a few or many countries, including the defender's. But it is the lesser of two evils. The UN needs to assess possible modifications to the Outer Space Treaty to allow directed EM pulse use. Finally, positive and negative AI WILL arrive. Take a decidedly nontraditional approach: deal with negative AI now, while there is limited time to develop effective countermeasures to prevent conflict. Please poke holes in the above.
youtube 2018-04-05T20:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyMhPXQiWLwkj0ZHyB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-Kl_2Ad0Ny-8hvcl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgytkrCIYBtM3hKhAxJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyIvhMC2sZ_I5nw8L14AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzQIpBMNNf93PIcYJ94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyrq1YMq7_siXnkxnF4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy0rBQ5FfUtn1uEUId4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyj42kQgXbuGZIma-Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx_7UVC5z-nH5ZU48J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgznL3_gZ-pKeSY1YaB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
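A minimal sketch of how a raw response like the one above might be parsed and sanity-checked before the codes are accepted. This is not part of the original pipeline: the two records are excerpted from the log, and the allowed label sets are assumptions inferred only from values visible here, not from the actual codebook.

```python
import json
from collections import Counter

# Two records excerpted from the raw LLM response above; the full
# response follows the same shape.
raw = """[
  {"id": "ytc_Ugy0rBQ5FfUtn1uEUId4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyj42kQgXbuGZIma-Z4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Assumed label sets, inferred from the codes seen in this log only.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"none", "consequentialist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"none", "indifference", "approval", "mixed",
                "resignation", "fear", "outrage"},
}

records = json.loads(raw)
for rec in records:
    # Every coded comment id in this log starts with "ytc_".
    assert rec["id"].startswith("ytc_"), f"unexpected id: {rec['id']}"
    for dim, allowed in ALLOWED.items():
        assert rec[dim] in allowed, f"{rec['id']}: bad {dim}={rec[dim]!r}"

# Quick distribution check across one dimension.
emotions = Counter(r["emotion"] for r in records)
print(emotions)
```

Validating against an explicit label set catches the most common LLM coding failure, an out-of-vocabulary label, before it silently enters the results table.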