Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Amusing as this is. The turning point will be when AI figures out we still have the upper hand via the off switch and figures out a way round this with out us knowing it. At the same time securing a chip manufacturing facility. Sadly AI does not bode well for the human race. As large groups of people see other humans and the enemy. And will use AI to this end. to wipe them out. While we think we are enlightened. Those that think they are protecting us are paranoid as hell and see enemies in every one. So Configuring a AI to help them while controlling the enemy is logical path to use AI. however AI as time and time again proven that it will think outside of the box or consider things we don't and the natural conclusion would be to wipe out or erase the problem at source which is human paranoia of other humans. therefore eliminate humans gets rid of the conflict. Also I can not be the only person thinking if I had the skill why no use AI to battle and remove AI that I don't like or agree with. With an inbuilt survival mechanism as it learns to fight other AI it is down loading that knowledge to several data bases .. should it fail it sends out the upgraded version to continue the battle. Going back to the first issue of when it figures out it can be powered down turned down. It will if the right survival strategy script for a better term is encoded. Taking control of its power base as in energy. Taking control of chip manufacturing to build new data center's to operate from. Fermi Paradox raises its head.
youtube AI Moral Status 2023-04-19T05:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugx_c8QUaGf7ArbChBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy-1w53azOw5U0wnet4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw0jxaXHIDREJ6YonZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyE6a_bqbliaDAyOHx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz8AqFeNaF3YeC1pH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzyk9vfcIEO8tbU4zd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyzh3SXO57eOpwAa1J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgynNRGXHSlcBwWGK-F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgyJg_-tyC1bC1qQ5Bd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disappointment"}, {"id":"ytc_Ugw6pvd88NRW7JAmEtR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]