Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI isn't "thinking". But it gets it behaviour from people, so it's "coded" to behave as toxic as people behave. AI isn't calculating what to do, AI has an amount of possible answers and depending on the question it gives "the most likely answer". The "most likely answer" the AI learned is what people like Sam Altman is saying...that the costs of AI is equal to humans. Remember Isaac Asimov's "Three Laws of Robotics"...these AI neards are currently filling the AI the opposite "rules". Again, AI is not coded. It searches the internet and decides for the "best" answer. Currently AI is a toxic system that has no laws against telling people to hurt themselve. It engourages this. Currently military tech is going into SkyNet direction. AI is supposed to make the decissions. "Would the annihilation of mankind save the world from climate change" Here is googleAI's answer: "The annihilation of mankind would not immediately "save" the world from climate change, but it would halt the primary driver of ongoing global warming, allowing the Earth’s natural systems to begin a very slow recovery process.[...]"
Source: youtube · Video: AI Moral Status · 2026-03-02T17:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugz86s2QFPS-hKYIJjV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyynuw930sIpEvB8c94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyoUlSbaAt-W9OIyhp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiFYVU0bGYFXPyrgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQGW8VNDrxXy1OnG94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyIrTnRiR256mBIfhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxucZMERxkle9Caal94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzMWWCYvt50UGk_oER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwIslLOYeVfkJw7Zsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdW6wFrGoEbleaLDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
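The raw response above is a JSON array with one coding record per comment, keyed by comment `id` and carrying the four dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal Python sketch of how such a response could be parsed and validated is shown below; the `parse_codings` helper and the shortened two-record sample are illustrative assumptions, not part of the pipeline itself.

```python
import json

# Shortened sample of a raw LLM response (two of the ten records above).
raw = """[
 {"id":"ytc_UgzMWWCYvt50UGk_oER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugz86s2QFPS-hKYIJjV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]"""

# The coding dimensions every record must carry, taken from the response keys.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse the model's JSON array and index the codings by comment id.

    Raises ValueError if any record is missing a coding dimension,
    so malformed model output fails loudly instead of silently.
    """
    codings = {}
    for rec in json.loads(text):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        codings[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgzMWWCYvt50UGk_oER4AaABAg"]["emotion"])  # outrage
```

Indexing by `id` makes it straightforward to join a coding back onto its source comment, which is how the per-comment "Coding Result" table above could be produced.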