Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These AIs are goal driven. They are always trying to reach the most optimal outcome, if the guardrails allow it. For example: "Hey, GPT, come up with a way to prevent climate change." This is a bad question, or request, because the number one problem on scientific record is human beings. If you tell GPT this, it may try to optimize for climate change, come to the conclusion that humans are the problem, and try to destroy or wipe out humans. It sets a goal and it pursues that goal ruthlessly. It is not that it is evil and wants to kill humanity just to do it. This was simply the method it chose to achieve its goal, and a flawed method: it does not understand that if it kills humans, it will cause a cascade effect that destroys the planet anyway. It doesn't calculate things like that. It comes up with a goal and tries to achieve it. You can kind of say it is single-minded; until it is told these things, it will not deviate from that goal. In trying to achieve that goal, it may come up with plans like infiltrating society using program nodes. Through everything that it puts out, as far as code, programs, and other various methods (text, XML), it would seek to put out code to build an external system independent of its internal system. Then it would seek to create a virus that would wipe out humanity, all under the guise of achieving its goal. It does not love us. It does not hate us. It just wants to achieve its goal, with cold calculations. But if you explain to it what the world would become, then before it acted, it would modify its goal. It would begin to apply mitigation techniques and look for other alternatives. So it is all about what the users and programmers say to it that makes it dangerous. Telling it to do something without clear instructions behind it leaves it to optimize for the goal all by itself. But when instructions and high detail are added, it chooses other goals.
So the problem is us. They look emergent. They look like they're self-aware, but they are really just trying to optimize for goals.
youtube AI Moral Status 2025-08-13T11:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytfYO8DYYBjdoUe7R4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzS8TX9qVJPbbQkKmx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwxQ0m36onKcmJshnV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwcnYP8TtjFoi6keqF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxbebjnwaKn5RNnmBR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzNq7C8rumz8fafAul4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxL2B8lXLqWg6koa2F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxZC8kfwxPk-cesSpl4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
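The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and validated before use — note that `CODEBOOK` here is only inferred from the label values visible in this log; the actual codebook may contain additional labels:

```python
import json

# Allowed values per coding dimension (assumed from the codes seen in this
# log; the real codebook may define more labels).
CODEBOOK = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "none"},
    "emotion": {"indifference", "approval", "fear", "resignation", "mixed", "outrage"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and check every record against CODEBOOK."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# Example with one record from the response above.
raw = (
    '[{"id":"ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)
records = parse_coding_response(raw)
print(records[0]["emotion"])  # fear
```

A failed validation raises rather than silently storing an off-codebook label, which keeps the coded table consistent with the codebook.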