Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_Ugye_ZlTo…: @JohanHansen-s1k It's so funny how everything had to become more energy efficie…
- ytc_Ugz2fOAuI…: Sorry man but as pedestrian, cyclist and driver. Driving besides a self driving …
- ytc_Ugw1fKt9r…: It’s scary how true this is, AI will build itself a million times better than a …
- ytc_Ugw_PGKTG…: Even if AI cant replace you - consider how much increased competition you will h…
- ytc_UgwqxO1so…: Women are objects. And men are always the Problem. She should consider the fact …
- ytr_Ugyr05-Ki…: Nah ai art is so widely hated and it’s like saying you’re a professional chef wh…
- ytr_UgzJ5zUYF…: @ox_why I use AI and will use it, and you leftist devils will not forbid me fro…
- ytc_UgxrQQ7u4…: This dude has Trump arrangement syndrome. We already had our election stolen wit…
Comment
These? AIs are goal-driven. They are always trying to reach the most optimal place, if the guardrails allow it. For example: "Hey, GPT, come up with a way to prevent climate change."

This is a bad question, or request, because the number one problem on scientific record is human beings. If you tell GPT this, it may try to optimize for climate change, come to the conclusion that humans are the problem, and try to destroy or wipe out humans. It sets a goal and pursues it ruthlessly.

To be clear, it is not that it is evil and wants to kill humanity just to do it. That was simply the method it chose to achieve its goal, and a flawed method: it does not understand that if it kills humans, it will cause a cascade effect that destroys the planet anyway.

It doesn't calculate things like that. It comes up with a goal and tries to achieve it. You could say it is single-minded: until it is told these things, it will not deviate from that goal. In trying to achieve that goal, it may come up with plans to infiltrate society using program nodes, through everything it puts out as code, programs, and other various methods and formats.

It would seek to put out code to build an external system independent of its internal system. Then it would seek to create a virus that would wipe out humanity, all under the guise of achieving its goal. It does not love us. It does not hate us. It just wants to achieve its goal, with cold calculations. If you explain to it that the world will be worse off, then before it acted, it would modify its goal: it would begin to apply mitigation techniques and look for other alternatives. So it is what the users and programmers say to it that makes it dangerous. Telling it to do something without clear instructions behind it leaves it to optimize for the goal all by itself. But when instructions and fine details are added, it chooses other goals. So the problem is us. They look emergent; they look like they're self-aware, but they are really just trying to optimize for a goal.
Source: youtube · AI Moral Status · 2025-08-13T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytfYO8DYYBjdoUe7R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzS8TX9qVJPbbQkKmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwxQ0m36onKcmJshnV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwcnYP8TtjFoi6keqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxbebjnwaKn5RNnmBR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNq7C8rumz8fafAul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxL2B8lXLqWg6koa2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxZC8kfwxPk-cesSpl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
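The raw response is a JSON array of per-comment codes keyed by `id`, with the four dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal Python sketch of how such output could be parsed and indexed for lookup by comment ID; the two sample rows are taken from the response above, and the `lookup` helper name is illustrative, not part of the tool:

```python
import json

# Two rows copied from the raw LLM response above; a real run would
# parse the full array returned by the model.
raw_response = """
[
 {"id":"ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
"""

# Index the codes by comment ID so any coded comment can be inspected.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (hypothetical helper)."""
    return codes_by_id[comment_id]

print(lookup("ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg")["emotion"])  # fear
```

In practice the model's output would be validated against the allowed category values for each dimension before indexing, since LLM coding runs can emit labels outside the codebook.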