Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
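The same lookup can be done outside the UI. A minimal sketch, assuming the coded records are exported as a flat JSON array in a file named coded_comments.json (neither the file name nor the layout is confirmed here):

```python
import json

def lookup_by_comment_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a flat JSON array of coded records
    return next((r for r in records if r.get("id") == comment_id), None)

# Usage, with an ID taken from the batch response shown further down:
# lookup_by_comment_id("coded_comments.json", "ytc_UgyY_8W2NHA3-iLHw3R4AaABAg")
```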
Random samples — click to inspect
- "Well they made you test the AI, get paid for it. One way or another.…" (ytc_UgyVBatEY…)
- "@AinAemAet im not saying art is about how hard a piece is to make, im saying you…" (ytr_UgxMMKUwt…)
- "We are screwed. Man has played god and now we will pay the price. This will 💯 be…" (ytc_UgzpJfJUD…)
- "She wants to destroy humans?. Fyi Hollywood made movies about this s*** actually…" (ytc_Ugwxkj0D2…)
- "AI isn't nearly as smart as the person narrating this video portrays it to be. I…" (ytc_Ugxvlp5te…)
- "Shows AI saying it should destroy humans = nah don’t worry about that it’s proll…" (ytc_Ugx5nkRz8…)
- "We’re showing sky net how to take us out. We doing it too each other any way .A…" (ytc_Ugx232oq3…)
- "It sounds like you're feeling a bit skeptical about AI and its wisdom! Sophia br…" (ytr_UgxkIfV-l…)
Comment
Ok, as for the question of how advanced AI might kill all humans. Here's the thing I, came up with a very plausible answer to this question. I was about to type it and realized that that wouldn't be a good idea😢!
I'll tell you though, another interesting question came to mind, if super AGI was going to make decisions about life on Earth, it occurred to me that removing humans is one consideration, but would Super AI, consider removing all life from Earth, and why?
There's a hypotenuse that's SAI, might consider removing all oxygen from the atmosphere of the Earth, to remove oxidation/rust as an effect on a planet. Which in itself as a human is extremely terrifying!
It seems the best bet is to hope that we can all live together!
Source: youtube · Posted: 2024-06-16T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
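Each record carries the same four dimensions shown above. A minimal validation sketch; the label sets below are only the values visible on this page, and the actual codebook may define more:

```python
from dataclasses import dataclass

# Label sets observed on this page; the real codebook may be larger.
RESPONSIBILITY = {"none", "government", "developer", "company", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate", "ban", "liability", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "unclear"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Fail fast on any label outside the observed sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r} in record {self.id}")
```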
Raw LLM Response
[{"id":"ytc_UgyY_8W2NHA3-iLHw3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwfuLTgVUoQaHVETpd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgycSLpuMuZzZmCN7p94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugypybs6otRb8oPRQV54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgzZzYpW2Th7hoRYqIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},{"id":"ytc_Ugw2LjoLgsnb2Lzizz14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},{"id":"ytc_Ugw-9tnkRZiyZm8hsSF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugwgz--wnqVXh4vrpB14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_Ugx2YKBxnZGTBPpuXhl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugwx2Zz34hKV4v-oNXt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}]