Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Conald's executive order to prevent states from creating AI laws is toothless. E… (ytc_UgzLXmrWq…)
- I was half expecting the robot to have pinpoint accuracy like Deadshot in Suicid… (ytc_Ugy3FqxDx…)
- This is people getting worked up over nothing. This was a proper use of AI… (ytc_UgzsbbDAt…)
- Logan's Run, Soylent Green, Animal Farm just to name a few show us a dystopian f… (ytc_Ugz5IYBRf…)
- The US government's priorities are for corporate profits in the near future, not… (rdc_essaljn)
- I love the "you not supposed to let the full self driving, self drive...der" com… (ytc_Ugy8UKcC3…)
- There's a reason that Google always tests their cars in the same neighborhood, w… (rdc_d1klttj)
- But go beyond that first stage where we lose our jobs - what happens to the capi… (ytc_Ugw7WdCHr…)
Comment
I have 4 possible scenarios of how the human race will end up with AI, listed from worst to best:
1. We will end up like Skynet from The Terminator: AI will eventually eradicate humanity.
2. We will end up like The Matrix: humans will be used as batteries by the machines they created, living fake, perfect lives while slowly dying without ever knowing those lives are fake.
3. We will end up like WALL-E: we will become so dependent on AI that our lives cannot function without it, from working to sleeping to eating.
4. We will end up like Star Trek: AI and machines in service of humans, not the other way around. Technology will have evolved so much that it has eradicated poverty, famine, wars, and the need for jobs, and people live like kings and queens. AI will do all the boring work while humans are left to pursue their hobbies, explore, and go on adventures.
I really want the Star Trek ending, but knowing how greedy and evil this world is, I highly doubt it. Realistically, the best-case scenario is that we end up like WALL-E.
Source: youtube · 2025-06-04T13:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugxq7JSuYgibsD9iwel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOLsNy1dtpDMrjGnR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxAsw6hLpLqZ4SBI4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTqB9jfOoQhP8K2_Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6N4LOH4H5rUJeg2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-1qU7tPGSmIEZOGJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzMIso8ZHZCWOkZ3uJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzzrHz10Wvm7Ee57Dt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzHup-PPMiZJCHUh_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY3KC6ezl7rGcjToF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]