Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgxCn0GeW…: "This is based on the presumption that AI can innovate and create something new, …"
- ytc_Ugx1KV3EE…: "I'm fine with self-driving cars... as long as I'm not in the car... or on the ro…"
- ytc_UgxN8k4J7…: "A program can only be as good as its programmers. Any code AI or not has the pot…"
- ytc_UgzTAkxxY…: "the only way humanity will do something about the AI threat is by getting lucky …"
- ytc_Ugw9nhCsY…: "Self tought artists are prone to fall into that, naive people is more likely to …"
- ytc_UgwDZn5dT…: "Remainding jobs will be physical. I run a furniture removal service. I dont see…"
- ytr_UgxtA1ptG…: "Wouldn't ai slop prompters be more replaceable? Especially since anyone can put …"
- ytr_Ugzj-aQhV…: "2:15, no I’m confident that AI’s will NEVER be human and thus they will NEVER cr…"
Comment
They had me right up into the Simulation argument, then the train came off the rails. Their argument is one of conflicting outcomes; live forever, but AI becoming self-aware means you can't earn an income? Well isn't that special? No one in government, irrespective of party or affiliation, is smart enough to figure out how to create a sustainable society where income is not important, and the private sector can't allow it, because how will they derive and drive wealth? Another interesting thing about Dr Yampolskiy is how he tilts toward a religious solution, or at least touches upon it in a non-destructive or detrimental way. "All we have to do to live forever is live long enough to cure the disease that is death." Or as he suggests (the religious solution), is to die from this simulation, and live forever in another attempt.
youtube · AI Governance · 2026-03-30T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6O0jKnc-aFVb2cRN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylxtVI4V-sY4d2tht4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxvbMzY4drb4b8h-aR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzFNh0IiPUOY-OFGgt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx4C3yTpmNYCI2Er6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJHT5RBWeqvKxss8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxusFNv7Cw79Vlnwzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxeWRsAB03c6aEAtyd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwBwcGPU4x0eRukGCp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyT00TUtBYCbBsaGdJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
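Before ingesting a batch response like the one above into the results table, it helps to validate each record against the coding schema. Below is a minimal Python sketch; the allowed value sets are inferred only from the examples shown on this page (not from an authoritative codebook), so treat them as assumptions to be extended.

```python
import json

# Allowed labels per dimension — inferred from the sample output above,
# NOT an official schema; extend these sets as the codebook requires.
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        # Comment IDs on this page use ytc_/ytr_ prefixes (comment vs. reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical one-record batch for illustration:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"fear"}]')
print(len(parse_batch(raw)))  # 1
```

Failing fast on an out-of-vocabulary label is deliberate: it surfaces model drift (e.g. the LLM inventing a new emotion label) before bad rows reach the coded dataset.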