Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Man, that's the sadest, most delusional, and most defeating comment from that AI…" (ytc_UgynR76ji…)
- "ai art has no soul (proceeds to show a man looking a train instead of his date) …" (ytc_UgxZYvYFu…)
- "The first one said “mr bombastic side eye bombastic criminals side eye offensive…" (ytc_UgzH8ytPO…)
- "10:25 I agree with all of your points. Except the college essays. I cannot write…" (ytc_Ugy8uH29C…)
- "i am sorry but his thought is obselete, right now ai can make their own, think o…" (ytc_UgxJY0Y-g…)
- "He is talking about p l a n t i r along with a i. P l a n t i r has been trackin…" (ytc_UgyHh9iMQ…)
- "I dont like ai art bc its the only natural talent I have, I suck at everything e…" (ytc_UgwHPkoY4…)
- "The best proof that AI isn’t even remotely there is the gaming. Gaming still has…" (ytc_UgyEhROkE…)
Comment
Robert miles is so underappreciated, and the alignment problem and proper understanding of it even more so. It's really great to see more focus on it lately, especially with a major focus being on how guaranteed a bad outcome is if we continue to ignore it. This video did an amazing job on that, both in making seem as serious as it is while presenting it as the fascinating problem that it is despite how grim it is. Solving the alignment problem would essentially be the culmination of a lot of unanswered questions in philosophy and as such should be something we should be devoting a many resources towards as we can.
It cannot be stated enough that a superintelligent AI would not only need the confinement and security measures of nuclear warheads, but of intelligent nuclear warheads that will actively try to break their own containment and launch themselves, and anyone could potentially be in possession of one if they simply acquire enough computing power to run one.
Source: youtube · Video: AI Moral Status · Posted: 2023-08-21T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytr_UgyQKVcU_TX3f-C9qC14AaABAg.9teIbM53Jx39tf6jBP5EZL","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugwl6zP57ypPqqbh5aN4AaABAg.9teGL19hLzG9tea5zQYSPe","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugzjiad5p60UKbWpfCV4AaABAg.9teDxSIXJqN9texR8QbguJ","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugzjiad5p60UKbWpfCV4AaABAg.9teDxSIXJqN9texs6PFs_y","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugzjiad5p60UKbWpfCV4AaABAg.9teDxSIXJqN9teyUjIYgnv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw8pVMDZ8MhE1Gyf8Z4AaABAg.9teD5dfBS3c9tffDFMGDbp","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgxhwYtCqF3Hu79kPcJ4AaABAg.9teBfFUsLcQ9teDXtCmbnc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugwc1ty0ImoEXOg2exZ4AaABAg.9teB1ZsU1AV9tfsWP2o892","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugwc1ty0ImoEXOg2exZ4AaABAg.9teB1ZsU1AV9tkCtQ4yYOZ","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw6xOWQvHU9u0Dpcwp4AaABAg.9te8e-LK7sU9tf7T40o_Ub","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]
```
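The raw response is a JSON array of records, one per comment, each carrying an `id` plus the four coded dimensions. A minimal parsing-and-validation sketch is shown below; the `ALLOWED` sets are inferred only from the values visible on this page (the real codebook may define more categories), and `parse_coding_response` is a hypothetical helper name, not part of any library used here.

```python
import json

# Allowed values per dimension — ASSUMED from the examples on this page;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Usage with a hypothetical one-record response:
raw = ('[{"id":"ytr_example1","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # -> regulate
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the codebook, so bad codings fail loudly instead of silently entering the dataset.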