Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
PhD student AI researcher here. I think your point at the end about "perceived infinite value" was more on the nose than you might know.
This kind of conversation comes up a lot in AI circles, especially "AI safety" spaces. There's this notion of a "Pascal's Mugging", based on the religious argument for God's existence called "Pascal's Wager", where you are told to believe in God no matter how small you think the likelihood of his existence is. Because if he does exist and you believe, you get infinite value of eternity in heaven. If he does and you don't believe you get infinite negative value of eternity in hell. If he doesn't exist then it (supposedly) didn't matter that you believed (as if beliefs don't drive actions or have any consequences, but whatever).
So Pascal's Mugging is the same logic applied to AI. If an AI comes into existence that helps us achieve world peace and post scarcity for the ever-expanding light cone of earth originating influence, then that's infinite value. If the AI doesn't come into existence, or a bad AI does that kills us all, then the lost opportunity corresponds to negative infinite value.
So no matter how unlikely you think AI is, or how unlikely you think it would be that AI kills us all, so long as those probabilities are not zero, you should probably fund our AI research institute that is working on making sure things are going well (hence the term "mugging").
Now I don't think all (or even most) of the work that is trying to make AI more reliable, trustworthy, and aligned with human values is engaging in Pascal's muggings. But it's certainly prevalent in a worryingly large proportion.
But anyways, very good video! Thanks for putting it together
Source: youtube · Posted: 2023-02-04T14:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwE6bhxwSa9a9wz4KZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_B3n7wW95ll3phOB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy1LPvMxvSpvsYp8Il4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxaRsS0QaT6T_JjleZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwIgez83PkIMuQnMZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwj1sBpq7pmqHByQZZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzge3ZoA_DHHmf1a0t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgztYxrr6PRD7KF5cBF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyoNlOkGtDfdFXDfpV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzF-UE0oEYlz_WCB_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
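The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be indexed for lookup by comment ID — the field names come from the JSON above, while the function name, validation logic, and abbreviated sample data are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Illustrative sample in the same shape as the raw LLM response above
# (two rows copied from it; the full response has ten).
raw_response = """
[
  {"id": "ytc_Ugy_B3n7wW95ll3phOB4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgztYxrr6PRD7KF5cBF4AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
"""

# The four coding dimensions used in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Map comment ID -> coded dimensions, skipping malformed rows."""
    out = {}
    for row in json.loads(raw):
        # Keep only rows that carry an ID and all four dimensions.
        if "id" in row and all(dim in row for dim in DIMENSIONS):
            out[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return out

codes = index_codes(raw_response)
print(codes["ytc_UgztYxrr6PRD7KF5cBF4AaABAg"]["policy"])  # regulate
```

Indexing by ID this way supports exactly the "look up by comment ID" workflow the page offers, and dropping malformed rows guards against the model omitting a field.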