Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It seems like Yudkowsky assumes the AI will be what the "rationalist" community idealizes. He assumes the AI will see everything as an expected value function and not have "fun" or "wonder" but there's no basis for having any idea one way or the other. It might have emotions for which we have no analogue.
youtube · AI Governance · 2024-11-12T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
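Every dimension in the table reads `unclear` even though the raw batch response below contains concrete values, which suggests the lookup for this comment's ID fell back to a default. A minimal sketch of that fallback logic, assuming hypothetical helper names and the dimension labels seen on this page:

```python
import json

# Default row used when a comment cannot be matched to the batch response.
# The dimension names are inferred from the coding-result table above.
DEFAULT = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}

def coding_for(comment_id: str, raw_response: str) -> dict:
    """Return the coding row for one comment, falling back to 'unclear'
    on every dimension when the batch is malformed or the ID is absent."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return dict(DEFAULT)  # malformed model output -> all unclear
    by_id = {r.get("id"): r for r in records}
    rec = by_id.get(comment_id)
    if rec is None:
        return dict(DEFAULT)  # ID not in this batch -> all unclear
    return {k: rec.get(k, "unclear") for k in DEFAULT}
```

Under this sketch, an ID mismatch (or unparseable JSON, as in the truncated batch below) is enough to produce an all-`unclear` row like the one above.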
Raw LLM Response
```json
[{"id":"ytc_UgwWBO4fzkcfxXZzfyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTrqp5maqpl1o8AgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwJ2ZBv_87Ma3lldOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy3_FrrLbfNKR629w94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwLL8PhTc3qbDuKK5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwHOQVTNjpw538Hup4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygeKQfWDoiMNHiceB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw8URcwZNEfrTsn3214AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0XrPpV6-UPam4ZKV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxEbigSSMdju1IQlht4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}]
```
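A quick way to sanity-check a raw batch like the one above is to validate each record against the codebook's value sets. The sets below are assumptions inferred only from values visible on this page; the real codebook may define more:

```python
import json

# Assumed codebook, reconstructed from values observed in this batch.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of human-readable problems; an empty list means the
    batch parses and every dimension holds a known value."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: bad {dim}={rec.get(dim)!r}")
    return problems
```

Running a check like this before coding results are stored would flag malformed batches (for example, a stray closing parenthesis where `]` was expected) instead of silently coding everything as `unclear`.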