Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "AI generated art excels at emulating various artistic styles, allowing creators …" (ytc_Ugx26oTIp…)
- "Haha, I love the reference! It's interesting to think about how AI and robots ar…" (ytr_UgyRmnurl…)
- "But I have a somewhat dilemma here. I am an IT grad, to meet the deadlines, I ne…" (ytc_UgxPs1Opp…)
- "Im an artist, Im using AI for certain things, and honestly fucking good. I'm all…" (rdc_jwx96r7)
- "People will eventually be forced to let AI control national defense systems to b…" (ytc_UgwqyukST…)
- "Really what is in the fun of watching 👀 a robot beat up a human. This stuff has…" (ytc_UgwMWGxTR…)
- "okay, got it. this video is not a critique of AI assistants, but a more a "yeah,…" (ytc_UgxvpxMAN…)
- "so he is the reason behind the joke google presented as an AI. Nice one fucho. I…" (ytc_UgzHVGmw3…)
Comment
I feel like Eliezer is a brilliant man but does a relatively poor job as a communicator. He discusses a lot of interesting ideas very well, but he gets lost a bit in the details and in the specific nuances meant to make sure he is not misunderstood. He needs to do a better job of directly addressing the Alignment problem: why AI will destroy humanity if Alignment is not solved, why our path is hurtling towards this situation, etc.
Most of the debate was not related to alignment or AI risk in any way, and it's a bit frustrating because if you read his work he does an exceptional job of boiling down these ideas, but he can't seem to do it in a debate. As communicators go, I feel like Rob Miles and Connor Leahy do a much better job.
Source: youtube · Topic: AI Governance · Timestamp: 2024-12-01T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
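
For concreteness, here is a minimal sketch of validating one coded record against this schema. The allowed value sets are inferred only from the records shown on this page; the project's full codebook may define additional codes.

```python
# Allowed values per dimension, inferred from the records on this page;
# the actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "unclear", "none"},
    "emotion": {"fear", "mixed", "approval", "outrage",
                "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown above passes the check.
print(validate({"responsibility": "ai_itself", "reasoning": "consequentialist",
                "policy": "unclear", "emotion": "mixed"}))  # -> []
```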
Raw LLM Response
[
{"id":"ytc_Ugw-qIiIwV-YymSHgvd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypQWCF9VagQJtuPv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgypPmjfzq25ijOSz0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw870D6MmUSUZIxAxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-as17KTJwqqtzbm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzsZzaiyXkzOY2521F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyNsqqRfuqgl2VxHxx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyeG4LdxoQ9X8Zc8NF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzv4s8QRbEx2s1BexJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwQGXjt0iKYAmC7jHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
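
A minimal sketch of how a raw response like the one above can be parsed and indexed to support the comment-ID lookup. The field names match the JSON shown; the sample string and the function name are illustrative.

```python
import json

def index_coded_comments(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for O(1) lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# In practice this would be the full model output shown above.
raw_response = """[
  {"id": "ytc_Ugw-qIiIwV-YymSHgvd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

coded = index_coded_comments(raw_response)
print(coded["ytc_Ugw-qIiIwV-YymSHgvd4AaABAg"]["policy"])  # -> regulate
```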