Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID (a minimal lookup sketch follows the samples), or pick one of the random samples below to inspect.
- "Our doctors are now so retarded without AI to the point they literally do get th…" (ytc_Ugzs98umf…)
- "Also, I’d like to add that artist should be able to find jobs producing art feed…" (ytr_UgwHfTkhK…)
- "Open AI can have all my secrets, as long I'm sane. Do you think talking to a str…" (ytc_Ugy0S8pdU…)
- "6:17 Wow, MS_14 is just acknowledging the gripe artists have instead of helping …" (ytc_UgxlFulgS…)
- "As long as you don't give full access of weapons to AI, everything will be alrig…" (ytc_Ugz4QWD1_…)
- "It is a lot of fun to watch Tesla fans still coping by saying, «I’m so excited t…" (ytc_Ugy3aSNI2…)
- "27:06 Alex: “ChatGPT, you’re actually being interviewed on my YouTube channel ri…" (ytc_Ugx_-C0lU…)
- "My dad has seen my screen countless times when I'm on character AI, and all he s…" (ytc_UgwdFCzE2…)
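The ID lookup above boils down to scanning the stored coding output for a matching `id` field. A minimal sketch, assuming the coded records are kept as a JSON array shaped like the raw LLM response shown further down; the file name `coded_comments.json` is hypothetical:

```python
import json

def lookup_coded_comment(path: str, comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of {"id": ..., ...} objects
    for record in records:
        if record["id"] == comment_id:
            return record
    return None

# Example: fetch the coding for one of the sampled comments.
# result = lookup_coded_comment("coded_comments.json", "ytc_UgyQKkmL0KD52WMl6it4AaABAg")
```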
Comment
For the sake of argument, let's say Hinton is right. Does anybody really think that humans are gonna do anything other than let capitalism and greed drive the ship? We haven't done anything about climate change or war and did a really bad job with Covid. And climate change, war, and Covid don't provide major benefits for society. With AI, there is a lot of upside as well as possible catastrophic downside risk. Because there is more upside than downside for AI, does this make it more or less likely that the world will come together to prevent the dystopian scenarios? That's a rhetorical question. Hinton thinks a catastrophic scenario has 10 to 20% likelihood. Doesn't that mean that there is a 80 to 90% chance of a positive outcome? I'm not giving humanity without AI an 80% chance of a positive outcome. I'd say humans have about a 50% chance of arriving at a catastrophic scenario and only a 10% chance of reaching something that is more utopian. So, I'm willing to take my chances with AI.
Platform: youtube · Topic: AI Governance · Date: 2025-06-17T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
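Each record carries the comment ID plus the four coding dimensions; the "Coded at" timestamp appears only in this table and not in the raw model output below, so it is presumably stamped by the pipeline rather than returned by the model. A minimal sketch of the record shape as a Python type, listing only the values observed on this page (the full codebook may define more):

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coded record, mirroring the Coding Result table and the raw JSON."""
    id: str              # e.g. "ytc_UgyQKkmL0KD52WMl6it4AaABAg"
    responsibility: str  # observed: user, company, developer, government, none
    reasoning: str       # observed: consequentialist, deontological, virtue, mixed, unclear
    policy: str          # observed: regulate, liability, ban, none
    emotion: str         # observed: fear, outrage, approval, indifference, mixed
```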
Raw LLM Response
```json
[
  {"id":"ytc_UgxfElQUUJVqYyr3GcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzIk7885sGjlCvUH214AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQoqHtO5fYyIrFQo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXpf1_N3uyU9LkjPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGnqcg2l3mg-NU7H14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyc-APfwhZ7m0d0kbF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz-om2P64X4YBYLYmV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwW0wlgRG8I6PHtYRp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxMqA8e27ImT3G6Pmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQKkmL0KD52WMl6it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
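The model returns each batch as a bare JSON array, so a downstream step has to parse it and screen out malformed records or out-of-vocabulary codes before they surface in a Coding Result table. A minimal validation sketch; the allowed-value sets below are only the values observed in this batch, not the project's full codebook:

```python
import json

# Allowed values as observed in the batch above; an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_raw_response(text: str) -> list[dict]:
    """Parse one raw LLM response and keep only records with known codes."""
    records = json.loads(text)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue  # skip records without a recognizable comment ID
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid
```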