## Raw LLM Responses

Inspect the exact model output for any coded comment. Comments can be looked up by comment ID.

### Random samples
- "He lost every shred of credibility when he claimed Musk did not have a moral com…" (`ytc_UgzFu5zZp…`)
- "Is this discussion about how great Altman is and how righteous OpenAI is? Okay, …" (`ytc_UgzJxpASB…`)
- "They didn't feed in just two classifiers, they fed it many classifiers, but foun…" (`ytr_Ugwc04AuG…`)
- "Humans have made many advances but they are still weak and really stupid. The m…" (`ytc_UgypvK4zl…`)
- "brain washed people continue to think they are slaves and they need a job lol do…" (`ytc_UgxzMbJPg…`)
- "What if we sepatated AI into two functions: 1. Intellectual (like chat GPT) and …" (`ytc_Ugw04T-aL…`)
- "AI could perhaps be even more dangerous in the hands of a North Korea or Iran th…" (`ytc_UgzWvKNpL…`)
- "If the only way to access your content is to provide my ID, then I won't be acce…" (`rdc_n7f4xas`)
### Comment

Yudkowsky is used to defending his thesis that a future iteration of AI will kill us all, and has a full pack of analogies to make his point. For that reason he misses Wolfram's more interesting question of WHY the doom outcome is so certain. Wolfram already understands and accepts that human annihilation is a possibility, and even points out that in the universe, that's the natural state. However, he wants Yudkowsky to explain why he thinks absolute annihilation is the only possibility. Yudkowsky's weak analogies to the European colonization of the Americas don't seem to cut it. Wolfram should be given the Nobel Prize for Patience in this endeavor. The most fruitful section of the discussion is at 3'00 to 3'40 (approx). Interesting discussion. Would love to see Tim and Keith do a detailed review of it.

Source: youtube · Topic: AI Governance · Posted: 2024-11-13T22:1… · ♥ 4
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
### Raw LLM Response

```json
[
{"id":"ytc_UgwRvWP_k7v_jN9-Te14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyksdh6rn-4hBjfu214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxlTd1d2AkohR8lVSZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxF1_HmuOODIl8KiOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzTMs1seu-Hm2wg1tB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzck-R6lKxbvEb8M5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyyLzF6cJe301DdxjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyL07Rq-EVfO1ActR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz6Llf_yDF9Gc34V9B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzaIf0jFeodxvBJt2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
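The raw response is a JSON array with one record per comment, each carrying the same four coded dimensions shown in the table above. A minimal sketch of how such a batch could be validated before ingestion; the allowed values below are only those observed on this page (assumed, not the complete codebook), and `validate_batch` and the sample record are hypothetical:

```python
import json

# Category values observed in this page's coded output.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response and check every record's coded values."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']}: bad {dim}={rec[dim]!r}")
    return records

# Hypothetical single-record batch in the same shape as the response above.
sample = ('[{"id":"ytc_example","responsibility":"developer",'
          '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
print(len(validate_batch(sample)))  # prints 1
```

Rejecting out-of-vocabulary values at parse time catches the most common LLM coding failure (an invented category label) before it silently enters the dataset.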