Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxGyVzJl… · "I've said this before, but it's worth saying again. 10 years ago, we could progr…"
- ytc_UgxzwGSwv… · "Then there will be global income for all.. it is okay we should not be worried a…"
- ytc_UgwpxpaOP… · "I partly think these tech freaks want LLMs to be real because they're lonely los…"
- ytr_UgwOsCfFQ… · "If you're using AI to do it for you, it's not your story, and it's not your "art…"
- ytr_Ugxc_If0q… · "if I do pay an animator for the final version, it would still be hated by the ar…"
- ytc_Ugykwdt4F… · "😂😂 this will never work u men already scared of having handicaps if u want a …"
- ytc_Ugz4kSgLW… · "Hey I'm on the austim spectrum also. Discovered this only 6 years ago. I'll chec…"
- ytc_UgyAE92RB… · "AI, which honestly we should stop calling it AI cause its not, is a tool. When u…"
Comment
@ParameterGrenze He's talking about how Elizier always assumes the worst outcome from a complex dynamic and almost unpredictable series of future events. It could just as easily result in something as good as eternal life.
Eliziers main problem is that he imagines what ifs and appeals to emotion constantly. I've never seen anyone call him on his unsubstantiated assertions. Thomas here is saying any argument Yud uses could be equally applied to the opposite of what he says. for example: "What if it kills us all?" "what if it saves us all?" "well it could kill us all." "Yes but it could also save us all."
He adds a LOT of implied logic, jumps in logic, and numerous fallacies. Which results in everyone trying to address him as being stuck in a reactionary cycle following his nonsense. For example: "If its not coherence then its not doing stuff and openai throws it out then builds an ai that is doing stuff and is more profitable. until everybody is dead." yes of course a company will make a product that works, but WHY would it automatically be doing that until everyone is dead? is it killing everyone? is that THE necessary goal of it? is he just saying it will do "stuff" until people pass on naturally? well he certainly appears to be suggesting it kills everyone and that is THE certain outcome. which it isn't. but wolfram when at the heart of it, the assumed death cult. the worst case which Elizier made a name for himself over. Eliezer dismissed that and went after openai>anthropic>meta and on the reactions go. They cut there and who knows what was said. they move on without addressing WHY he assumes the worst out of all of that.
youtube · AI Governance · 2024-11-12T08:2… · ♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgxWbWU8HnwXTGqsyxd4AaABAg.AAiYeR1UICtAAimhHmYdLd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxWbWU8HnwXTGqsyxd4AaABAg.AAiYeR1UICtAAjRPne2vJB","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxWbWU8HnwXTGqsyxd4AaABAg.AAiYeR1UICtAAjgxLAap2x","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw65QRNZqXybNcskx94AaABAg.AAiXbYlEStfAAmmuu4wRqq","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugw65QRNZqXybNcskx94AaABAg.AAiXbYlEStfAAokZRx4eik","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw65QRNZqXybNcskx94AaABAg.AAiXbYlEStfAAqA9Jy7zvA","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxJUObSq16yHsAotv54AaABAg.AAiV57RtZaaAAiWFsL8hXk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugz_o_4UNQo_GhkZI294AaABAg.AAiUrQt1I1zAAidX4Bf-1N","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxWZtuw753_fj8wEad4AaABAg.AAiRc3k91HjAAjBSmlsX0M","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwqfDGCzxp8g181J_54AaABAg.AAiNXtAPvrsAArCz1Ewg7J","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
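The raw response above is a JSON array of coding records, one per comment ID, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed, validated, and indexed for the "Look up by comment ID" view — the allowed-value sets below are inferred only from the samples on this page (the real code book may define more categories), and `parse_response` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the sample records above.
# Assumption: the actual code book may contain additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "fear", "indifference", "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Reject records whose values fall outside the (inferred) code book.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Look up a single coded comment by its ID (toy ID for illustration).
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"outrage"}]')
coded = parse_response(raw)
print(coded["ytr_example"]["emotion"])  # outrage
```

Validating against the code book before indexing catches the common failure mode where the model invents a label outside the schema, so bad records surface at ingestion rather than in downstream analysis.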