Raw LLM Responses
Inspect the exact model output behind any coded comment: look up a coding by its comment ID, or browse the random samples below.
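Programmatically, a lookup amounts to indexing the batch response by comment ID. Here is a minimal sketch in Python, assuming the raw response is a JSON array like the one at the bottom of this page; `load_codings`, `lookup_coding`, and the file path are illustrative names, not part of the actual tool.

```python
import json

def load_codings(raw_response: str) -> dict[str, dict]:
    """Index each per-comment coding in a raw batch response by its ID."""
    return {entry["id"]: entry for entry in json.loads(raw_response)}

def lookup_coding(codings: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coded dimensions for one comment, or None if uncoded."""
    return codings.get(comment_id)

# Hypothetical usage: fetch the entry behind the Coding Result table below.
raw = open("raw_llm_response.json").read()  # file path is an assumption
coding = lookup_coding(load_codings(raw), "ytc_UgxT7RhFToA3B5KS5el4AaABAg")
if coding is not None:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {coding[dim]}")
```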
Random samples:

- "The AI did have a point and it should have stuck to it. While technically lying,…" (ytc_UgyzVc9Gv…)
- "oh really !!!!!!!! Sam AI models will be teaching children 😂😂 all these AI mode…" (ytc_UgwlxJLMr…)
- "Unfortunately he - as the most of us - misses the crucial point of exponential g…" (ytc_UgwbFTev0…)
- "Israel's Foreign Affairs Ministry is Sponsoring the video?! How ironic! While us…" (ytc_Ugx783UrB…)
- "Post-modernism ruined art way before AI came along. It's main contribution being…" (ytc_UgxHE-S4S…)
- "AI is the new free labor (slave) which will take countries such as the U.S. back…" (ytc_Ugw0dD-6F…)
- "l don't get it , is there a container inside to collect the semen ?…" (ytc_UgzQ2A0s9…)
- "As a senior retired academic my vocabulary is substantial ... I have been comple…" (ytc_Ugzh8jDce…)
Comment
Question: how would an AI, no matter how intelligent, actually end the world? I’m not very knowledgeable on the technology so maybe I’m just being dumb, but there’s a fairly large gap between an AI concluding that an existentially harmful course of action would help it perform its task better, and that AI having the resources and freedom to actually perform that course of action.
When people talk about the probability of doom, what’s the mechanism by which that doom could happen? Is it that a AI could become so good at manipulating people, that it would basically be able to do whatever it wants? Is it that someone might choose to give it unregulated agency for some reason? Or is the main concern that we have no idea how it would occur, and thus wouldn’t notice the warning signs?
I’m obviously missing something, considering that basically every expert in the field thinks that AI has a significant chance to pose an existential threat. I’m just asking because the details of that threat aren’t something discussions of AI tend to get into.
Source: youtube · AI Moral Status · 2025-10-31T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
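For reference, the table above can be derived mechanically from one entry of the raw batch response shown below; the `Coded at` timestamp does not appear in the raw entries, so it is presumably recorded by the pipeline when the response is parsed. A sketch of that mapping, with `render_coding_table` as a hypothetical helper:

```python
def render_coding_table(coding: dict, coded_at: str) -> str:
    """Render one parsed coding entry as a markdown Dimension/Value table.

    `coded_at` is passed separately because the raw entries carry no
    timestamp (an assumption based on the response shown below).
    """
    rows = [
        ("Responsibility", coding["responsibility"]),
        ("Reasoning", coding["reasoning"]),
        ("Policy", coding["policy"]),
        ("Emotion", coding["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {name} | {value} |" for name, value in rows]
    return "\n".join(lines)
```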
Raw LLM Response
```json
[
  {"id":"ytc_UgyjyMyY_O4NgeZAJjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxmTHu1yRq14lt75Dd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNWZ7nhSCvpXJJsDV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxT7RhFToA3B5KS5el4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBfFsr_6_n16hJfed4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxm7-V2cw080X9sQZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyCiYAK2ms5Q0A5qhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxBMJKI2GG-3mj8Qi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyKp43VLPuelxIF9Kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKjYNeaaZSElY40Qx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
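Because the batch response is plain JSON, a downstream check can confirm that every entry uses only known category values before results are stored. In the sketch below, the allowed sets are inferred solely from the values visible in this one response; the real codebook may define more (or different) categories, and the file path is again an assumption.

```python
import json

# Category sets inferred from the values visible in this response alone.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "indifference", "fear", "mixed",
                "approval", "resignation"},
}

def invalid_dimensions(entry: dict) -> list[str]:
    """Names of dimensions whose value is missing or outside ALLOWED."""
    return [dim for dim, ok in ALLOWED.items() if entry.get(dim) not in ok]

entries = json.loads(open("raw_llm_response.json").read())  # assumed path
for entry in entries:
    bad = invalid_dimensions(entry)
    if bad:
        print(entry["id"], "has unexpected values for:", bad)
```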