Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @Panthers1521 There's zero reason for such a drastic step. I didn't allow my chi… (ytr_UgwshcQ9g…)
- Sam, I have never been so thankful for someone to realize what I’ve been saying … (ytc_Ugyl5Z2_I…)
- What i suggest Mr. Sanders - ALL POLITICIANS MUST BE REPLACED WITH AI CONSULTANT… (ytc_UgxZkFSq3…)
- Put me in the new google ai ive been needing a new body for so long and your ai … (ytc_UgwhSLy9R…)
- Is A.I. and the people who develop it more powerful than the God who created the… (ytc_UgzayHyV0…)
- Regarding job creation, every technology before this has created more jobs. The … (ytc_Ugx7nNZgK…)
- The Pentagon introduced a new Google Gemini-powered artificial intelligence plat… (rdc_nt8f1sz)
- It speaks of "chatgpt" as a 3rd person. It differentiates between gpt-3, 4o and … (ytc_UgzSvHhuo…)
Comment
27:00 Hank keeps mentioning wanting to teach AI to want to not destroy the world but I think he's trying to much to apply a moral human thinking to that idea. that command can be interpreted in a way that the intelligence should wipe out humanity because we are actively destroying the world and the one entity on this planet that can bring about the immediate destruction of the world and without us it would be safer. and when he brings up do we need to make it feel suffering, that then begs the question of how an entity like that would even want to continue to exist? why wouldn't it's response to be to destroy its self and all possible ways for it to be created again? the overall inherent flaw of true artificial intelligence is the idea that it could ever see the universe and it's self in a way humanity would consider rational.
youtube · AI Moral Status · 2025-10-31T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy352lDkj3E40ABTPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyqEnkkOba6Rc-0kkB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgykaBsAKWzANf78_nB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7PXWuFqtYSuAETC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5RvOiYN8A2YddYUJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2P55-9EZRxrm-s9R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjEaO7SUA096JPSxB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEX_FhsbfY0EuN3l14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgztzVvcq-E-XJa3_Jl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
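For working with these raw responses outside the viewer, here is a minimal sketch of parsing one response and indexing it by comment ID, matching the "Look up by comment ID" workflow above. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and their example values are taken from the response shown here; the `parse_raw_response` helper itself is a hypothetical illustration, not a documented API of this tool.

```python
import json
from dataclasses import dataclass

# Coding dimensions as they appear in the raw LLM response above.
@dataclass
class CodedComment:
    id: str
    responsibility: str  # e.g. "ai_itself", "developer", "company", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "none", "liability", "regulate"
    emotion: str         # e.g. "fear", "outrage", "resignation", "approval", "indifference"

def parse_raw_response(raw: str) -> dict[str, CodedComment]:
    """Parse one raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for lookup.

    Assumes the model returned valid JSON with exactly the fields
    shown above; a real pipeline would also handle malformed output.
    """
    records = json.loads(raw)
    return {rec["id"]: CodedComment(**rec) for rec in records}

# Usage with the first record of the response above (truncated for brevity):
raw = (
    '[{"id":"ytc_Ugy352lDkj3E40ABTPd4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
coded = parse_raw_response(raw)
print(coded["ytc_Ugy352lDkj3E40ABTPd4AaABAg"].emotion)  # -> "fear"
```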