Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Looks like a science fiction fan wanting AI to be sentient. Truth is, we don't l…" (ytc_UgyRrORJG…)
- "Always remember: if the food needed for our labour is more expensive than the el…" (ytc_UgySNtCPP…)
- "The reason is pretty clear, LLM producers and chip stocks had extremely high mul…" (rdc_m9fur3b)
- "These things are not speculation. You said there's a risk that AI values would …" (ytc_UgzQF0MxD…)
- "Can AI do household work for me? That seems to be the only remaining limitation.…" (ytc_UgwfN2p4K…)
- "if she knew it was due to training set and not bias she would not make this talk…" (ytr_UgyCC014P…)
- "I like to find inspiration to draw in abstraction and to create this abstraction…" (ytc_UgwVXCigI…)
- "What is the energy efficiency expensed by AI as opposed to the energy expensed b…" (ytc_UgzV3Lp-4…)
Comment
I still don’t think AI, on its own… will destroy the world. The fear comes from a flawed human acting on evil designs using AI to control other humans.
I feel just like computers… garbage in, garbage out. Only this time it’s evil in, evil out. 😰. I don’t believe you can put an instinct into AI. Without the “instinct” to survive… what would be AI’s drive to take over and destroy the human race. That honestly sounds like humans attributing human pride and desires into a robot. While we flirt with this in Star Wars… it doesn’t feel real to me
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted at | 2025-11-19T21:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwIShsqcD7dcQBvGgl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugww1DAAc2BvSJZqlzt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzaOXESeN0B7NVZYeh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgygW72jSt5Ymn0JP-d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVJwpA0bx50wm4AYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwx49GKJjItl65s_VB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYD3lM6h5wRimt9n94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx2RRQqu3OjLDABX-F4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxXaVKqF9qdVg-nzHV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzOnUCN6AL8KXNpQax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
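A raw response like the one above can be parsed and sanity-checked before the codings are stored. The following is a minimal sketch, assuming the allowed category values are exactly those that appear in the responses shown here (the actual codebook may define additional categories), and `validate_codings` is a hypothetical helper, not part of any real pipeline:

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the real codebook may include more categories (assumption).
ALLOWED = {
    "responsibility": {"user", "developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension holds an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records with unknown category values or missing IDs are silently dropped here; a production coder would more likely log or re-queue them for a retry prompt.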