Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I find it hard to trust anyone with vocal fry levels as high as the intelligence…
ytc_UgxMAzXW1…
AI is programmed by humans. They are all stupid and can't see ahead. So there is…
ytc_UgxeQl1Hq…
Can someone tell me why AI would be dangerous? Everyone is agreeing without ment…
ytc_UgwlINjXv…
Do not use this product. They are giving the Pentagon the access to their AI. To…
ytc_UgxzSxj-Z…
We understand your concerns. At AITube, we aim to explore AI's capabilities and …
ytr_UgyogPK2H…
world government doesn't care, they'll keep building & upgrating AI so it can en…
ytc_UgywIj0U1…
i like to tell them that they are AI and give them an existential crisis! :D…
ytc_Ugyqgrd1h…
Prompting AI is a skill. A very niece and specific skill that has little to do w…
ytc_UgxePTGg6…
Comment
I’m just so confused. If Elon feels this way about AI, why has he devoted so much of himself to becoming one of the most powerful driving forces behind the advancement of all robotic tech?
Imagine you’re holding the first bomb ever invented. “Oh geez,” You say to yourself. “This is cool, but definitely very dangerous… I could blow the whole block to bits.
Let me fuck with it ‘til I come up with a nuclear version that can destroy the whole world, then detonate the damn thing and spend our few remaining moments of life monologuing about the dangers of weapons of mass destruction. 🙄
Bottom line, all AI really has to do to take us down is target TECH itself. Damn near everything is done and/ or controlled online now… it could easily interfere and fuck everything up.
Platform: youtube · Topic: AI Governance · Fetched: 2025-03-27T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzlwvTTM_sqQAo9eOh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxGHFYNHXHkob4_OKV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-kIDgHK9P0nQH_PV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZ-pwxp_eIwJx5_gZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyzRbUjpuTztAYM7wN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgytZTTiK0hxkd4oIQN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxmKIJ5v7T6wvzkuEN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyFJqmJqxzuERkid5Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyZvY4_P1PyNB5wvFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVoyRkkDQ0tVM-5xR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
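The raw response is a plain JSON array, one object per coded comment. A minimal sketch of consuming it, assuming Python and using an illustrative one-element payload copied from the sample above (the allowed category values are inferred from the entries shown and may not be exhaustive):

```python
import json

# Illustrative raw model output, same shape as the response above.
raw = """
[
  {"id": "ytc_UgytZTTiK0hxkd4oIQN4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "mixed"}
]
"""

# Category values observed in the sample responses; assumed, not a full schema.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

codings = json.loads(raw)

# Validate each record, then index by comment ID for lookup.
for row in codings:
    for dim, values in ALLOWED.items():
        assert row[dim] in values, f"{row['id']}: bad {dim} value {row[dim]!r}"

by_id = {row["id"]: row for row in codings}
record = by_id["ytc_UgytZTTiK0hxkd4oIQN4AaABAg"]
print(record["policy"])  # liability
```

Indexing by ID is what makes the comment-ID lookup above cheap: one pass to build the dict, then constant-time retrieval per comment.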