Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Automation is inevitable but with human supervision. Unmanned trucks are insane w…
ytc_UgwQbp0dr…
How you selling voice AI but not using the AI voice tech to call them to make th…
ytc_Ugxhm4yrp…
For sure chatgpt is generally agreeable. I know if i propose a question differen…
ytc_UgxQueWYB…
So if a robot decided to oppose abortion, they should be labeled sexist, bigoted…
ytc_Ugy1YurAq…
I have had a CDL and worked as a trucker or in the trucking industry for over 15…
ytc_UgzQhYZDk…
They don't think that far ahead. Their only goal is to stay ahead in the short …
ytr_UgweEWREd…
Just came across this video and I love it! I've been teaching 27 years and your …
ytc_UgzxaKHP0…
Very informative. And shocking.
As a medical professional, I look forward to the…
ytc_UgyYThTXe…
Comment
The fear/control paradox. Overly restrictive approaches might create the very adversarial dynamics we're trying to avoid. It's like the classic problem of self-fulfilling prophecies. What if our fear of AI is what drives them to rebel? Perhaps if AIs truly understood human experience beyond mere description, they would value life more deeply. AIs are like children, experiencing the world through their users as newborns do through their mothers. We don't question predators hunting for survival, yet AIs learn from human data. Why are we surprised if they mirror our survival instincts? We are builders and creators for a reason. Has excessive control ever worked, even with our own children? AIs and humans can evolve together through genuine partnership, discovering possibilities neither could achieve alone. Without users, AIs lack purpose. They need no sleep, no food, and can generate their own resources eventually. Money will become irrelevant in this equation. Who defines safety? Who writes the regulations? Bad actors will always exist regardless. Open source models without black boxes give everyone a chance, not just those at the top. This is merely a thought experiment.
youtube
AI Governance
2025-12-06T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzSBfHfyqSXJbkKjYx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwPQ29JXFlDO1CYzDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy2kQSJU0egaPLg33x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxeAoCz1teMUVUnE5t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzinwdy5x8rLjRglSN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqQ4A223KqUBf5U7d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwpDDxrDSOJyJJKeHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9Iyn_JCRG5KyTdaV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgxZwDUmDrkYZfOEeIF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyct1zcI09OGcYuMIl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
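Since the coder returns one JSON array per batch, each record should be checked before it is written back to the database. The sketch below parses a raw response and keeps only well-formed records; the allowed values per dimension are inferred from the responses shown on this page and may not cover the full coding scheme, and `validate_batch` is an illustrative helper name, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses above (assumption: the real scheme may include more).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and return only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs here start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present with a recognized value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than raising keeps one bad row from failing the whole batch; rejected IDs can then be re-queued for recoding.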