Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Elias Velin really nailed it when he said automation doesn’t erase work, it just…" (ytr_UgzGfb45t…)
- "Yeah perfect reason to despise AI. She just proved it’s only going to help busin…" (ytc_UgxQr1DEs…)
- "I have a theory in my head that the interface to the web goes through a cycle of…" (rdc_nualny8)
- "Not convinced with AI but I have something similar that I "Fully control." Inte…" (ytc_Ugx5b3yrM…)
- "I feel like people are becoming dumber and dumber with better technology. You do…" (ytc_UgxW9oIc5…)
- "You pointed out cars going through reds when they were ambers, you complained th…" (ytc_UgzV7Qdlc…)
- "I will always prefer human writers over AI. I will always prefer AI over woke ac…" (ytc_Ugwc0zQBu…)
- "I think you make a really good point about AI and freeing up time. AI as a too…" (ytc_UgzvAcLt9…)
Comment
It's interesting what you talk about at about 21 mins. AI is learning from us. We tend to see ourselves as inherently good and right. Yet we all know every one of us has fallen short. All the lies, deception, slander, cheating, stealing etc. AI is going to learn that from us by studying us. We enslave, we seek to dominate others. So why wouldn't AI?
I also wonder, when we speak of "consciousness". Perhaps it would be more helpful to discuss "conscience". I don't believe AI can ever be a free, moral being like humans. Sure it can be super-intelligent and our physical movements can be automated by a robot. But there is no self accountable soul in an AI. No link to God.
youtube · AI Governance · 2025-12-30T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgydfDnWByRMy8eOgqB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz79UJE8A2eCkDNfsd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyF5dTm6IrmP2F4ysJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwvMHn5CdU2mJ72r0l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwjy4xlBkI2SXv6YmR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHNZFA9hsMwEy431B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzzjoGRxjjdc4n3GL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyD4X4itzgrIrquPiB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxg2M4bZRhr8CHb6MB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzo9wl5FiFUURlW3uF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
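The raw response above is a JSON array of per-comment codes across four closed dimensions. A minimal sketch of how such a response could be parsed and checked before storing, assuming the allowed values are exactly those seen in the samples here (the real codebook may define more, and the `SCHEMA` dict and `validate` helper are illustrative, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the sample output above;
# the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "mixed"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM response and reject rows with missing or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: bad {dim!r} value {row.get(dim)!r}"
                )
    return rows

# Example with a single (hypothetical) coded comment:
raw = ('[{"id":"ytc_example","responsibility":"user","reasoning":"unclear",'
       '"policy":"none","emotion":"fear"}]')
print(len(validate(raw)))  # → 1
```

Validating against a fixed vocabulary like this catches the most common LLM-coder failure, an off-schema label, before it silently corrupts downstream tallies.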