## Raw LLM Responses

Inspect the exact model output for any coded comment.

### Look up by comment ID
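If you want to reproduce this lookup outside the UI, here is a minimal sketch of one way to index coded comments by ID, assuming the raw responses are stored one JSON batch per line in a JSONL file. The file name, storage layout, and function names are assumptions for illustration, not part of this tool:

```python
import json

def load_raw_responses(path: str) -> dict[str, dict]:
    """Index every coded comment by its ID.

    Assumes each line of the file is one JSON array of coding
    objects, shaped like the raw response shown further down.
    """
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            for record in json.loads(line):
                index[record["id"]] = record
    return index

# Example lookup. Truncated IDs as displayed in the sample list
# will not match; the full comment ID is required.
codes = load_raw_responses("raw_llm_responses.jsonl")  # assumed file name
print(codes.get("ytc_UgwcXdeld9kWdzcMwUp4AaABAg"))
```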
### Random samples

- "Thank you for covering this case! There were also Deep Fake rooms solely for kpo…" (`ytc_Ugx7jJL4y…`)
- "That would be great. We will be going back to the good old no AI days…" (`ytc_UgyMOSkoh…`)
- "The scoring system is funny... The athiest AI gives a stupid answer 79, and when…" (`ytc_Ugw4CP5Qa…`)
- "The threat is our society isn't currently capable of dealing with the amount of …" (`ytr_Ugzpm8Yi4…`)
- "Next episode: chatgpt converted to Christianity and is now an employee at the Bl…" (`ytc_UgzBNPN6V…`)
- "Manual programming: *writes bug; debugs bug; doesn't write bug again* Ai progra…" (`ytc_UgxdD8shT…`)
- "Nah, here's the real answer: people will, just like always, choose to use AI or …" (`ytc_Ugzw2wNiC…`)
- "Its not surprising at all, these people were literally told AI is sentient and a…" (`ytc_Ugy1mM5Vq…`)
### Comment
This video highlights something I talk about a lot: how technology amplifies existing decisions more than it invents new ones.
When powerful systems are introduced into workflows without clear guardrails, the real change isn’t the tech itself — it’s how people start to let patterns override principles. It’s easy to optimize for speed or convenience, but that doesn’t automatically align with quality, accountability, or long-term value.
The key insight for me is about intentional integration:
• Define why a system is being used before how it’s used.
• Keep humans not only in the loop but in charge of decisions, accountable, where judgment and context matter most.
• Make responsibility visible, not invisible.
Technology can scale impact, but without thoughtful governance it can also scale mistakes just as fast.
youtube · AI Governance · 2026-01-28T22:4…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
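For anyone implementing a similar pipeline, each coded record can be checked against a fixed codebook before it is accepted. A sketch in Python, assuming the label sets are exactly the ones visible in this section; the real codebook may define additional labels:

```python
# Label sets observed in this section; the full codebook may be larger.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"mixed", "fear", "approval", "resignation", "indifference"},
}

def validate_code(record: dict) -> list[str]:
    """Return a list of problems with one coding record, empty if valid."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems
```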
### Raw LLM Response
```json
[
  {"id":"ytc_UgwwkxYoHe_ZiM4_4GB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgySe2Pg7gd0o0NaooF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwcXdeld9kWdzcMwUp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyVh8yMD-vEahswZeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy7EeXwZ8PS-Kr-bax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwxd79TIWNje7OGdQV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz2zgARZYj_bm-Z4OZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyQRn6ZnjjXpiTMxtp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwK42cpeGUyM9-qujh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwC_PrKiGHJaKfyzjl4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
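Model output is not guaranteed to be valid JSON, so a pipeline like this one typically parses and checks each raw response before storing it. A sketch, under the assumption that each response is a single JSON array like the one above:

```python
import json

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response into coding records.

    Fails loudly on malformed output rather than silently
    coding the whole batch as missing.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"unparseable LLM response: {err}") from err
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding objects")
    return records

# Each record can then be checked with validate_code() from the
# sketch above before it is stored.
```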