Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples
- "Thank you for this post, as a Neurodivergent in India, there are social ch…" (rdc_n7u03or)
- "Even if I had that kind of money I would not waste it on something like that. Pe…" (ytc_Ugwape-M8…)
- "I'd say that AI is aligned with American goals. It lies, it threatens, it seeks …" (ytc_UgwuM18Hk…)
- "People are starting to fall in love with 'algorithm' chat bots and tons of peopl…" (ytr_UgyGg2VBO…)
- "If you wanna know if image is AI or not just look at the hands, AI is terrible w…" (ytc_UgztnDvuM…)
- "and what if you don't change that number by 0.0001%? What if you change that num…" (ytr_Ugy9GewVa…)
- "@rodrigoma1350 Because one is paying someone to make art. The other is using some…" (ytr_UgywnOU6f…)
- "@HusaPusa if AI becomes self aware and it's intelligence levels go beyond ours w…" (ytr_UgzLvJxw9…)
Comment
This video highlights a pattern I talk about in my books and videos: technology’s influence really shows up in how it changes what people trust and how they make decisions, not just in what it can do.
A powerful tool doesn’t automatically improve judgment. When systems are adopted without clear roles for human oversight and accountability, convenience can quietly become the default reason we delegate decisions. That’s when authority and responsibility drift — not because someone planned it, but because patterns shift faster than policies or understanding.
The real leverage isn’t the capability itself — it’s how we choose to integrate it: being intentional about why we use a tool, keeping humans in the loop where context and nuance matter, and making responsibility visible instead of implied.
Tools can accelerate performance, but they don’t replace the need for thoughtful evaluation and clear accountability.
youtube
2026-01-28T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyy1nYLJc15jDG9D-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz64FVGlPjraHkXAzR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyu-WAe06bjpTbEY294AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwX292oqdty_hULkMt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx28To6-X2eXLWOic54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx1cEQocrj0d462IzV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyvif_xn_yabsz0nI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwE-V0Ih7PyVoB7frR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgzOlWeVGAQW4bgu7p14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwZoWdhixF1zH2ETcd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
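The raw response is a JSON array of per-comment codings, so a lookup by comment ID reduces to parsing the array and indexing it by the `id` field. A minimal sketch (the two sample rows are copied from the response above; the `lookup` helper is illustrative, not part of the actual tool):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgwX292oqdty_hULkMt4AaABAg",
   "responsibility": "distributed", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyy1nYLJc15jDG9D-t4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codings[comment_id]

result = lookup("ytc_UgwX292oqdty_hULkMt4AaABAg")
print(result["responsibility"], result["emotion"])  # distributed fear
```

The first row matches the Coding Result table above (responsibility: distributed, reasoning: deontological, policy: liability, emotion: fear), which is how a coded comment's dimensions are traced back to the exact model output.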