Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Ikr? Its so refreshing after being let down by so many other artist defending it…
ytc_Ugww_ilR1…
There you go guys. Using AI has created your dependency on it. Next you'll be t…
ytr_UgwPBjMPU…
It’s like nobody ever saw Terminator, if AI takes us out like they did on judgem…
ytc_UgzkIP76b…
The art belongs to the A.I 💀💀💀 ask the A.I what it wants to do with it. It’s cap…
ytc_Ugx36HHhG…
Get ready. People should know by now that AI is linked to the mark of the beast …
ytc_UgzVzAYJd…
The IDF is already using AI for eaging war. Result: 40k dead and only 10% were c…
ytc_UgzdrUqP4…
First of all Graphic Design doesn’t mean just images, and this shows the actual …
ytc_UgyqlrSru…
If you notice, Ai is a cut seed of things and perfect as possible (or imperfect)…
ytc_Ugx8LyItS…
Comment
This video highlights something I emphasize in my books and videos: the real shift isn’t just in what these systems can generate, but in how we decide to use them day to day. Technology becomes impactful when it changes patterns of work, judgment, and responsibility — not just when outputs look impressive.
A powerful system doesn’t automatically improve decisions. What matters is whether humans stay engaged with context, remain clear about goals, and hold accountability for choices. If convenience becomes the default justification for delegation, authority drifts before anyone updates policy or process.
Meaningful progress comes from intentional integration — defining why a tool is being used, keeping oversight where nuance matters, and making responsibility visible rather than invisible. Tools can scale performance, but they can also scale mistakes if we don’t guard where judgment belongs.
youtube
2026-01-28T22:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyWkhWmL4YJBTPKQPV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgylbiJiP-CH_j881RF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8pWXCCWg266k0E1x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwG8lfbIEeWkAde-vZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxaB8WSmtkkGDE69f14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwRymBOgEvNewB5YPx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzVOHO9SxVUkLAaWI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyrCG1sZ9Gzi-Ue7Q14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwZALhyyYEqE-dY8L94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwp-DA22OmxcIxXp814AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
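Each row in the raw response follows the same simple schema: an `id` plus four categorical dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response might be parsed, validated, and tallied — the function name `parse_coding` and the key set are illustrative, not part of the coding tool itself, and the two rows below are copied from the response above:

```python
import json
from collections import Counter

# A raw LLM response: a JSON array of coded comments, each carrying
# an "id" plus four categorical coding dimensions.
raw = '''[
  {"id": "ytc_UgyWkhWmL4YJBTPKQPV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwp-DA22OmxcIxXp814AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]'''

# Keys every coded row is expected to carry (assumed schema).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding(raw_json: str) -> list[dict]:
    """Parse a raw response, keeping only rows with all expected keys."""
    rows = json.loads(raw_json)
    return [row for row in rows if EXPECTED_KEYS <= row.keys()]

rows = parse_coding(raw)
emotions = Counter(row["emotion"] for row in rows)
print(emotions)  # Counter({'fear': 1, 'approval': 1})
```

Validating the key set before tallying guards against the common failure mode of LLM coders: well-formed JSON with a missing or renamed dimension on some rows.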