Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Just a coincidence that all the ai clowns who says it real art also couldn’t mak…
ytc_UgxPVY6cb…
Hey my art was stolen too but we cannot stop it, its never going to happen... NE…
ytc_Ugzp_m7dt…
These Phsychoathic murderous trillionaires should not have ANY POWER over the pe…
ytc_UgxKlAMsW…
I did this the other week by accident when the chat bot itself brought up being…
ytc_UgzcFhR1r…
For business Ai model is better because it never ages which mean business always…
ytr_UgxWRd1Mv…
Just make deep fakes of any kind a felony and any platform that uses them subjec…
ytc_UgxRg9BC7…
i’m worried about offending AI in case I make the top of the list whenever they …
ytc_UgzUsV6Ql…
Nah I wouldn’t be checking your ai search you would- I mean wha I didn’t say any…
ytc_UgzV77o4g…
Comment
Wonderful interview. Lots to chew on and ponder. For me, I’ve come to think about AI through three compressions. Informed Intent means being clear what you are trying to do and the potential consequences before you begin. It is the bridge between Slow AI, which means using AI with discipline, and Final Liability rests with the Human, which means keeping a human answerable for the result. These are compressions, but ones I find useful in my own Governance, Risk and Compliance (GRC) work, in audit and in training.
youtube
AI Responsibility
2026-04-22T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw3kZF7XTBhPMiN-IZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxMhRvpwOgHmZxRWmh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyuUj81voCoOwyo1kx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx0GYNlUUcSrSqTYCd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxGsATK1RZyznOhT4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzB_UC3BjaDx74Bhap4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxORgxyIRalS7qyQsJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxi6XyrjRaMBCE5PiZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyRA8-31QyLa_fd8Sx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyT7fe5VfHN7LSH8GV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
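The raw response above is a JSON array with one object per comment, coded along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and indexed for lookup by comment ID, assuming only the field names visible in the JSON above:

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as shown
# above (truncated here to one entry for brevity).
raw_response = """[
  {"id": "ytc_UgyuUj81voCoOwyo1kx4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "liability", "emotion": "approval"}
]"""

# Index every coding by its comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the exact coding the model produced for one comment.
coded = codings["ytc_UgyuUj81voCoOwyo1kx4AaABAg"]
print(coded["responsibility"], coded["policy"])  # user liability
```

The ID-keyed dictionary is what makes "look up by comment ID" cheap: the raw batch response is parsed once, then each inspected comment is a single dictionary access.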