Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews, with comment IDs):

- "Yeah, I'm not trying to downplay the huge leaps we've had, but until we start br…" (`rdc_n7sux2g`)
- "So, what are we supposed to do when A.I. lets us go? Who exactly are you produc…" (`ytc_UgyVsKElf…`)
- "Ai is becoming genuinely terrifying. If it's this good now, imagine how it's gon…" (`ytc_Ugyda24Op…`)
- "Playing Devils advocate for Aí tech bros and billionaires is a horrible idea, th…" (`ytc_UgyZzHkxR…`)
- "I’ve got years before a robot can take my electrical job, atleast before the man…" (`ytc_UgyONc5Oo…`)
- "The Lip sync features in AI Gen has gotten better. But can be better as it's qui…" (`ytc_UgxxzAn8v…`)
- "It makes my job easier and they want me to do it, so I'm doing it. Its not my fa…" (`rdc_o89w8w5`)
- "@Lawlesslarry69 No. I report on current events and interview actual people. A.I.…" (`ytr_UgxCf18nq…`)
Comment
> I spent my career in CyberSecurity. Think I might have to come out of retirement and do some ethical hacking. We're going to have to blur the lines in what ethical hacking is when it comes to AI. Ethical for who? The people who's jobs will be replaced or the AI makers making billions more for themselves. When this 77 year old anti-social dweeb, who doesn't think he has any responsibility to even consider the implications of what he's done and its impact on anyone else, is telling us he's talking to AI about technical details of atomic bomb making, I choose the people.
>
> Dweeb wants to act like Mr. Nice Guy now by giving us a warning. What good is a warning when you've already built the damn thing.
>
> Says we need government regulation. I just said I spent a career in CyberSecurity. We have had standards, regulations and laws up to our ears for decades now. Have data breaches stopped? No, they have not.
>
> Seriously, what an asshole dork. Hope he rots in hell.
Source: youtube · AI Governance · 2026-02-09T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz61dILxyCfMhtgzVV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwppycGKZG5q_6y3YJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxcUJlkp9n-fZ5Z6a94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyC2PfqlPTXtPcLU5h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy_kEjZEy99UShnRJR4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzmWa1eLKya6cOpPdt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw2xzhOX0iJhgTs4bZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzleXoBOi1PFck8H1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYmcm1UX1xtKm7cg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymKok4ISAHbTTJvpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
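The batch response above is a JSON array of per-comment records, one object per comment ID, with one coded value per dimension. A minimal sketch of how such a payload might be parsed and validated follows; the allowed value sets are inferred from the examples on this page and are an assumption, not the tool's documented schema:

```python
import json

# Value sets inferred from the examples on this page -- an assumption,
# not the coding tool's actual schema.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"liability", "ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}


def parse_batch(raw):
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded
```

Keying the result by comment ID makes it easy to join a coded record back to its source comment, as the "Look up by comment ID" view on this page does.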