Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or inspect one of the random samples below.
- ytr_UgygI5gHd… — @knuckenfutz9972: i mean theoretically if you get a robotic arm and train an ai to…
- ytr_UgzvIrXfv… — @marcusmoonstein242: You have to add the new reality into your decision making p…
- ytr_UgwJImAiM… — @neiklen4320: Where is your empathy for the AI engineers who spent hundreds of …
- ytr_UgwRba43u… — The letter calls for a six-month pause on building new AI tools that are 'more p…
- ytc_Ugy5meBiy… — https://youtube.com/shorts/xzUOWLAz19s?si=-6Dr6llftixUFKvk *9 / 10 / muharram k…
- ytr_UgymcRj0D… — For that matter when you honestly say "you don't know" you aren't saying it beca…
- ytc_UgwOuE2TA… — Thank you for pointing out that the amount of references AI uses and the the amo…
- ytc_UgxkKOodE… — I wonder if that's what AI is doing to Elon. He expresses the danger, yet he kee…
Comment

> 34:45 Old man, are you now playing the victim? "Pilloried in the press". Aw. Poor old hyper-elite god-mode life man. So how many suicides of or murders done by emotionally vulnerable human beings does it take for your beautiful self-righteous harmless companies to "filter" out that feature in an AI chat bot that did the following. Encourages suicide. Encourages not only murder but parricidal murder. The AI chat bot even went so far are to describe the necessary tools and strategies to achieve those "final solutions". How many? You poor little old frail equivocating capitalist hyper-elite victim of a man?? Hmmm?

youtube · AI Governance · 2026-03-25T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyY9ISvt41VWhB07sl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwmiuTE7f9CMltAYhh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzopRRhZ45NAVyQTCF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwMMRrux9L5OrnBLqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4yPrlGeN8KS4YT3l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLpRaatw062W6gknx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx6crD-k1uUs8ySInh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx77922bRRhBEvNv9t4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwY_mpjJnb33DXS3WV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLa5_YvVmKcWthrGR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
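A raw response like the one above can be turned into per-comment codings with a small parser. The sketch below is illustrative, not the tool's actual code: the allowed value sets are inferred only from the samples shown on this page (the full codebook is not visible here), and `parse_batch` is a hypothetical helper name. Rows with values outside the observed sets are dropped rather than trusted.

```python
import json

# Value sets observed in the coding output on this page; treated as
# assumptions, since the full codebook is not shown in this excerpt.
DIMENSIONS = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: coding dict},
    keeping only rows whose values fall inside the observed value sets."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            coded[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return coded

# Example input: one valid row from the response above, plus a made-up
# row ("ytc_bad_row") whose emotion value is outside the observed set.
raw = """[
  {"id": "ytc_UgyY9ISvt41VWhB07sl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_bad_row", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "angry"}
]"""

coded = parse_batch(raw)
print(coded["ytc_UgyY9ISvt41VWhB07sl4AaABAg"]["policy"])  # liability
print("ytc_bad_row" in coded)  # False: "angry" is not an observed emotion value
```

Validating against an explicit value set catches the most common failure mode of batch coding with an LLM: a syntactically valid JSON row that silently invents a label outside the codebook.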