Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I can clearly see that it is fake. Fake skin does it even look close to real ski…" (ytc_UgyHbLKlV…)
- "What this gentleman refers to in malevolence, in conjunction with the gentleman …" (ytc_UgzKBSAbq…)
- "It'll be so cool if more artists actually come out of fighting against generativ…" (ytr_UgyV635dX…)
- "AI makers should not be called an artist. It's not artificial intelligence in th…" (ytc_UgzTCf262…)
- "I am quite scared of a world where AI is more intelligent than humans because yo…" (ytc_Ugz1PhBYt…)
- "Your lost to think you're going to market a brand deal for an LLM company, while…" (ytc_Ugyln06On…)
- "Im proud to live in Colorado, the first state who passed a law to regulate the u…" (ytc_UgwFUk2xH…)
- "@LiterallyRain So heres how metaphores work. When done properly at least. It ei…" (ytr_Ugxd1uFYy…)
Comment
> I attended a lecture by Geoff Hinton in 2018 while studying computational neuroscience and cognitive robotics. It left a deep impression—he was one of the very few AI researchers at the time who even mentioned ethics. He did issue warnings about where this could all lead, but honestly, no one in the academic AI community—not even him—anticipated things would escalate this quickly. We all know the reason for it is pure greed. Because this is the warning he issues. That’s the core reason we’ve lost control at the societal level. We don't even have one hand on the wheel at the moment. Too many players are trying to cash in on what they believe is an "inevitable" future. The breakneck pace of AI development over the last six years has outstripped the ability of regulators, lawmakers, and society at large to keep up. Those are not only jobs at stake right now - we already know AI is profoundly altering the way we think and function, and not always in good way. Scary times ahead!
youtube · AI Governance · 2025-06-16T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id": "ytc_UgxBPOatUE649J6EnIN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyjlHTvUmH8NGseLSR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzjA6g3CIf9W5qpJTd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwZeWybmm68Og5K_f14AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzys_q2gQ6ncBuWAUF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz3DBCAztbHgNzlzOp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwBwrP1dWuv5W72mvZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz_k9GKdXP8izXVcZV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz_uVhuObLO7RLhlqZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzdkJilvcZ6F6OveZh4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]
```
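A response like the one above can be checked mechanically before it is stored. The sketch below is a minimal validator in Python; the full codebook is not shown on this page, so the allowed labels are only those that appear in this sample response and in the Coding Result table, and the `validate_response` helper is a hypothetical name, not part of the actual pipeline.

```python
import json

# Allowed labels per dimension, inferred ONLY from values seen in the sample
# response above; the project's real codebook may define more categories.
CODEBOOK = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}


def validate_response(raw: str) -> list:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError("unexpected comment id: %r" % rec.get("id"))
        # Every dimension must be present and carry a known label.
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, rec.get(dim)))
    return records


raw = ('[{"id":"ytc_UgzdkJilvcZ6F6OveZh4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"regulate","emotion":"mixed"}]')
print(validate_response(raw)[0]["policy"])  # regulate
```

Validating at ingest time keeps out-of-vocabulary labels (a common LLM failure mode) from silently entering the coded dataset.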