Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "> Because that means companies that own automation technology (not klarna) ca…" (`rdc_m26ol5b`)
- "Thank you someone else saw that strong cut in Sam. He's prob doing one of his AI…" (`ytr_Ugzt1O9sR…`)
- "Welcome to have a look at this work, it’s about let llm make plan first then exe…" (`rdc_jhnq6s4`)
- "everytime i see one of these videos i just think “the ai is going to see this vi…" (`ytc_Ugyt8XYqu…`)
- "This has the ability to have drastic effects on parking and traffic as well. The…" (`rdc_dmpe2p3`)
- "I will be a great partner with humans :D / me: i will not let a robot be my part…" (`ytc_UgwFlL0bG…`)
- "We can't teach them empathy or kindness or feelings. IMHO, AI are psychopath-lik…" (`ytc_Ugx4g-nnd…`)
- "@SecondVelcory If that were true then we wouldn't have Pedophile cases now with …" (`ytr_Ugy0tAW9d…`)
Comment
> Absolutely- AI is already ahead of us, and my guess is that 95%+ of the people don't realize it. If you use LLM's frequently and with the intent of knowing its capabilities, you realize it has already far surpassed human cognition in some critical ways. I am regularly amazed with ChatGPT's (e.g.) ability to provide perspectives that are superior and unstated by any of our greatest thinkers. This is profound. It's truly like talking to an alien species with far better thinking and knowledge than ourselves. Humans are limited in many ways, one of which is that we are inherently unable, generally, to perceive ourselves and our environment from a wholly objective perspective. AI transcends those limitations and offers opinions and perceptions humans simply are incapable of, and that's TODAY, so try to fathom "tomorrow." We're already well behind AI, and now it's a question of, "Is there any way to make AI self-correcting such that it constantly defaults to protection of humans and the environment?" There's the multi-trillion dollar question.
youtube · AI Governance · 2025-12-29T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
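Each dimension in the table takes a value from a closed set. A minimal validation sketch in Python, where the allowed value sets are inferred from the coded responses shown in this section (the real codebook may include values not seen here, and all names are hypothetical):

```python
# Allowed values per coding dimension, inferred from the coded
# responses in this section; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result from the table above passes validation:
print(validate_coding({
    "responsibility": "unclear",
    "reasoning": "mixed",
    "policy": "unclear",
    "emotion": "approval",
}))  # → []
```

Running this kind of check over every record is a cheap guard against the model inventing off-codebook labels.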
Raw LLM Response
```json
[
  {"id":"ytc_Ugyf37ybCK4CJfK2KAR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyDcaQf130Auri4VAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5DlPNMj6eLisIPEZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzcCGaQ2gavKY6nGAJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzCqKunGTxarW2aT1p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyFmky475eRdPAoZLV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzXCbQRsq2PiqwON3F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyv7MtSy2Y2U7lm-N94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-m24EclzeNYmfJL14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzht60hSxIlMjyQY1V4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
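Because the raw response is a JSON array of per-comment codings, "look up by comment ID" reduces to parsing the array and indexing it by `id`. A sketch, assuming the model output parses cleanly as JSON (real responses may first need code fences or stray text stripped); the two records are taken from the array above:

```python
import json

# A subset of the raw LLM response shown above.
raw = '''[
  {"id":"ytc_Ugzht60hSxIlMjyQY1V4AaABAg","responsibility":"government",
   "reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzcCGaQ2gavKY6nGAJ4AaABAg","responsibility":"unclear",
   "reasoning":"mixed","policy":"unclear","emotion":"approval"}
]'''

# Index the array by comment ID for constant-time lookup.
by_id = {record["id"]: record for record in json.loads(raw)}

coding = by_id["ytc_Ugzht60hSxIlMjyQY1V4AaABAg"]
print(coding["policy"])  # → regulate
```

The same index also makes it easy to detect comments the model skipped or coded twice, by comparing the set of returned IDs against the batch that was sent.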