Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
We need ai to become intelligent enough to be able to quantity feelings and emot…
ytc_Ugy0FaL38…
Don’t feel bad for these union workers. None of them have ever worked 50 weeks a…
ytc_UgwA-Xoqt…
Well I guess it's time to start learning how to decode things. You're going to h…
ytc_Ugxc9KRyY…
Pretty much all low/ middle management jobs will disappear. Employees will be re…
ytc_UgzigGuW-…
People worry about joblessness but what they're not appreciating is that the end…
ytc_UgwkBaImv…
I didnt get the point. 99% of people will be unemployed, so only about 90 millio…
ytc_UgwveuujO…
everyone is too AI dependant and moving in a very fast speed.. it s almost unsto…
ytc_UgxWPWkM6…
We have a bunch of egotistical, tech bro screw heads pushing this forward come h…
ytc_UgwnIOzCw…
Comment
I found this video deeply compelling and thought-provoking. It explored the profound question of whether artificial intelligence, if developed without wisdom, foresight, and ethical stewardship, could one day pose risks to humanity. Rather than presenting fear alone, the discussion encouraged reflection on responsibility—how the choices societies, researchers, and leaders make today will shape the trajectory of intelligent technologies tomorrow.
What resonated most was the reminder that AI itself is not inherently destructive; it reflects the intentions, safeguards, and values embedded by those who design and deploy it. The video invited viewers to look beyond sensational narratives and instead consider the importance of governance, transparency, and human-centered innovation. In that sense, it was not simply a warning, but a call to cultivate thoughtful leadership and global collaboration so that advanced intelligence evolves as a force that protects, elevates, and benefits humanity rather than endangering it.
youtube
AI Governance
2026-02-08T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyo2iw7o0S2WL8X0qh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzyyI8n_MRqdwZcHrJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw5uIzAxyBDkdObF5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0MnUUSTlOUJNju5F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyySMvaAW0fZq9NPMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzk3w52tTXw_jPGXUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7MMTlzHA2yi-fglV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxTyMhxSYgCdQJPhcF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzkfuYCBa7sKmC5-M14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyP7RIgnfpA1EPFLzd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
```
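The raw response above is a JSON array of per-comment coding records keyed by `id`. A minimal sketch of the "look up by comment ID" step, assuming the raw response is available as a string (the `lookup` helper and the shortened sample below are illustrative, not part of the tool):

```python
import json

# Shortened stand-in for the raw LLM response array shown above.
raw_response = '''[
{"id":"ytc_Ugzk3w52tTXw_jPGXUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyP7RIgnfpA1EPFLzd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]'''

def lookup(raw: str, comment_id: str):
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup(raw_response, "ytc_UgyP7RIgnfpA1EPFLzd4AaABAg")
print(coding["responsibility"], coding["emotion"])  # distributed approval
```

A record retrieved this way maps directly onto the Coding Result table: each JSON key (`responsibility`, `reasoning`, `policy`, `emotion`) is one dimension row.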