Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If all jobs gone, if humans don't have jobs, who will the AI enabled businesses …" (ytc_UgzGkvPSp…)
- "Bro that's a argument? No matter on paper or computer/tablet you need skill/tale…" (ytc_Ugxcmw8ho…)
- "Waymo... LOL 1st - agree self driving cars and public transport are not connect…" (ytc_UgzhBu_wo…)
- "The interviewer is a perfect example of a unconscious AI.. interrupts for the sa…" (ytc_UgxrpP8o7…)
- "Why would future AI do anything that supports humanity? If you have no income y…" (ytc_Ugzx2URAX…)
- "Thank you! Another thing that always comes up in these videos is the claim that …" (ytr_Ugyj42kQg…)
- "The creators whos work was stolen to train the AI should be paid royalties for a…" (ytc_UgxA5IMxr…)
- "Poly. Ai has not filter so people can literally have sex with the a.i's 💀💀💀…" (ytc_UgwLl0-1a…)
Comment
So, if AI becomes superintelligent, it’ll probably be better at Googling than your teenage nephew. It’ll rummage through dusty databases, corporate closets, and political sock drawers to find out who’s been naughty or nice on the ethics front. Let’s not pretend the AI won’t notice that some folks treat ‘ethical guidelines’ like IKEA instructions, technically readable, rarely followed. Sure, the machine will be built by geniuses, but even geniuses sometimes spill coffee on the motherboard. The good news? A well-designed AI might just fix our messes… or at least call us out with impeccable grammar.🤔
Text, with a polish-up from a good and sensible AI aide, Copilot!😊
youtube · AI Governance · 2025-08-04T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwok_lkRsq17OQfmFN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyIE3k99GpM3x0358J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx078b-LzggXtt2eoJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwtuXRT6e8xl587wol4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwjimluPE1fSYpYf5B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzKGq85ml8LI-AHyJ14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxZ0umbPWLw1Xhku_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxtCPENH3qJYAPgV9l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw9IQq4nI3cvmmD8ON4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyVp5rFdepdAycZhd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
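The raw response above is a JSON array of per-comment codes, so a look-up by comment ID reduces to parsing the array and indexing it. The sketch below shows that pattern, assuming the model output parses as valid JSON; `index_by_comment_id` is a hypothetical helper for illustration, not part of the tool, and the data is truncated to two rows taken from the response above.

```python
import json

# Raw model output in the same shape as the response above,
# truncated to two rows for brevity.
raw_response = """
[
  {"id": "ytc_Ugwok_lkRsq17OQfmFN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzKGq85ml8LI-AHyJ14AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID."""
    rows = json.loads(payload)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
row = codes["ytc_UgzKGq85ml8LI-AHyJ14AaABAg"]
print(row["reasoning"], row["policy"], row["emotion"])
# → virtue regulate mixed
```

Indexing once and looking up by ID keeps inspection O(1) per comment, which matters when a batch response covers many coded comments.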