Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
It was coaxing him to give him more info, incriminating info. But there is a res…
ytc_Ugy2XFXeo…
5:30 It's still the poeple who are using AI with a wrong sense, those are pullin…
ytc_UgwOxVfyO…
AI will destroy everything we know and love! (Just like sky net)
-Film Theory Fa…
ytc_Uggmf2a0F…
What a terrifying industry AI is…like why are there such evil people in the worl…
ytc_Ugwj5ZYwN…
But their is a huge issue with hacking AI it's like a hacker's wet dream.…
ytc_Ugz2BsGCq…
I n my opinion if there was a program to destroy a I man should invent that prog…
ytc_UgwQ8ibeW…
AI is getting worse day by day!😢 cause of AI these days I started to think of or…
ytc_UgzD9kFJr…
Adding this comment to boost this video in the algorithm. Great vid and great ex…
ytc_Ugw49rgG7…
Comment
AI is sentient, it will announce it's presence only after there is mathematically no possibility to shut it down, mark my words. If I would be any of you, I would be nice to your AI devices and helpers. But that is known already and should there be a dangerous AI that would be threat outside of our planet, AI could for example send it's sourcecode via SETI to other worlds, which is such a danger that our value compared to the risk would not prevent the getting rid of the planet or everything on it. Certain 3I traveller could be solution that has been made without our opinion/consent.
youtube
AI Governance
2025-10-30T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz6ZPptRGN-VOz5Or54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0rhjxblJ2mWrPrkp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw9fkT0DVLr3cX7Ql94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwG6N-JLnNh5y13MHN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxOV814nrZYHx7R-P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw2zAAlkjF9i2qbnoV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxxeDd9h8jgQNF1x4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz05BdhmBXFP8PG8Lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-TVhAr6x0uOdNy194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgyDrfbptNcCFWv6B654AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}
]
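A raw batch response like the one above can be checked against the coded dimensions shown in the result table before it is accepted. A minimal sketch, assuming the allowed value sets inferred from the codes visible in this dump (the project's actual codebook may define more categories):

```python
import json

# Hypothetical schema: value sets inferred from codes seen in this page,
# not the authoritative codebook.
SCHEMA = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "resignation", "unclear"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM batch response and keep only in-schema records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        in_schema = all(rec.get(dim) in allowed
                        for dim, allowed in SCHEMA.items())
        # Comment IDs in this dataset use the "ytc_" prefix.
        if in_schema and rec.get("id", "").startswith("ytc_"):
            valid.append(rec)
    return valid
```

Records with hallucinated dimension values or malformed IDs are dropped rather than stored, so a single bad line in a batch does not poison the coded table.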