Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Considering humanity 5 years ago it's understandable for it to think that way(Bu…" (`ytc_Ugx-5sGqg…`)
- "AI is a life saver. Better than any advisor and friend I have ever had.…" (`ytc_Ugx3-fIU5…`)
- "Yes, no joke, Quantum computers are coming in 2032 combined with AI, robotics an…" (`ytr_UgypiBddR…`)
- "Imagine a world with quantum AI. Whatever country gets there first will rule the…" (`ytc_UgzJ6vCHR…`)
- "No soul AI just takes away everything if you've seen the recent AI it's becoming…" (`ytr_UgzF2Xinj…`)
- "Warnings aren’t predictions of doom—they’re signals to prepare. AI won’t replace…" (`ytc_UgyjMjvy5…`)
- "One day they won't allow you to drive your own car. That's their goal. One drive…" (`ytr_UgwMUsTZF…`)
- "On a whim I tried your tessellating hexagonal grid Github Copilot with html/css…" (`ytr_Ugz1Rwm3d…`)
Comment

> This guy is an idiot! Warning of all the dangers of AI.... WHILE investing and promoting and researching and developing and pushing AI! I think he is the one that is dangerous..... Have you ever heard of that old trick that they use where they create the problem and then create the solution, so that everybody has to go to him to get his product to be "safe" from the dangerous AI that he created!

Source: youtube · AI Governance · 2025-10-09T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzO5LhwuoCgORFeDOl4AaABAg","responsibility":"elite","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzLzKHUMLHHlxG8S6V4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugx2IvBSBciYkyrE5et4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxRt_aoYVRnteHf4Jh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzuw7Sjnd4LzNQ_yZp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwJrodgsNfAeRKCToN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyRiYZFD24IoKtHKyt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwvue7WdAw014xGoXt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyPJEnYNlriSKKGMJt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxxCAt_18Cj5TjMgJN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
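A raw response like the one above can fail in two ways: the model emits invalid JSON, or a row uses a label outside the codebook. Below is a minimal validation sketch in Python. Note the allowed values are inferred only from the labels visible in this dump (e.g. `elite`, `ai_itself`, `contractualist`), not from an official codebook, so the `ALLOWED` sets are assumptions you should replace with your real schema.

```python
import json

# Assumed codebook, reconstructed from labels observed in this page's output.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "elite", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed"},
}

def validate_codes(raw: str) -> list:
    """Return a list of error strings; an empty list means the response is clean."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return ["invalid JSON: {}".format(exc)]
    errors = []
    for i, row in enumerate(rows):
        if "id" not in row:
            errors.append("row {}: missing id".format(i))
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                errors.append("row {}: {}={!r} not in codebook".format(i, dim, value))
    return errors
```

Running this on each raw response before writing the codes to storage makes it easy to flag batches where the model drifted from the label set instead of silently storing off-schema values.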