Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (comment excerpt · comment ID):

- "I just bank on the fact that most people don’t code, currently. They don’t under…" (ytc_UgwQ6TaKf…)
- "These gatekeeper comments are insane. This is not going to disappear, literally …" (ytc_UgzNQfLjH…)
- "The EU have every right to legislate on AI in their jurisdiction. It's not the U…" (ytc_UgyBfM-j8…)
- "The Biden admin is the last group of people I would want to oversee regulating A…" (ytc_UgxFXmoFI…)
- "This is disturbing....man seeking to create his own helper because the helper he…" (ytc_UgxzWIJRP…)
- "Bro it's really good video but I want create this image into vidoe like an ai …" (ytr_Ugz06lB8n…)
- "I've been debating whether to focus my education on entering a job in software e…" (ytc_Ugwq8vVpa…)
- "Cooking instant noodles took more effort and time than generating an AI art. At …" (ytc_UgxuxKAVH…)
Comment

> AI and AGI are the biggest existential threats to humanity. They are going to destroy life as we know it. As it grows, learns, duplicates itself, makes more "agents," and eventually surpasses human intelligence, wipes out millions of jobs and dehumanizes societies, AGI will create the end of civilization, in time, as the machines take over and eliminate "the eaters" who are unnecessary and costly to maintain on Universal Basic Income (UBI). AI 2027 presents a bleak future...man-made change is coming and not for the good of humanity, with the goal is eliminating all humans. To paraphrase Robert Oppenheimer, “Now AI am become Death, the destroyer of worlds.” A human-created dystopia for short-term profit by a select few...

Source: youtube · AI Jobs · 2025-08-21T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyQYU3UapTR6yMLNjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzuGdJJ7nfJKID1Qkh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz8IcCaDE2Vu1S2pEZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzTxUZ4pYiSLZgjkv54AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwDrlq7qpK51cmwb1J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxSkssnQRHE9mUeMF14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw14ik_AE9zTNAlQBB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugyqo3ZsTCwsrJqtEoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyXf5KBUnMCRa0bkY14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyre2sVS6cBLabtr1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
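The raw response above is a JSON array with one object per comment ID, each carrying the four coded dimensions. A minimal sketch of how such output could be parsed and looked up by comment ID is below; the `ALLOWED` category sets are inferred only from the values visible in this response (the real codebook may define more), and the helper name is hypothetical:

```python
import json

# Two records copied verbatim from the raw response above, as a stand-in
# for the full array the model returns.
RAW_RESPONSE = """[
  {"id":"ytc_UgyQYU3UapTR6yMLNjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyre2sVS6cBLabtr1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Allowed values per dimension, inferred from the codes seen in this response.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}


def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index validated records by comment ID."""
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records


codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgyQYU3UapTR6yMLNjh4AaABAg"]["emotion"])  # fear
```

Validating each dimension against a closed set before indexing catches the common failure mode where the model emits a value outside the codebook.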