Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Nothing that I disagree with. P.S. Not that AI isn't "perfect" today, it DOESN"T… (ytc_Ugy_uEjQo…)
- 20 years from now when it’s at its peak. Can a hacker “jail break” it and set it… (ytc_Ugxte0EPc…)
- @HelpMe-bc5ym Your reading comprehension is garbage. Facts are not automatically… (ytr_Ugx-d5OYA…)
- Like a leak from Wuhan (speculation, right?) someone is developing AI better tha… (ytc_Ugzf879sM…)
- With latest versions of AI song creator systems, it’s good enough (if you levera… (ytc_UgzO5rQuR…)
- JOBS aren't the goal of Humanity... peace is..let AI take your Job, but make the… (ytc_UgxYwKKO8…)
- I feel sorry for the young man and the parents they are on truth and on track bu… (ytc_Ugxx9vaBB…)
- I've never been robbed by a white guy.... what's wrong with the AI again..... A… (ytc_UgyU89eJk…)
Comment
This video feels like the "we're destroying the planet so we need to figure out how to colonize Mars instead of how to stop destroying the planet" approach to AI. Completely disconnected from the reality of what "AI" is. Utterly misguided in what needs prioritizing.
Should there be guardrails to prevent a destructive superintelligence? Sure, why not. Is this an immediate concern by ANY measure? Absolutely not. There are so many more pressing issues with AI today - the spread of mis- and disinformation, the criminal (and the "not actually criminal but ought to be") uses, the environmental impact, the impact on education, the security risks...
The big scary superintelligence coming to get us really seems like more of a boogeyman than anything, especially compares to the current, urgent issues.
Source: youtube · 2025-11-28T03:2… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyE65Df8bABbhROkZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyezeg17EFbsbdwvF14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDxTGlPg8vzKyIqDR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwRWT-18Hd89OH771p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwiDVf8j4lr_rhnAwh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwSl8stZUIEEWD_gFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5UAvXA8uK5e0XQ5B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwe0HDJsAF8rNAbG5V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYw9TfQf6AriJbpYx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx2EOTv1FzR635MsJt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"disapproval"}
]
```
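The raw response is a JSON array of per-comment codings, so retrieving a coding by comment ID reduces to parsing the array and indexing it. A minimal sketch, assuming only the structure visible in the array above (the `raw_response` string is abbreviated to two of its entries; this is not the dashboard's actual code):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, abbreviated
# here to two entries copied from the array above.
raw_response = """
[
  {"id":"ytc_UgyE65Df8bABbhROkZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwe0HDJsAF8rNAbG5V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one coded comment; its fields mirror the Coding Result table.
coding = codings["ytc_Ugwe0HDJsAF8rNAbG5V4AaABAg"]
print(coding["reasoning"])  # consequentialist
print(coding["emotion"])    # resignation
```

The dict index also makes it easy to spot IDs the model skipped: any sample ID missing from `codings` was not coded in this response.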