Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Too bad, you changed your music format. One of the most entertaining aspects of…" (ytc_UgzA3HCuN…)
- "Developers have human context, Ai had technical context. Combined, in right amou…" (ytc_UgyApWy6z…)
- "depends on the nation. i am sure more introverted nations´ people, like japan an…" (ytr_UgxJIVklM…)
- "Hope AI destroys him. He's not a real genius that term is used too loosely these…" (ytc_UgzzAW98r…)
- "The AMERICAN SOCIETY will not change it's values. Instead of having an IDIOT "DI…" (ytc_Ugw1K1kJL…)
- "This is the first time in all human history where humans are no longer the smart…" (ytc_UgxYsAEEk…)
- "Try Clever AI Humanizer! It’s honestly made my writing feel much more authentic …" (ytc_UgxPDqgVh…)
- "So you are allowed to text and drive when using dumb cruise control that is very…" (ytc_UgxQcEsvy…)
Comment
I think it's overrated because I don't believe AGI is as close as we think, nor as effective as we think. We can barely figure out self driving cars on very specific city streets, how could we build something that navigates the complexities of everyday life when people are moving around and changing things non-stop? There's been SO much investment in AI already, hundreds of billions of dollars and massive data centers sucking up obscene amounts of water and causing massive amounts of pollution... for what? Shitty AI slop videos on TikTok? ChatGPT answers that feed bad information, make people stupid, depressed, psychotic, and suicidal? Think about how many more resources it would take to run anything close to AGI. That's not what anyone wants. We've seen plenty of pushback on AI content already and we're going to see a whole lot more. AI is going to pop like the biggest bubble in history, then come back down to earth so we can keep using it to save time on our spreadsheets.
youtube
AI Moral Status
2025-07-24T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwn1FyEI7IrTAbYGA14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxzZ3gwtw_Po1WDKxh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugx4sslyG8q4ROJ3kyl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw-3e2if2ZvgmhajdB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyQz9rot6gK2GqnaNB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy8byu1XmUUxz3hdPh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwSEG7B-Q5zuvbgX6V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugywa20mxvkwTIh24k14AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxc4B6bCl5g9HaoPgl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhIDif3NcBXj4EHR54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
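The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be indexed by comment ID for lookup; the function name and structure are illustrative assumptions, not taken from this tool:

```python
import json

# Illustrative excerpt in the same shape as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugx4sslyG8q4ROJ3kyl4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]
"""

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Map comment ID -> {dimension: value} from the raw JSON array."""
    return {item["id"]: {d: item[d] for d in DIMENSIONS}
            for item in json.loads(raw)}

codes = index_codes(raw_response)
print(codes["ytc_Ugx4sslyG8q4ROJ3kyl4AaABAg"]["emotion"])  # resignation
```

Indexing by ID mirrors the "Look up by comment ID" workflow above: a single comment's codes can then be rendered as a dimension/value table.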