Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Just goes to show that we definitely shouldnt be using AI for any of these. The …" (ytc_Ugx8m_xM8…)
- "Maybe AI won't get stuck on if life is aberrant in nature to the nature of the u…" (ytc_UgxtpzWAN…)
- "👽The answer depends a lot on which religious or philosophical viewpoint you look…" (ytc_Ugzq4wonJ…)
- "@bitter__truthss thats just an example , to use ai they need some technical know…" (ytr_Ugz5xSODW…)
- "Why doesn't Zelensky just build a wall to keep russian missiles out?!?! Is he st…" (rdc_jxyjunb)
- "@newyorkfan16 The way I personally see it is eventually there will be a world wh…" (ytr_Ugws_6g5_…)
- "Here's the thing, there are actually a few talented artists (and animators) who …" (ytc_Ugz1fgvrz…)
- "Almost 90% of jobs were replaced within a generation by machines in the 1900's..…" (ytc_UgzVLMEz9…)
Comment
This is fear-mongering. Anyone actually working with AI knows it can't handle complex, non-textbook problems without constant human supervision. LLMs hallucinate, make basic logical errors, and fail at multi-step real-world tasks. The idea that current AI tech will somehow become autonomous enough to replace entire workforces ignores fundamental limitations that aren't close to being solved. AI is a tool for specific tasks, not the superintelligent replacement this scenario assumes. The 5-year timeline is pure hype.
youtube · Viral AI Reaction · 2025-11-25T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxMKpWKR9SFFfG1lN14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymOorCmcZQlZGfwSV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy61WtWi9pNUINKUzp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXSrT19ZT374FrDtJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4g2d8i7PsoMdwBiN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwjvDw5375Z02X053l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZYguZ_XMLhn8hjNV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyx0jZOMMoUm2PAVw54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwXhxHkfQWChMJy6j14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyYLWqsAjiActJD_Nx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
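A minimal sketch of how a raw response like the one above could be parsed and indexed by comment ID, so a lookup for a single comment (as in the Coding Result table) yields its four coded dimensions. The allowed value sets in `SCHEMA` are inferred from the sample response shown here, not from the full codebook, and the function name is illustrative.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_response(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID.

    Raises ValueError if an item lacks an id or a dimension, or uses a
    value outside the inferred schema.
    """
    coded = {}
    for item in json.loads(raw):
        cid = item.get("id")
        if not cid:
            raise ValueError(f"item without id: {item!r}")
        for dim, allowed in SCHEMA.items():
            value = item.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
        # Keep only the coded dimensions, keyed by comment ID for lookup.
        coded[cid] = {dim: item[dim] for dim in SCHEMA}
    return coded
```

Indexing by ID mirrors the "Look up by comment ID" workflow: after validation, `coded["ytc_…"]` returns the same dimension/value pairs shown in the Coding Result table for that comment.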