Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- On the subject of "doesn't work", over time won't the poison just train the AI t… (ytc_Ugy_FNDVh…)
- Even the greatest minds have failed to define consciousness what it is or where … (ytc_Ugz4ruRLm…)
- Society needs to continue building fundamental skills, learn to learn and get st… (ytc_Ugwm45k42…)
- This is the AI doing EXACTLY what it should. Not flipping out when getting hit, … (ytc_UgyHTFRm_…)
- Increase your views by decreasing your politics. The REAL threat of AI is these … (ytc_UgzlYlBhy…)
- you can humanize your text but Winston AI still picks up on the ai tone. it’s a … (ytc_UgwcMWP6m…)
- This theory is ridiculous to any adult who's worked across multiple industries a… (ytc_Ugx5NR_F7…)
- you forgot me, the person that loves killing anything that breathes in ai relent… (ytc_UgwwwyVpL…)
Comment
> I understand a scientist's excitement about their work. But they often dream of unrealistic things, based on some ideal scenario, that lives only in their heads.
> The current reality of AI is, that for the most part - those are just delusional (more often than not) bots, that can solve only limited tasks.
> To keep it short - humans have features, given by God that no machine or technology will be able to replicate ever.
> So, I'm not worried about AI. But I am worried about the Idiocracy, that AI will probably push the world to. People will start to rely on not very smart AI bots and will become dumb, lazy and maybe even more evil. AI from such a perspective looks like a powerful push of the human race to the entropy (in many ways) and that will introduce a lot of chaos into the world.
youtube · Cross-Cultural · 2025-09-30T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyQtFmWreaMBi9vhU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz-DxTjvM0W06nx_LJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzC75vws_MrpvHLO-Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxuMRAJbwGFpUF7d-t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxrNeWkopmIO3xTOLp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvJ1hEMeyc1_O-kP94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxrqa4h7sHlLPTRJyJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJDI4gLfKuoWYFzlt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwvaaMVX5EO7FkpDf54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwjyzai-PENCI4TJAJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
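The raw response is a JSON array with one coding object per comment, keyed by comment ID across the four dimensions shown in the table above. A minimal sketch of how such a payload could be parsed and looked up by comment ID (assuming this exact shape; the variable and function names here are illustrative, not part of the tool):

```python
import json

# Assumed shape of a raw LLM response: a JSON array of coding objects.
# This sample reuses one entry from the response above.
raw_response = '''[
  {"id": "ytc_UgxrNeWkopmIO3xTOLp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]'''

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for a specific comment.
coding = codings["ytc_UgxrNeWkopmIO3xTOLp4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer mixed
```

Indexing by `id` mirrors the page's "look up by comment ID" behavior: each coded dimension for a comment is then a plain dictionary access.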