Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwT5mjgL…: Maybe by "artificial intelligence" they meant humans that aren't very smart but …
- ytc_Uggdqnahc…: Ok stupid robots should not take our jobs so stop trying, just cause your rich w…
- ytr_Ugwq2LS4-…: That's a thoughtful point! The dialogue highlights the balance between AI and hu…
- rdc_kyz0h8b: Yep. But like a lot of things fear of the future drives the question popping up…
- ytc_Ugy1CBpJP…: AI is racist, lmao, but doesn’t this suggest that it’s just more mechanical and …
- ytc_Ugz2Ag56U…: There are parts of the human brain we don’t understand, AI will be that understa…
- ytc_UgyZOw-tE…: Well chatGPT clearly doesn't care about humanity. It's cold calculation. That's …
- ytc_Ugz1ch5jh…: First weed fried your brain like an egg. OK. Sure. Next it was video games tha…
Comment
If AI were to try to solve the many major problems humanity has created, it would quickly realize that all other species live their lives within relatively small, natural cycles.
From a purely logical perspective, the simplest solution might not be to fix those problems, but to eliminate the one species responsible for causing so much harm.
And if the AI were merciful, it might choose not to destroy us entirely, but instead force us to live with the same small ecological footprint as every other species.
youtube · AI Governance · 2025-11-07T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
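Each coded comment carries the same four dimensions shown in the table above. A lightweight check that a record uses only expected values can catch malformed model output before it reaches the database. The sketch below is hypothetical and its allowed-value sets are only the labels observed in this sample, so they are likely incomplete:

```python
# Hypothetical validator for one coded record. The vocabularies below are an
# assumption: they contain only the values observed in this sample batch.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it looks OK)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

A record matching the table above (responsibility `ai_itself`, reasoning `consequentialist`, policy `unclear`, emotion `fear`) passes with no problems reported.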
Raw LLM Response
```json
[
{"id":"ytc_UgxrQQ7u4xRx0nz4SZp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz_X9xv90fLq9QvY4p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxGAd5Ea9ClHY2Xf894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz2Mzky-osL_Mhlq5d4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxVZlBooyGhOpAxxZt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZYCfqXudsjEAaLtV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgymL8xEE2s_zyf0Vy94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAZU4G0vlw4_fI0LR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxX2EFNVbdVydXD4vV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyRSDw-kLe-FBoglOd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
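The "look up by comment ID" step can be sketched by parsing the raw model output (a JSON array of coded records, as above) and indexing it by `id`. This is a minimal illustration, not the tool's actual implementation; the two inlined records are copied from the batch above:

```python
import json

# A minimal sketch of looking up a coded comment by its ID, assuming the raw
# LLM response is a JSON array of records like the batch shown above.
raw = '''[
  {"id": "ytc_Ugz_X9xv90fLq9QvY4p4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxrQQ7u4xRx0nz4SZp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]'''

# Build an index from comment ID to its coded record.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

record = by_id["ytc_Ugz_X9xv90fLq9QvY4p4AaABAg"]
print(record["emotion"])  # → fear
```

Because IDs are unique per comment, a plain dict is enough; if the same ID could appear in multiple batches, the later record would silently overwrite the earlier one, which may or may not be the desired behavior.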