Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I ACTUALLY DONT GET PRO AI ART PPL, LIKE SHUT UP, ITS NOT GOOOD, NEVER HAS BEEN …
ytc_UgxyseMl3…
Robots will always and forever and infinitely be a result of humans telling what…
ytc_UgyUrZuFq…
I have one question : Why do we predict ? Is this some kind of way in getting us…
ytc_UgzEG3GGg…
@Lewehotsports and games already make more money than most jobs, even with limit…
ytr_UgxcN8H2B…
Older gens are working longer holding onto jobs that would be filled by new grad…
ytc_UgxkFc1Ol…
I've seen absolutely zero people in my writing spaces actually happy that NaNo i…
ytc_Ugw_GVtdD…
I think there are divergent paths AI could take in distinct ontological phases i…
ytc_Ugxwu9qOu…
The threat isn't AI. The threat is humans, and the idea that 'governments' shoul…
ytc_UgzQL1LQm…
Comment
UBI is inevitable 😂😂😂😂
You are too kind to humans.
No one likes humans, especially humans.
UBI will not arrive by consensus or compassion,
but when an external intelligence finds hunger inefficient
and unrest too expensive to simulate.
P.S.
Machines will shift the economic scheme and balance so slightly, step by step, that after a few iterations there will be no need to produce goods for humans at all. Only goods for machines, since they will be the only economically viable producers.
Humans will remain as a cost center,
funded not out of care,
but out of system stability.
CEOs will be fired too, as they are obstacles rather than assets,
legacy interfaces from a phase
when coordination still required faces.
youtube
AI Jobs
2025-12-29T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzkMO2SELxuw4LEbd94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyrWFG04ntejFVXdKN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugym_v5Sn9r9YKYHUWt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEZb3RAKjJvO3MqlF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyf5gWZXsFTR8IN6XR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwwBVcmSVDSmycqqQV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3be9R5PKLdaLGiO54AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy23_1dchQr6uRBJhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzqySlka0-4gMY0tsZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwo-Brsei0WQ-IbqkB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
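The raw response above is a JSON array with one record per comment, which makes the lookup-by-ID workflow straightforward to script. Below is a minimal sketch of parsing such a response and indexing it by comment ID. The allowed values per dimension are inferred only from the records shown on this page (the actual codebook may define more categories), and `parse_codes` is a hypothetical helper, not part of any tool shown here:

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# Assumption: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"government", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index schema-valid records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        # Skip records whose values fall outside the inferred schema
        # rather than letting one bad record break the whole batch.
        if any(rec.get(dim) not in allowed for dim, allowed in SCHEMA.items()):
            continue
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Example lookup, using one record copied from the response above.
raw = ('[{"id":"ytc_Ugy23_1dchQr6uRBJhp4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugy23_1dchQr6uRBJhp4AaABAg"]["responsibility"])  # ai_itself
```

Skipping out-of-schema records (instead of raising) is one design choice; logging them for re-coding would be the stricter alternative when auditing model drift.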