Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- This poses another question. If a robot with free will murders someone who is at… (ytc_Ugh-PGzxf…)
- They are going to eliminate all the services and jobs EXCEPT everything that their… (ytc_Ugw5JM2Cf…)
- I only all the money paid to govt employees was managed by shitty AI. They would… (ytc_Ugz5YcOqr…)
- AI and robots are really two different problems, robots being the harder one. It… (ytr_UgwYSjrR-…)
- AI can make any kind of learning very easy, but... we have make sure how kids us… (ytc_UgxSRBXpX…)
- In my eyes, AI should only be available for the people that help like doctors, f… (ytr_UgzIeI6d-…)
- If we have deep fake apps i am sure we can have defaking or detecting deep fake … (ytc_UgzQotxrp…)
- whats wrong with using ai art for yourself not stealing anyone's art or copying … (ytc_UgwTdOPIj…)
Comment
And what about this? Everyone’s afraid of losing their job. But do you really love it that much?
People keep talking about what AI will do, but the problem is much broader. Maybe we need to redefine the complex meaning of life itself.
And isn’t that what humanity has always been moving toward?
We don’t actually need “work.”
What we need is the ability to survive — to have equal access to resources.
And the planet’s resources are abundant; they belong to everyone.
So the real question seems to be something like this: Do we want to keep competing endlessly for every crumb,
or do we want to cooperate so that everyone can live as well as possible?
What’s becoming increasingly important, though, is this — do we want to live better at someone else’s expense?
Because if that’s the case, AI might end up highlighting those differences even more sharply.
youtube · AI Jobs · 2025-11-05T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxHRnQD4nCRInUjzdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZx-kZ5lVM1P_lC-54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzrisYVMScT29w3cW54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxyXl2A7TGELbAbQ8V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyWXpJhmPnLKwptE2R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz1MgWq9_bOja7uPlR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz3mYL3Wd0vpaxLTD54AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyTBHU6TPkGlABeEhV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxTmM_De2y1jBFdcj94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNzrnsnkJo79PqXm14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
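The raw response is a JSON array of per-comment codes, so the "Look up by comment ID" feature reduces to parsing the array and indexing it by `id`. A minimal Python sketch of that lookup, assuming the response shape shown above (the `index_by_comment_id` helper is illustrative, not the app's actual code; the payload here is truncated to two of the ten records):

```python
import json

# Two records from the raw LLM response above, verbatim.
raw_response = """
[
  {"id": "ytc_UgxHRnQD4nCRInUjzdd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwZx-kZ5lVM1P_lC-54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
# Each value carries the four coded dimensions for one comment.
print(codes["ytc_UgwZx-kZ5lVM1P_lC-54AaABAg"]["emotion"])  # outrage
```

In practice the model's output may also arrive wrapped in prose or a markdown fence, so a production version would strip that wrapping before calling `json.loads`.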