Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect (a minimal lookup sketch follows the list)
- @LucaAnamaria yes. I'm not saying therapy doesn't work with AI. But a human can… (ytr_UgwriL-q2…)
- 😎🙌 if you talk with my chat gpt its a better than human 🙌 better than your artif… (ytc_Ugw0HhRTS…)
- These are not rabbit holes. They are arguments that go round in ever decreasing … (ytc_UgxrxwC9G…)
- Fantastic conversation, I wish I could have intervene many times ... needs to be… (ytc_UgxVnO6cP…)
- Hear me out: what if you could get rid of all the complicated and unreliable tec… (ytc_UgzIuOlJ6…)
- There is no "I" in generative AI. "I ..." means "The consensus in the data is ..… (ytc_UgzP27hDe…)
- I don't mean to sound apocalyptic, but we could be staring down the barrel of a … (rdc_l5eqxsh)
- My I’m not defending the driver or Tesla, but as a Tesla owner it sounds like he… (ytc_UgyRapN3t…)
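The lookup can be reproduced offline. A minimal sketch, assuming the coded results were exported to a JSON Lines file with one record per comment; the path `coded_comments.jsonl` and the export format are assumptions, not part of the pipeline shown here:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded comments by their ID for constant-time lookup."""
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

# Hypothetical export path; the ID is taken from the batch shown further down.
coded = load_coded_comments("coded_comments.jsonl")
print(coded.get("ytc_UgxC_E7NyAp15uvFUtl4AaABAg"))
```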
Comment
I'm kinda embarrassed for both of them for not understanding that in the twice-used horse/car in the early 1900s example, which I think is a very apt analogy, people are the horse. Horses went from having lots of jobs to almost having no jobs in the span of a decade or two. And no amount of retraining or job creation could support the now unemployed horses, because technology simply made horses obsolete in all but a few cases. We are rapidly approaching a time when focused AI (not AGI) and robotics (includes self-driving vehicles) will make a large portion of the workforce unemployable just like that 1920s horse, and there is no retraining nor new job that's gonna solve that because there will always be 30-ish percent of the population for which a machine will do that new job better/cheaper. And I feel like I'm being wildly optimistic with 30%.
Platform: youtube | Topic: AI Moral Status | Posted: 2025-08-15T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
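The four dimensions in the table fit a small typed schema. A minimal sketch in Python; the value sets below are only the labels observed on this page, and the full codebook may define others:

```python
from dataclasses import dataclass
from typing import Literal

# Labels observed in this section only; the complete codebook is an assumption.
Responsibility = Literal["company", "developer", "ai_itself", "distributed", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "liability", "none", "unclear"]
Emotion = Literal["outrage", "fear", "approval", "indifference", "resignation"]

@dataclass
class CodingResult:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```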
Raw LLM Response
[{"id":"ytc_UgyU8m4YapVLYseN6pR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxm0YZSIhqDfeY437N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxLIqCLTBzfz9gXyGd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwPBIriFixF4OErPH54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbaibMyT1oTohbas54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjV8icF4wYRj4cQ8J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyk_cC2LeCR5VpIWll4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzRS2lh8YDd3N6Xp3N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy95JFpasveZB4vKsx4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxC_E7NyAp15uvFUtl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]