Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgxctfSLN…`: "First of all, this whole idea of "contributing to society" is a made-up con. It'…"
- `ytc_UgzUqbBJA…`: "How about we restrain them based on moral compass in a way, that worse actions i…"
- `ytc_UgyKEiRfo…`: "It's not that it is super realistic as much as women who wear tons of makeup loo…"
- `ytc_Ugz-T1Br8…`: "AI will wipe out also the middle class and the low end of the high class. No bus…"
- `ytc_UgyExt2nR…`: "AGI is not possible and that is not what we are building toward. The fundamental…"
- `ytc_UgzBX7qBm…`: "The obvious joke? Are the people who think AI is sentient, sentient? I have to w…"
- `ytc_Ugwm_HK6Y…`: "All of these endless podcasts of mental masturbation about a technology that is …"
- `ytc_UgzPYfvNR…`: "You make a lot of great points why you dislike AI. The environmental impact espe…"
Comment
This video starts with two logical fallacies: AI isn't going to destroy jobs because it hasn't done it yet, and people have been wrong with similar predictions in the past. Neither of those mean that continued advancements in AI and robotics can't or won't replace jobs at a massive scale.
But yes, there will also be a great deal of skill shifting that keeps the agile employable for some time. And some jobs will remain human centric, because we are humans and will intentionally keep it that way.
Source: youtube · AI Jobs · 2026-03-06T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
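The table above shows one coded record across four dimensions. A minimal sketch of validating such a record before accepting it into the dataset; the allowed value sets below are assumptions inferred from the samples visible on this page, not an official codebook:

```python
# Allowed values per coding dimension. NOTE: these sets are inferred from
# the sample output shown on this page and may be incomplete.
ALLOWED = {
    "responsibility": {"none", "user", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation", "mixed"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value is missing or unrecognized."""
    return [dim for dim, values in ALLOWED.items() if record.get(dim) not in values]

# The record from the Coding Result table above:
coded = {"responsibility": "none", "reasoning": "consequentialist",
         "policy": "none", "emotion": "mixed"}
print(validate(coded))  # -> []
```

An empty list means every dimension carries a recognized value; a non-empty list names the dimensions that need review.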
Raw LLM Response
```json
[
  {"id":"ytc_Ugyoo8NpgUStcLq3Pth4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyhwo9x5lu3I3dDkOF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxwTHqQlFAFG8Bmeph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzfHnw9BEcoE28epT94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwbogzzfNcgA-ZMcZ54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4L5FET6BbyLyv9-J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwXIN0nQFb6r76n4PJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyTyq1-bckEilM3sXV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2CNtbZBNZrw5M6dB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxMdKaupBg_WtxCIJt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
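The raw response is a JSON array of coded records, one per comment ID, which is the shape the "look up by comment ID" feature relies on. A minimal lookup sketch, assuming the response parses as shown; the excerpt below uses two real records from the output above:

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of coded records.
raw = '''[
  {"id":"ytc_Ugyoo8NpgUStcLq3Pth4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2CNtbZBNZrw5M6dB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Index the records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

record = by_id["ytc_Ugz2CNtbZBNZrw5M6dB4AaABAg"]
print(record["emotion"])  # -> mixed
```

Building the index once and reusing it is what lets the inspector resolve any coded comment without rescanning the full response.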