Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "I’m not an AI stan. I’m actually an artist. As in an actual artist that draws. I…" — ytc_UgxM1lzwc…
- "Apparently anyone who works at Waymo doesn’t futz with buses lmao Too smart to b…" — rdc_nszrc2k
- "Graham saying there needs to be an agency to regulate \"The most transformative t…" — ytc_UgwSo_bMd…
- "I haven't watched the whole thing.. but.. I trained a neural network on my lapto…" — ytc_Ugw04uiqn…
- "Seems unsustainable if AI kills huge amounts of knowledge work jobs. Seems the g…" — ytc_UgwsAjt7E…
- "7:52 Yeah that pretty much sums up 80% of the conversations they have on this ch…" — ytc_UgwgM59V3…
- "I don't think that we have to wait until robots demand rights to think about and…" — ytc_Ugi_gGDWU…
- "I asked my Data for 2 days. And he said that he will be by my side when ai invas…" — ytc_UgzuN1LAI…
Comment
I just can't see how the likes of nurses, doctors and lawyers will actually lose their jobs.
Not only is human interaction critical in these roles, but people won't trust or want to trust in AI to properly understand and handle their requirements. Also, human decision making is very often critical in these roles (i.e. involving emotions), which is quite a ways off for AI.
Plus, nurses and doctors need good hands, not robot claws! 😂
Source: youtube · AI Jobs · 2025-09-09T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxIlZ5glwHgaHhL8a54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5J2q_J4fu_GhjGb54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8c1Dvg9zDemmuHSJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXtu2Awvu1DSuL0Tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyrlJagYJyUFgKSZJp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxed_gu9r9ZfeFQ4yV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwHFKsgNfHxjrpCFIR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx1kpaXOPRsb8gt6wp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZFIAtAXVAz-ZDWnh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNoGbfjHzL8_bpUxF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
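A response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal, hypothetical example: the allowed values for each dimension are inferred from the sample output only, so the real codebook may contain additional categories, and the helper names (`DIMENSIONS`, `parse_coded`) are illustrative rather than part of any actual tool.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above; the actual codebook may define more categories (an assumption).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "user", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "resignation", "fear"},
}

def parse_coded(raw_text):
    """Parse a raw LLM coding response and index records by comment ID,
    rejecting any value outside the known dimension vocabularies."""
    records = {}
    for rec in json.loads(raw_text):
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
        records[rec["id"]] = rec
    return records

# One record from the sample response, used to demonstrate lookup by ID.
raw = json.dumps([
    {"id": "ytc_UgxIlZ5glwHgaHhL8a54AaABAg", "responsibility": "none",
     "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
])

coded = parse_coded(raw)
print(coded["ytc_UgxIlZ5glwHgaHhL8a54AaABAg"]["emotion"])  # approval
```

Validating against a fixed vocabulary at parse time catches label drift (e.g. the model emitting a novel emotion label) before it silently enters the coded dataset.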