Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:
- "I hate it when people who are pro AI use the excuse that AI can’t make things ou…" (`ytc_UgwXUaVb0…`)
- "Then you don't know what intelligence is. Your definition of AI is the AGI with …" (`ytr_Ugw4uTR1u…`)
- "I found one comment who defened ai with clean english but i check the profile an…" (`ytr_UgxePM8XB…`)
- "The most stable field might be creating the Job eater i mean AI engineer, ai wou…" (`ytc_UgwqXqCuN…`)
- "Ai is just human intelligence so it is trying to hard to act cool for the camera…" (`ytc_UgzuX0nlS…`)
- "Hell no, this was not a suicide. Who picks up take-out and then shoots themselve…" (`ytc_UgwGQvRz8…`)
- "I think the use cases are overhyped. If you scope it to the right use cases that…" (`rdc_mleypjy`)
- "Ai like most people need a prompt, so it’s only as intelligent as you can make i…" (`ytr_UgzCgKJ2x…`)
Comment
They claimed jobs like teaching and mental health professions will be safe, but 1) public school teaching jobs are ultimately paid by taxes. If there isn't a tax base, how will the salaries be paid? 2) many mental health professionals charge their patients/insurance companies to get paid. If their patients don't have money/insurance plans, what then?
Anyway, it's obvious what the answer is. Governments must regulate AI and stop it destroying the livelihoods of human beings. And if they don't do that, they need to at least force the AI companies to pay for a decent (and I mean DECENT and regularly increased with cost of living, not poverty-level) UBI to every human being. They'll have to say, if the companies want to have data centers in their country, they have to pay their fair share towards UBI.
youtube · AI Jobs · 2026-03-04T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
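Each coded comment receives exactly one label per dimension. As a minimal sketch of how one such record could be checked before it is displayed in this table, assuming the label sets below (inferred only from the values visible on this page and in the raw response, not from the full codebook; `validate_record` is a hypothetical helper, not part of the app):

```python
# Minimal validation sketch. The allowed label sets are assumptions inferred
# from the examples on this page, not the project's full codebook.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it looks fine)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dimension, allowed_values in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed_values:
            problems.append(f"unexpected {dimension!r} value: {value!r}")
    return problems
```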
Raw LLM Response
```json
[
  {"id":"ytc_Ugxf89dIIMlz1neXY2B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwiV4H3RA0wFO2dY1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnJvjQyMRFxIPKDG94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFwI4wdyRk3-mvCuF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgytZVMQB1wAIe2dYp54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxlakTRPk9O05rY7XR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxgO_qdZEcpHQCEn_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzmLvlakfM5LIKjCbd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxuyk0MLUDWoaLfTAV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgymiHrjDSwDxTm_csF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
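The raw response covers the whole batch the comment was coded in, so looking a single comment up by ID amounts to parsing the JSON array and indexing it on `id`. A minimal sketch of that lookup, assuming the response text has been saved to a file (`raw_response.json` and `index_raw_response` are placeholders for illustration, not names used by the app):

```python
import json

def index_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded records) and index it by comment id."""
    records = json.loads(raw_text)
    return {record["id"]: record for record in records}

# Hypothetical usage: "raw_response.json" is a placeholder file name.
with open("raw_response.json", encoding="utf-8") as fh:
    by_id = index_raw_response(fh.read())

coding = by_id.get("ytc_Ugxf89dIIMlz1neXY2B4AaABAg")
if coding is not None:
    # For the example above this prints: government regulate fear
    print(coding["responsibility"], coding["policy"], coding["emotion"])
```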