Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:
- ytc_Ugyr1bC0_…: "doesn't this just make AI even more enticing? AI would never strike and is proba…"
- ytr_UgyMQZ2c3…: "Id hate to be arrested by the first generation of robot cops. It's not going to …"
- ytc_UgzHbyHr8…: "No, those who forecast a specific timeline for AI achieving ASI or proclaim it w…"
- ytc_UgxNAbd8K…: "I personally see a danger in human dependence on AI like it's some kind of oracl…"
- ytc_Ugz1PDBCH…: "THIS VIDEO IS BASED ON A FALSE DICHOTOMY / It asks us to choose between two, and o…"
- ytc_UgxN7Cyiq…: "Yup, this is what public school used to be like… Back when the states ran educat…"
- ytc_UgyWJn38U…: "I like the idea of a robot tax. I am wondering how that will be enforced? It is …"
- rdc_jdm2d68: "me: / > ChatGPT, please congratulate ChiaraStellata for her amazing work! / cha…"
Comment
> This feels like denial tbh. Comparing AI to electricity or the internet doesn’t really work, those were tools we used. This is closer to building something that can actually do the thinking.
> In past disruptions people moved up to new jobs, but now it’s going after knowledge work itself. If something is smarter and cheaper than humans, it’s hard to see it just stopping at the “old jobs”. The usual arguments like “it still makes mistakes” or “jobs are up” feel weak. The progress is fast and compounding. What it couldn’t do 2–3 years ago it can now do pretty well. And it’s not just random people online saying this. Some of the people building it are the ones warning about it. Meanwhile companies are spending huge amounts on this. That usually points in one direction.

Platform: youtube · Topic: AI Jobs · Posted: 2026-03-21T23:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzO2zufOYAIFRG3yS14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxhmO-krO5YeK75urN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUyf-H7lhVypV9oJJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw0D3ObIoo13-CWy1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_47Gzbha-YZzATAl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyNQvgWH2MR1NIAXN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzY4szEBcabbVP9Gb14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxO1V28Rx1RovYM9kh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgznXXoL0aB-uTImRYt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwrIlev8-wKS2LGZl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]
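Each record in a raw response like this can be parsed and validated against the coding schema before being stored. A minimal sketch in Python, assuming the label vocabularies are the ones observed in the output above (the real schema may allow more values); `parse_coding_response` and the demo IDs are hypothetical names for illustration:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the labels
# visible in the raw response above; the actual codebook may be larger.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "approval",
                "outrage", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records without a comment ID
        # every dimension must be present and hold an allowed label
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical demo record (not a real comment ID from the sample above).
raw = ('[{"id":"ytc_demo123","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_coding_response(raw))
```

A record with an out-of-vocabulary label (or a missing dimension) is dropped rather than coerced, which is why a comment can end up with all dimensions shown as "unclear" in the coding-result table when no valid record matches its ID.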