Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Computers will only do as you ask so if you're showing the computer to stick to …
ytc_UgynlRSEM…
The US Congress will never pass an opt-in law in AI’s appropriation because Big …
ytc_UgxR8B4yn…
@CassyD-x2p yes, and i also hate the carelessness of cattle and crops both dama…
ytr_UgyARejrO…
No.
we shoudl stop using AI
me included
it lets me think less by myself
it m…
ytc_UgwabovXO…
This is actually one of my biggest fears, someone replicating my art that I've s…
ytc_Ugw2CrN5z…
In other news. AI will raise sea levels and cause the heat death of the universe…
ytc_UgwwjAzx7…
The day AI delivers my groceries, I'm going to Stop Tipping! Also, I bet Only Fa…
ytc_Ugx6NO_wl…
everyone looks like everyone tho. most of us have two eyes, one nose, and a mout…
ytr_UgzKY5Vry…
Comment
Narayanan is like a religious leader... "But you need humans for..." Consistency?!? Really? Has this guy ever used customer service for anything? Human customer service agents regularly suck and are quite inconsistent for most companies.
Also, people like him keep moving the goalposts regarding what AI supposedly won't be able to do... therefore they won't replace humans? Sorry, that's a shockingly invalid way to make your argument, especially for somebody working at Princeton.
The problem isn't people overhyping AI. We will have problems if AI has not been over hyped and we aren't planning for it well in advance with regard to reshaping our economic systems, and we may be totally screwed if we keep ignoring alignment problems...
It's possible that AI will just end up being a tool and jobs will shift to areas that AI doesn't dominate as well. That's yet to be seen. And I don't care about hype, but I do care about the lack of concern for potential bad outcomes.
youtube
AI Jobs
2026-04-19T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id":"ytc_UgwPYmj2idqwZ86H3W14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgycfUViqxZHCV6qxL54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzYBf3xTxbStVwePrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwXUHBFx7G5j-Jk9mp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzY8R4WWffLMpe0ymt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwm9mP3BF9Dnnmk8EZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyIPCUIttCboRkqxut4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxZ5izurTjWiV0FKSN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugypqbjn97y4iAOJdKp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyB7Ix2KykPMRlNs7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
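A raw response like the one above can be parsed into a per-comment lookup table with a few lines of Python. The sketch below is a minimal illustration, not the tool's actual code: the `ALLOWED` sets are inferred only from the labels visible on this page, and the real codebook may define additional values.

```python
import json

# Allowed values per dimension, inferred from labels seen on this page
# (assumption: the actual codebook may include more categories).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "regulate", "liability", "industry_self", "none"},
    "emotion": {"unclear", "indifference", "approval", "resignation",
                "outrage", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into {comment_id: {dimension: value}}, rejecting unknown labels."""
    coded = {}
    for rec in json.loads(raw):
        dims = {dim: rec[dim] for dim in ALLOWED}
        for dim, val in dims.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {val!r}")
        coded[rec["id"]] = dims
    return coded
```

Given such a mapping, the "Coding Result" table for a single comment is just `coded[comment_id]`, which is presumably how the look-up view above is populated.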