Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I feel like this is AI interviewing the United States right now! 😂 #maintenance …
ytc_UgxBLj3el…
"if you ignore the environmental impact of AI"
Well kid, I found your problem!
…
ytc_Ugy-GuHQF…
They don't behave the way we tell them to because they copied us and we don't ei…
ytc_UgzFBBS30…
Ah I see so the big wigs do understand that AI stealing copywritten work is wron…
rdc_nhxm26z
>Anyone using AI daily for programming work knows they will get burned very b…
rdc_n4d4rw0
I could imagine an AI having some difficulties hiring people through Fiverr sinc…
rdc_o3gwvwe
I wholeheartedly support and agree with what you're saying in this video, althou…
ytc_UgyggL9dp…
If you currently work in this field just leave now! Or start looking because AI …
ytc_UgxQTmAdq…
Comment
you guys have a real zebras v horses problem here. all of these problems are explained by income inequality. Weak demand leads to layoffs. If demand were higher, the hiring in the past would not have been overhiring. To the extent that the 'freedom dividend' would work it would be because it would strengthen demand. I'm in my early 40s and I've seen multiple waves of 'learn to code' type calls for retraining.
Yang talks about LLMs as if they follow moore's law, which pertained to transistors. The fact is that LLMs have mathematical scaling limits, sometimes referred to as the efficient compute frontier, and the big problem that they are limited to the nature of human knowledge and a given cross section of that information. I would love to hear a conversation between you and the fella from Welch Labs. A discussion that intersects the economic perspective with a much better informed technical perspective would be much more interesting then giving a new grindstone to Andrew Yang's old hatchet.
youtube · Viral AI Reaction · 2026-04-24T12:1… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwUa_zcPWGeokhQn0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxOzKci4yqJI2tWzlx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwqR7O4rGeqRPlEtuV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxfNqKn59bCXXaihBl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzsrn0PtATK87V3l594AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTnsUbWHZMB1uPnkZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzetxzSd7mzfRJxzjt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDCvI5prm5bX6lZWl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgytmkhMVGRQytkyjLx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgynPdkO_t28TPPZ2954AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
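The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four schema dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and indexed by comment ID (the two entries below are copied from the response above; any tool-internal parsing logic is assumed, not shown in the source):

```python
import json

# Two entries from the raw model output above, used as sample input.
raw_response = """
[
  {"id": "ytc_UgwUa_zcPWGeokhQn0F4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzsrn0PtATK87V3l594AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

# Index the batch by comment ID so one comment's codes can be retrieved directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# The second entry matches the Coding Result table shown for this comment.
print(codes_by_id["ytc_Ugzsrn0PtATK87V3l594AaABAg"]["emotion"])  # indifference
```

This is only an illustration of the response shape; a real pipeline would also need to validate that every returned ID matches a comment in the submitted batch.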