Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Remember when the MAFIA Ran the Trucking Business, into the 60's, 70's, & 80'???…
ytc_Ugx3cq5nS…
Ai augmented jobs will only go to top 10-20% in knowledge fields most. They will…
ytc_Ugxf1Qjd5…
First they come for art, next they come for tech, then they come for farmers, th…
ytc_Ugy4zGk5b…
If everything will be done by ai then how humans going earn and spend. Ai need t…
ytc_UgxKoUdxU…
Costs money. Who is going to pay? Also any plan would require that 3D printed ho…
rdc_deu5xqw
It would be logical to mandate that Ai super computers always be built with nucl…
ytc_Ugx36D-Sn…
The purpose of the video was to get everyone comfortable around these ai robots,…
ytc_UgxcSWPLl…
AI rocks! It is the most wonderful tool and it will be super beneficial for the …
ytr_Ugz7fCtvJ…
Comment
As a software engineer who went through a research-focused school not that long ago, and who is now in the industry, I resonate with the anildash article more than Nate's take. And even so, neither of them talk about the apparent scaling limitations / diminishing returns we seem to be hitting with LLMs.
Nate did sort of allude to the last AI winter being ended by an algorithmic change, but didn't then say we're ostensibly hitting the limit of this new algorithm/paradigm, and that the next paradigm shift could be a century away for all we know. LLMs just don't seem to be scaling their way to super intelligence to me. But like you, I don't know for sure and the future is definitely wacky.
youtube
AI Moral Status
2025-11-04T03:3…
♥ 66
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzdD362N-69jb_GqO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFKzdZ6IS3bSjeDGB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwK8vNHvAAC4qgyPZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi9ZyCrLQY6-3cWCF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHDlDtpu7Dv0PEtkx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz8TKA8OgiK9y0qax14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMJI7gRBEnkFgn6JB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwcNk_cuVklAe_4VVp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGrgrKNaUKIJiZ74l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUMsFWYfQOUsLfRIB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
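A response like the one above can be consumed as ordinary JSON: each record carries the comment ID plus the four coding dimensions (responsibility, reasoning, policy, emotion). The sketch below parses a raw response and drops records whose labels fall outside the value sets observed in this sample; the full coding scheme may well allow additional values, so `ALLOWED` is an assumption, not the tool's actual schema.

```python
import json

# Label sets inferred from the sample response above (assumption:
# the real coding scheme may define more values per dimension).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "resignation", "indifference", "outrage", "fear", "mixed"},
}

def parse_coded(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only records whose
    labels are within the allowed value sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]
```

In practice a stricter pipeline would log or re-prompt on out-of-schema records rather than silently dropping them; filtering is just the simplest guard to illustrate here.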