Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
And AI is the best argument against the oligarchy and unfettered capitalism. C…
ytc_Ugygvg2GZ…
Yet code is mostly transparent. But some day, if things go on, you will tell you…
ytc_Ugy5jMNqe…
If the EU already has regulations on AI taking jobs in healthcare and education …
ytc_UgyBxwf47…
In 2025 I have had to be on the phone dealing with A.I. and every single time, i…
ytc_UgxxLJf7h…
Automation didn't create jobs for the workera displaced in Detroit by the car co…
ytc_Ugy-qoiAO…
Bro, I’m so sorry, pro ai people are maybe the funniest breed out there. Even wh…
ytc_UgyUjqH7K…
To anyone who says "AI is just made to make things more efficient" You should re…
ytc_UgwPyx_TG…
AI will firmly take the place of humans, and bad people will multiply, increasin…
ytr_UgwpPttvW…
Comment
I think you might still have stable jobs with doing things with AI that increases quality of work when junior devs use AI to improve. AI isn’t continuously learning (Turing-complete) or agentic (recursion doesn’t work because gradient descent manifolds are flattened, leading back to a Turing completeness problem). I like how Andrej Karpathy describes it as an artificial spirit rather than an artificial human intelligence. It’s a hollower structure of intelligence that is still highly useful when us humans with more complete intelligences learn how to use it to improve work. That said, the superintelligence scenario is worrying if model capability goes up in this way..I’m a skeptic of that with the current RL and transformer + energy scaling paradigms.
youtube
AI Moral Status
2025-11-06T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyVF3XPGOawS-54AOx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_O5NAfCuhi_69hG14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzrH4v7YnVgcfw8VAh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-uGju0uiNmQGQ5EN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwGI1fCaYO7Ssoou9l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2nnMGueTMgcUg_iJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzrNR7UCeFwc30YfQR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzthlLbXFc2bC1VB7l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxkDVrUfI2M5eQyJ1R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwdKFaUZPEp9dUAmed4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
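The "look up by comment ID" step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of records keyed by `id` (as in the response shown above), and the helper name `index_by_id` and the two sample records are hypothetical.

```python
import json

# Hypothetical raw batch response, shaped like the JSON array above
# (two records shown for brevity; the IDs are taken from that array).
raw_response = """
[
  {"id": "ytc_UgzrNR7UCeFwc30YfQR4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzrH4v7YnVgcfw8VAh4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
# Inspect the coded dimensions for one comment by its ID.
print(codes["ytc_UgzrNR7UCeFwc30YfQR4AaABAg"]["emotion"])  # → approval
```

In this sketch the dictionary keyed by `id` gives O(1) lookup of the four coded dimensions (responsibility, reasoning, policy, emotion) for any sampled comment.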