Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- @antiricergt: "Just because someone elaborates on numbers so that others can interp…" (ytr_UgzDR-pPW…)
- "If AI powered robots can do human work, then from the perspective of those who c…" (ytc_Ugx07-dbu…)
- "AI won't be fixing anything a trades person can do. I hope blue collar employees…" (ytc_UgxGfK4HP…)
- "Certain careers in the blue-collar and healthcare industry requires human qualit…" (ytc_UgzUIjm28…)
- "Sounds like fake examples of things ai never does. ai is not reusing your storie…" (ytc_UgwXLumbm…)
- "Software Developers won't be replaced for the foreseeable future, but AI can sig…" (ytc_Ugw7lKd1y…)
- "yall worried about the humanoid robots when we discovered they can grow living r…" (ytc_UgyhVqHa4…)
- "Hopefully I'll die of old age before all this AI stuff takes over. Just as a rem…" (ytc_UgwgZAwOG…)
Comment
There's little merit in second-guessing what the models do. The only people who can answer those questions are the ones implementing the algorithms. There are approx only a handful of classes of machine learning, the roots of which are supervised and unsupervised learning. These LLMs, no one believed it would work, it was only a stroke of luck that after throwing a huge amount of compute at the problem, that "something" of seeming value emerged. It was not obvious. In fact it is still not obvious at all as to how useful it actually is, given that the issues of disseminating problems using data have not just gone away. In fact, it's arguable that the problem of addressing problems has been made worse because what you want is a deterministic way of addressing problems, not more ambiguity. If anyone's interested in discussing this thesis, then let me know.
Source: youtube | AI Moral Status | 2025-11-09T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzp80nGxK9CsoXe9IR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6CiZrU7CjH12nGbp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxb20gerwtvIijJOnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz7dak4tEwMw_QaxQx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyl-PMxkbGbfbI_CBJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwxa1sSJblnvPHU9zZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwBIg-7rjt8OsBnn9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzvxUZkuiyLj6TdWGZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzO2ARKFwURWNOC9jx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwjGHdRNxmsPElyw6B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
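A raw response in this shape can be parsed and indexed by comment ID, which is how the per-comment coding result above is recovered from the batch output. The sketch below is a minimal example, not the tool's actual implementation: the dimension vocabularies are assumed from the labels visible in this page (the real codebook may contain more categories), and the `index_codes` helper name is hypothetical.

```python
import json

# A raw LLM response: a JSON array of per-comment codes, as shown above.
# Only two records are reproduced here to keep the example short.
raw_response = """
[
 {"id": "ytc_UgzvxUZkuiyLj6TdWGZ4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgwjGHdRNxmsPElyw6B4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
"""

# Allowed values per dimension, assumed from the labels visible on this page;
# the real codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference"},
}

def index_codes(raw: str) -> dict:
    """Parse the model output and index records by comment ID,
    dropping any record with an out-of-codebook value."""
    by_id = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[rec["id"]] = rec
    return by_id

codes = index_codes(raw_response)
print(codes["ytc_UgwjGHdRNxmsPElyw6B4AaABAg"]["policy"])  # industry_self
```

Validating against the codebook before indexing matters because LLM coders occasionally emit labels outside the allowed set; silently indexing those would corrupt downstream counts.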