Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
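A minimal sketch of what that lookup could look like in Python, assuming the coded records are exported as a single JSON array; the file name `raw_llm_responses.json` and the record shape are assumptions, mirroring the example response at the bottom of this page:

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_llm_responses.json"):
    """Return the raw LLM coding record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a list of {"id": ..., "responsibility": ..., ...} dicts
    return next((r for r in records if r.get("id") == comment_id), None)

# Example: fetch the record for one of the sampled comments.
print(lookup_raw_response("ytc_UgzpYrkuQ6e-ReAXXa14AaABAg"))
```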
Random samples:

- Both are true, it's unethical AND ugly. But also a legit potential danger for … (ytc_UgwAxOLfp…)
- predictive policing sounds like that sheriff watched to many movies and those mo… (ytc_UgwMKgaCQ…)
- And AI "Artist" calling themselves an artist would be like me calling myself a c… (ytc_UgzDF-p_t…)
- Based on the Things I worked on with AI, the Results for other People could be v… (ytc_UgxnHeL_S…)
- Well, putting attention on how AI art is bad can actually make AI so hated it di… (ytr_Ugz1lFxR9…)
- Predictive modelling tools are only as good as the data you feed. The criminal j… (ytc_UgzrdeD_e…)
- Ten years in the trucking industry. Feel free to burn me here I just feel like … (ytc_UgyEifFT3…)
- Stop worshipping the false God Large Language Models.. Stop an AI Singularity f… (ytc_UgxGdwCk4…)
Comment
37:00 i have difficulty subscribing to the idea that we could not program it. The promise of AI is that you can, wit a structured process, without the help of a human, get a system that mimics the imput. I think the 'without the human' is the important nuance here. Programming is actually a very very broad toolbox. Creating a program that can crack (good) jokes is very expensive, and humans are most of the time looking for another human to tell them a fun one.
It is the generalisation that makes the AI worth it to have the downsides of it being 'close enough but not on a human level' that we are now using it.
But back to the 'is it worth it' in energy costs and human research, i think AI is currently not break even (so you could have used programmers to write similar things we now use the AIs for). Thats more stupid if you look at it that way.
youtube · AI Moral Status · 2025-11-16T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
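For reference, the four coded dimensions plus the coding timestamp could be represented as a small record type. This is an illustrative sketch rather than the pipeline's actual schema, and the example values in the comments are the labels that appear on this page:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One comment's coded dimensions, mirroring the table above."""
    responsibility: str  # e.g. "developer", "company", "user", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "regulate", "industry_self", "none"
    emotion: str         # e.g. "approval", "fear", "outrage", "mixed", "indifference"
    coded_at: str        # ISO 8601 timestamp of when the label was assigned

example = CodingResult(
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="approval",
    coded_at="2026-04-26T23:09:12.988011",
)
```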
Raw LLM Response
[
{"id":"ytc_UgwiDwqQmghfNorSdC94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxP2jafTXxIURHXBUJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoOhSHyse9ukQlT3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwN0blEUkwAo1e-hcd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzpYrkuQ6e-ReAXXa14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTESFdM2DK5EjbK5N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHZAESu6MfSq8SHRx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyxWlMeekmHPKrEzyZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzO3dN6W10P7RhAz0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfKopPnT7WM7I2b8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
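A sketch of how a raw response like this could be parsed and sanity-checked before the labels are stored. The allowed values below are taken from the labels visible on this page and may not cover the full codebook:

```python
import json

# Label values observed on this page; the full codebook may include more.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"fear", "approval", "mixed", "outrage", "indifference"},
}

def parse_batch(raw: str):
    """Parse one raw LLM response (a JSON array) and keep only records with known labels."""
    valid, rejected = [], []
    for rec in json.loads(raw):
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
        else:
            rejected.append(rec.get("id"))
    return valid, rejected
```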