Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I think it's worth mentioning that luddites were trying to protest their jobs be…" (ytc_Ugwmaepli…)
- "This is such hyperbolic nonsense. Llms are like fish in a fish tank. Without the…" (ytc_UgwVHAiRB…)
- "If \"I sooo want to be a robot\" was a person. OMG! This Subhuman will tottally de…" (ytc_Ugz8z66JY…)
- "Bro, they can't even program an AI to /not/ get completely red pilled and based …" (ytc_UgxLW73Ks…)
- "Personally, I think that people should approach AI tools differently than they d…" (ytc_UgxPDqMxp…)
- "tbh i only use ai art to draw myself or stuff that i need to draw for myself, i …" (ytc_Ugxh5Fr30…)
- "Another problem with AI is that because it takes everything from around the inte…" (ytc_UgzR5J0PN…)
- "Reality check. AI is here to stay. Anyone that uses it knows that it can only re…" (ytc_UgyGcK7xc…)
Comment
> AI is a magic show with no experience.
> I have always believed it would be a helpful tool (at east that is what i have built for the past 40 years that people would call AI).
> It does not replace it collaborates.
> But, after seeing what the smart phone and social media have done to society (or what society allowed them to do) I no longer believe AI will be the great collaborator but instead the quicker path to "idiocracy."
> But every time companies create products where addiction and profit are the main focus and no one is saying "how will this effect society" we are going down the wrong road.
> It is still possible to make LLMs stop lying and teach it morals but that makes them less addictive.
> What would be nice is if AI would instead of showing reasoning it would return a confidence level on what it is telling us.
> LLMs can be shown that saying "i do not know" is better than making stuff up and we have protocols that do that today.
Source: youtube · Video: "AI Moral Status" · Posted: 2025-11-08T17:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxDmo18c2vvdm1yQ7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"respect"},
{"id":"ytc_UgwGAlQGZLoSE-kNHEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx0pkUTj6ztRmqe7uZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzwMdCLcVnMTJGqkut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwKSjjPDLtSP49LfhR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5TSj3WYtiZAakzZp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-pyFjAE_0WygVeJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxMQUrhvcX5Pv4ODC14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwDtF7IlUnNsyMGMSJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLyuIC0e67JM9LqrJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
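A batch response like the one above can be turned into a per-comment lookup to recover the coded dimensions for any one ID. A minimal sketch, assuming only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the parsing approach itself is illustrative, not the tool's actual implementation:

```python
import json

# Raw model output: a JSON array with one code object per comment.
# This sample row is copied from the response above.
raw_response = """
[
  {"id": "ytc_UgxMQUrhvcX5Pv4ODC14AaABAg",
   "responsibility": "distributed",
   "reasoning": "mixed",
   "policy": "industry_self",
   "emotion": "resignation"}
]
"""

# Index the array by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment shown in the table above.
code = codes["ytc_UgxMQUrhvcX5Pv4ODC14AaABAg"]
print(code["policy"], code["emotion"])  # industry_self resignation
```

Indexing by ID rather than scanning the array each time matters once the batch holds hundreds of coded comments.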