Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Human control is arguably the most dangerous aspect of an autonomous weapon syst…" (ytr_UgyXvOkou…)
- "Funny enough, when I tried to summarise the above with AI, the chatbot just tell…" (rdc_ktx0d3d)
- "You know why I don't worry. They tried this with self-checkout. That was inside …" (ytc_UgwE15AUf…)
- "I don't think you've got the right take on this. Sure "anyone" can give an AI a …" (ytc_UgwRDoxaI…)
- "Ai is currently gray zone of anything trademark, copyright, patent related. Its …" (ytc_UgyqnykFA…)
- "Who in their right mind would give a robot a gun? good grief what's wrong with y…" (ytc_Ugzw-kg6j…)
- "Ai lies. That's everything anyone with two brain cells needs to know. The indivi…" (ytc_UgyXmRecP…)
- ">No, you don't, because it didn't happen, even though most people using ChatG…" (rdc_my5qece)
Comment
It's absolutely hilarious to listen to people talk about LLM's who doesn't know (and probably doesn't want to know because it's real easy to look up..) how they actually work. What transformers are. What neural networks are. How tokens work. How text actually is generated and why it generates the stuff it does. How the models are trained and so on.
People seem to believe that this technology somehow knows what it's talking about. It doesn't. It has absolutely no clue because it doesn't have any form of will or consciousness or brain. It's merely trained with books, comments, papers and so on. Billions and billions of differet texts. It literally guesses what the next word is through statistical mathematics. I will never have free will. It will never be able to actually "think" or "know" what it's actually generates. It's an echo chamber of the user. You ask it conspiratorial questions? You will get conspiratorial answers because that's probably what you want and it calculates that through advanced math.
Relax people! 😅
youtube · AI Moral Status · 2025-07-27T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyuUNRPaQTKh3qZu5l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKvcYD6XiNbV72Wul4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzmifoy0yZpQYybvaJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw_vzFCR-8_sQJ7s_R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz_J9yyD7jARl2pPW14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRTull2yWNR-JJchN4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyzD8zEHGEkX3nekiB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwam1i_nAoN1DiWVJx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxTmiQAB9psngDJibJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzrWXV6GYKSU10PlxV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
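The raw model output above is a JSON array, one object per coded comment, each with an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and looked up by comment ID — `lookup_coding` is a hypothetical helper written for illustration, not part of this tool, and the embedded data is a two-entry excerpt of the response shown above:

```python
import json

# Excerpt of a raw LLM response in the shape shown above (hypothetical subset).
RAW_RESPONSE = """
[
  {"id": "ytc_Ugwam1i_nAoN1DiWVJx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxTmiQAB9psngDJibJ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    codings = json.loads(raw)
    # Each element is one coded comment; match on its "id" field.
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugwam1i_nAoN1DiWVJx4AaABAg")
print(coding["responsibility"], coding["emotion"])  # → user outrage
```

This lookup is what populates the "Coding Result" table for an inspected comment: each key/value pair of the matched object becomes one Dimension/Value row.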