Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by its comment ID.
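For scripted access, here is a minimal lookup sketch, assuming the coded records are exported as one JSON object per line in a local `coded_comments.jsonl` file (the filename, field names, and helper below are illustrative assumptions, not the tool's documented interface):

```python
import json

def lookup_by_comment_id(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Scan a JSONL export and return the record whose 'id' matches.

    The file layout (one JSON object per line, keyed by 'id') is an
    assumption about the export format, not a documented interface.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch one of the sampled records by its full ID.
record = lookup_by_comment_id("rdc_n248dip")
```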
Random samples (click a preview to inspect the full record):

- Many say AI will replace the working class, as if that’s automatically a negativ… (ytc_UgzACx_v2…)
- Facial recognition was [trialed in UK cities](https://news.sky.com/story/met-pol… (rdc_ewu8f96)
- IF SOMEONE KILLS THEMSELVES BECAUSE OF CHATGPT THEN YALL GOTTA BE THE DUMBEST PE… (ytc_Ugw0mR1_S…)
- "Not anytime soon..." says the guy who's job is about to be replaced, just like … (ytc_UgwtowQMY…)
- this is just like when i sent my best friend my search history that contained al… (ytc_UgwiGMJQ6…)
- The filter can be broken if you know what words to use and at some point the AI … (ytc_UgyWdWxck…)
- AI is currently learning from text but when robots like neo get rolled out they … (ytc_Ugwtqoa3m…)
- I would say this may be true in some shape or form. LLMs are generally great at … (rdc_n248dip)
Comment
LLMs democratically reflect human nature (it is all linear algebra and stats). If AI turns out to be evil, it is only because we are evil. We train it on our thoughts and deeds, and then we expect it to behave "saintly".
I run LLMs locally, and I have the same models in censored and uncensored versions. The censored versions, when presented with tough prompts, just return "Sorry, I cannot help you with that." The uncensored models, however, have no problem helping me make illegal substances, plan how to steal money, or figure out how to get rid of a body.
Here is what you need to understand: both versions are trained on the same data, but the censored version has layers of safeguards blocking certain responses.
So why do I use the uncensored versions? Speed. I use them for coding, especially auto-completion. Without the safeguards, some of these models can run on a Raspberry Pi without any GPU support, using less power and fewer resources.
So what is the one risk? There is no way AI developers can manually filter out all harmful input data. There is too much of it. They feed the model everything... the whole internet. If that includes documents on how to make illegal substances or plan the perfect crime, then it is already in there. They then spend a lot of time coding and testing filters.
It ends up like "I know how to make a nuclear bomb... but I won't tell you how to build one."
When we reach AGI/ASI, we will just have to trust the safeguards to keep the system from turning evil.
Why do developers cry "danger" but keep on building these models? Money, power, a sense of achievement. When I was in school (1980s), I liked building explosive devices and rockets. My friends wanted to share the same experiences. I was safe, but they were reckless. I only stopped sharing information after a third friend almost ended up dead. The question you should ask me is why I did it in the first place, and why I kept going after the first friend ended up in hospital.
Conclusion... AI development won't stop. It has already escaped the lab. I can write my own code and fine-tune existing models. I have a financial backer who will fund serious millions if needed... and this entity is not connected to the AI industry.
I might not have evil intentions, but many others do.
Platform: youtube
Topic: AI Moral Status
Posted: 2025-12-15T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
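Each dimension draws from a small closed label set. Here is a minimal sketch of the record schema in Python; the label sets are inferred only from the values visible on this page and may be incomplete:

```python
from dataclasses import dataclass
from enum import Enum

# Label sets inferred from values visible in this section;
# the actual codebook may define additional categories.
class Responsibility(Enum):
    USER = "user"
    DEVELOPER = "developer"
    COMPANY = "company"
    AI_ITSELF = "ai_itself"
    DISTRIBUTED = "distributed"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

@dataclass
class CodingResult:
    id: str                         # comment ID, e.g. "rdc_n248dip"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: str                     # seen values: "none", "regulate", "liability", "unclear"
    emotion: str                    # seen values: "resignation", "fear", "outrage", "indifference", "approval"
    coded_at: str                   # ISO-8601 timestamp
```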
Raw LLM Response
```json
[
  {"id":"ytc_Ugxu3_8ET8cLwvog0Tp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjJ6EUSheKUNHk8bt4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxM23xiuplqiNxwqWx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz9zUH8u61r9NyT05V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzhprfx2Nvsgm0JJzV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyBS1cr0Wd1Yucs_Nt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmndxLHXkbwIenMB94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwUy9R71BkM5qboXm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxY2JByrJJhxRFZ_gp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6ardXyHMKhGtmb4h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
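The Coding Result table above is a single row of this batch; its values match the fourth entry, so that entry appears to correspond to the displayed comment. Here is a minimal sketch of extracting one comment's coding from the raw response, assuming `raw_response` holds the JSON array exactly as shown (the helper name is illustrative):

```python
import json

def coding_for(raw_response: str, comment_id: str) -> dict | None:
    """Parse the model's batch output and return the entry for one comment.

    Returns None when the response is malformed or the model skipped
    the comment, so the caller can flag the record for re-coding.
    """
    try:
        batch = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    return next((row for row in batch if row.get("id") == comment_id), None)

# With raw_response holding the array shown above:
# coding_for(raw_response, "ytc_Ugz9zUH8u61r9NyT05V4AaABAg")
# -> {"id": "ytc_Ugz9zUH8u61r9NyT05V4AaABAg", "responsibility": "user",
#     "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
```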