Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Judge: What do you mean you didn't mean to kill him? You crushed his chest and f…" (ytc_UgzXyOnui…)
- "Why not sue the company that developed ChatGPT as well? It's obviously been trai…" (ytc_UgxFmdSg1…)
- "I made hundreds of thousands of dollars—yes, you heard that right—just by binge-…" (ytc_UgzXL0hWv…)
- "I see two possible scenarios. Scenario A: AGI won't happen and they know it but …" (ytc_UgzPYcJPS…)
- "What troubles me about AI is that everyone is so obsessed with creating somethin…" (ytc_UgwD8pi1z…)
- "One of the reasons I left my last job, you already got my entire handprint now y…" (rdc_iyytbiq)
- "to expand on this you can tell there is no emotion in the images generated by ai…" (ytr_UgyCPSylA…)
- "This is why I don’t bother with AI, it makes people lean a certain way, lazy peo…" (ytc_UgwenHztJ…)
Comment
before dropping my comment here who I am, I m a software engineer and a neural network engineer for 6 years now. so this is true, we develop most of the machine learning models like image detection and pattern detections but AI like chatGPT trained on 1.24T (trillions) amount of data so it can be anything but when it's come to a LLM (large language model) developers finetune them and put some rules on the main core so developers can manipulate the model as they want. for example chatGPT and gemini is more friendly while Grok is more flirty. but if we run the model without any finetunning or without any rules, now that's the situation comes upside down. for example in chatGPT they check before the model response is it nsfw content or not. it's on the core and that's why people can't brake it but sometimes some people manage to do that. in theory is if we run a model without that rules layer. it can be anything, and that's dark
youtube · AI Moral Status · 2026-01-22T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgyJxTwvrk_nnq0f7Mt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgyPsc7-l4MCpx2ymat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_Ugz7kz8dlw42wbRQ5S14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwGOADygqc8L-qMl7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwMpC-PZBZT5mwpjoB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzJ190IKZpLvLWLSWt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzZiuw259EEA7ds75t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwNWwI7cOdQ1iHG6id4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw1hJ65iKsTegvcJHd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugww5RPRSXwetYx74Kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
```
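A raw response like the one above is a JSON array of per-comment coding records, one per dimension in the table. A minimal sketch of how such a response could be parsed and validated, assuming the allowed code values are only those visible in this inspector (the real codebook may define more; `parse_coding_response` and `ALLOWED` are hypothetical names, not part of the tool):

```python
import json

# Allowed codes per dimension, inferred from values visible on this page.
# ASSUMPTION: the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear",
                "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
records = parse_coding_response(raw)
print(records[0]["responsibility"])  # developer
```

Validating at parse time, rather than trusting the model output, catches the occasional off-schema value before it reaches the coding table.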