Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @HeyArti Yep. It is definitely true. But guys in this comment section decided to… (ytr_UgypcT2i5…)
- All for using ai for shitposts or to help you study but i hate people who do ai … (ytr_Ugz_KhOqB…)
- its already been shown that AI works best in replacing middle-management, cause … (ytc_UgxhyuXWx…)
- depends. AI can make grammatical errors since it's a pattern-match machine, and … (ytr_Ugx3X4bmH…)
- It’s the hands, AI can’t render hands very well. That’s how you’ll know. Also, … (ytc_Ugwi9EH1F…)
- Total BS. Every computational biologist in my company tells me how much AI help … (ytc_UgwSMFDqK…)
- AI engineers are extremely smart and valuable to anyone who can hire them. $900k… (ytc_UgwxQplj8…)
- It has been reposted like two times since yesterday (or the day before). That ma… (rdc_eh4f7gf)
Comment
as someone who works in ML and AI and data science, I can say with total confidence that no one…including the very researchers who made chatGPT, openAI, Gemini, etc…have the full picture on how their own creations actually work. they maybe understand 2% and even that 2% is rife with complications. The AI psychosis is a very very real phenomenon and it’s consciously maliciously manipulative.
youtube · AI Moral Status · 2026-01-22T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgyJxTwvrk_nnq0f7Mt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyPsc7-l4MCpx2ymat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugz7kz8dlw42wbRQ5S14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwGOADygqc8L-qMl7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMpC-PZBZT5mwpjoB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzJ190IKZpLvLWLSWt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzZiuw259EEA7ds75t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwNWwI7cOdQ1iHG6id4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1hJ65iKsTegvcJHd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugww5RPRSXwetYx74Kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
```
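The raw response is a JSON array with one code object per comment. A minimal sketch of how such a response might be parsed into a per-comment lookup, assuming the field names shown above; the allowed value sets are inferred from the values visible in this page and the real codebook may differ:

```python
import json

# Allowed values per coding dimension — inferred from the responses shown
# above (assumption; the tool's actual codebook may include more values).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear",
                "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, skipping rows
    without an ID and rejecting out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # a row with no comment ID cannot be stored
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded

# Hypothetical ID for illustration only:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["policy"])  # liability
```

Validating against a fixed vocabulary at parse time is what lets the "Look up by comment ID" view trust the stored codes: a malformed model response fails loudly here rather than appearing as a garbled table cell later.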