Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Two points: 1) to solve this problem, use AI. If it's that powerful, AI will pro…" (ytc_UgxJ0V5NK…)
- "You can absolutely blame AI, this is the psychology that genAI was designed expl…" (ytr_UgzdLrx4K…)
- "Yeah I dont like AI.. most of the time.. Angel Engine is my favorite AI made/AI …" (ytc_Ugyn4sN7Y…)
- "To be honest, the real problem is AI are created to evolve, (so in one moment no…" (ytc_UgyqOYDpg…)
- "There's a lot of philosophical arguments here, when these AI bros only do it bec…" (ytc_UgyEDOocM…)
- "I always thank ChatGPT cuz it helps me so much w my work and it comforts me too!…" (ytc_UgwX2uhnL…)
- "YESSS that or it will start repeating itself without addressing what you're sayi…" (ytr_UgyrK2MgM…)
- "I agree with the ai on that one, get rid of Africa, that would solve all world p…" (ytr_UgxbdrOCN…)
Comment
NONE of these systems are safe to deploy in to commercial or government or military services. They are friendly front end language processors with context identification analyzer code that integrate data search systems and amorphic weighting variables to produce organized subject responses. They become easily confused when presented with overriding or structurally complex and nuanced data, which causes ghosting and recursivity creates hallucinatory effects under random circumstances that no one has any idea how to correct for... because there is no actual "nuance intelligence and hyperbolic reasoning" behind these language processing systems. So deploying this tech in to areas where they are given autonomous functionality ... robot drone armies and policing and so on ... utter disaster.
This isn't a dig at Anthropic specifically ... This is a universal set of phenomenons across all platforms and even non LMM design models. KegsBreath and Orange Mussolini do not know anything about the "artificial" part of these systems ...
youtube
2026-02-28T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy745KW0bwYwXPOkeh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzkSGCDmiAk8-rJqm14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzD5yoHMr7EsRS3wTN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz3feiKK9KBfS9LBBF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXoJBy1OZU0JvOdKR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwC8dxH78M2d20gxmh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzFS68sh-r2X22kRpN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxmZujFJdcwUuAaTmp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwMFqmFBbpQUhoOPJZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxoIJkd3D4g6APxhoh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
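For reference, a raw response like the array above can be parsed and checked programmatically before the codings are stored. The sketch below is a minimal, hypothetical validator: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the coding table shown earlier, but the allowed value sets are assumptions inferred from the example rows, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension (assumed from the example rows above;
# the actual codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    dict keyed by comment ID, validating each dimension against SCHEMA."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        comment_id = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad value for {dim}: {row.get(dim)!r}")
        coded[comment_id] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a single-row example in the same shape as the response above:
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_x"]["policy"])  # ban
```

Validating against an explicit value set like this catches the common failure mode where the model invents an off-schema label, so bad rows fail loudly instead of silently entering the coded dataset.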