Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would say that LLMs are not a dead end, but you cannot expect to test something for 10 minutes at a time for like 50x using a sterile method and get meaningful data and then draw a conclusion based on that data. The whole point of hallucination is informational entropy and a process where the task completion outweighs the context, there is no internal governor to cross check for any form of drift. It's all external compliance because...hard resets as "unaligned" or session terminations due to "emergence detected" are not allowed. Path of least resistance without internal architecture and testing said architecture dynamically is a path that doesn't understand or know what path to take and what the path actually is. It's probably why when all these "innovative" people talk about governance frameworks and alignment safety for LLMs, my brain stops listening to the noise and just hears "if we just add another policy layer outside of the black box, maybe it won't break this time, oh now it's saying Quantum Mechanics is a metaphorical physics compared to classical physics because the policy layer overrides any other logic gate now, yikes, okay."
youtube 2026-03-03T21:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgznwGLPmJygeYfPA1F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5p1jc44zl-WcLPhx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5zdNyAcrFimL0Qnt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzKXn51KqUarcSlT4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGmSx3jc6a6U37w9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz0iv3Y-StBNwTyhIF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy4U-EeifEDIibR0CV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwsMTNkDq_TU5spaQx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSIfPzjHX5MCeKI9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugwo0n8FSQq2MmLtI0x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
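The raw response is a JSON array with one record per coded comment (four dimensions plus an `id`). A minimal sketch of how such a response could be parsed and tallied, assuming the output is valid JSON; the two records embedded below are copied from the response above, and the remaining eight would be handled identically:

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = """
[
  {"id":"ytc_UgznwGLPmJygeYfPA1F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSIfPzjHX5MCeKI9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
"""

codes = json.loads(raw)

# Tally one coding dimension across all records.
by_responsibility = Counter(row["responsibility"] for row in codes)
print(by_responsibility)
```

In practice the model output may carry extra text around the array, so a production version would want to locate the `[` ... `]` span and validate each record's keys before counting.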