Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- `ytc_Ugz4G8aOq…`: "I think this is so sad because you would think with all the arnold schwartznegge…"
- `ytc_Ugz4p1YhP…`: "Truly amazing thank you for some new content😮 keep up the good work stay safe an…"
- `rdc_k7lr13y`: "on the other hand you can now go do porn and if anyone finds it you can claim th…"
- `ytr_UgyWs8tut…`: "Okay one it takes people's jobs. Two people just hate AI in general. Three peopl…"
- `ytc_Ugx2UfEF9…`: "I don’t have an issue with AI Art since it can be used alongside human-made art.…"
- `ytc_UgwQRLG7M…`: "Adherence to WHO’s truth is the question. Elon’s truth? Trump’s truth? Most like…"
- `ytc_UgydzWir8…`: "It's upsetting to think we've always said robots will replace certain jobs but o…"
- `rdc_lgtednd`: "Troy is perfect for him. He gets hourly rate for the training data (all his voi…"
Comment
Sabine is spot on about interpolation vs. extrapolation. It’s not just a software bug; it’s an architectural dead-end. Dr. Juyang Weng (who focuses on brain-like 'Developmental Networks') argues that because LLMs lack a mechanism for true autonomous generalization, developers are forced into 'Post-Selection Misconduct'—essentially cherry-picking the best probabilistic guesses to make the model look 'general.' It’s the 'Open Skull' method Sabine hints at, and it’s why we’re hitting a wall. Full breakdown with Dr. Weng here: https://www.youtube.com/watch?v=TFiMzCr1ed4
Platform: youtube · Posted: 2026-04-24T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxych95YSxv3GnVBfR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwOtnFknPM-39aFoM14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw8SAUbo1Qf80QpMWF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxTGLYokaL5Di4396x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyZWVBH_iP36oKekqB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKckKTzcxQDHabCCt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxhWdhrG0954tbtNNh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyBpf0BEoK5EoVNDeR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxWoOiTyQy1fj5-QKd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxATS5_U9O8XR2ek654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"mixed"}
]
```
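A raw response like the one above needs to be parsed and checked before the labels are stored, since an LLM can emit values outside the codebook. Below is a minimal validation sketch in Python. The allowed label sets are inferred only from the values visible in this sample output and are hypothetical; the real codebook may define more categories, and `validate_batch` is an illustrative helper name, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the labels visible in
# this sample output (hypothetical; the real codebook may differ).
SCHEMA = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "none", "ban"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Example: a single-row response in the same shape as the output above.
raw = ('[{"id":"ytc_X","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
rows = validate_batch(raw)
print(rows[0]["policy"])  # regulate
```

Rejecting the whole batch on a single bad label is a deliberately strict choice; a production coder might instead re-prompt the model for just the offending comment IDs.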