Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Einstein wasn't a robot because they aren't conscious if we stop using the inter…" (ytc_Ugyff9hgq…)
- "I wish everyone thought like this :( I’ve actually cried several times because o…" (ytc_UgxFrsNI8…)
- "Not advocating one way or another here. But just correcting a point in error. AI…" (ytc_UgxmN0hOg…)
- "Uuuh, why would I play Conflict of Nations when I need to prepare to survive and…" (ytc_UgzYe6bU8…)
- "There are zero ethical issues with this. The AI is not generating a \"copy\", inst…" (ytc_UgxEwGWy_…)
- "Especially with how many movies we've made about \"How bad AI is.\" It will know t…" (ytr_UgzwsK6Ke…)
- "using AI for customer service is good, but when it comes to complaints... AI is …" (ytc_UgwbeEfQD…)
- "I think the a.i. are much more like humans that anyone seems to want to think…" (ytc_UgziQvlqc…)
Comment
There's been interesting work replacing MLPs with KANs and getting the same performance with 10x+ less parameters. So you can run 10 agents in parallel for the same computation as older models. Or run one agent with a fraction of the latency. If American companies can figure out how Chinese models fixed context window scaling, those two enhancements will make today's AI look foolishly bad.
youtube · AI Jobs · 2026-03-22T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgygH_cpjGesOEnXVcJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKo6ZlOPjDYnxS7b14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxIo1whsSrYrb_shUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwT6IQheQxqLbX2e3B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy17BdUhqayUxql2-94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyAKZUr0rx193pIDah4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwT2o_zWGBAC4Av0b54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyKhHo9wC5CQV3dg9p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzUDfozkEAjmNbOVvx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyW8hy3YXvMsJZW9ER4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
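A raw response in this shape can be turned into a per-comment lookup with a few lines of parsing. The sketch below is a minimal example, not the tool's actual implementation: the function name is hypothetical, the dimension names are taken from the sample above, and it assumes the model returned a well-formed JSON array of objects each carrying an `"id"` key plus one key per coding dimension.

```python
import json

# Dimensions coded for each comment, as seen in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coded_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Assumes the model returned a JSON array of objects with an "id" key
    plus one key per coding dimension, as in the sample above.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        comment_id = row["id"]
        # Keep only the expected dimensions; fall back to "unclear"
        # for any dimension the model omitted.
        coded[comment_id] = {d: row.get(d, "unclear") for d in DIMENSIONS}
    return coded


# Usage with a single hypothetical coded comment:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_coded_batch(raw)["ytc_example"]["emotion"])  # prints "fear"
```

Keying the result by comment ID mirrors the "look up by comment ID" view above: the coded values for any sampled comment can then be retrieved in O(1) from its `ytc_…` identifier.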