Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I wish AI wasn’t discussed as a monolith-. There is no single ‘AI’. ChatGPT, Dee…
ytc_UgwpWN-M1…
Ofcos their is healthier methods, since when does that matter? The whole history…
ytc_Ugx03XW7F…
I have a question
I am anti AI. But I have ADHD, my brain is a mess sometimes. …
ytc_Ugzzyz-yY…
Ask John Searle, he's kinda the final arbiter. But FIRST, can your AI draw a ful…
ytc_UgxNHwuWF…
People need to have their own side businesses and multiple income streams. Also …
ytc_UgwADAF_i…
i have 3 papers 2 is black and 1 is white, you ask me what is the most common ty…
ytc_UgyWjAQ7j…
Somehow I don't have a capacity to care. The current economic situation is for m…
ytc_UgxsCBQoC…
Amazing video!
I love how you explain why ai is bad and how its detrimental to u…
ytc_UgzC7ojeO…
Comment
I think there's a different dynamic going on.
To get good code out of an LLM, it needs:
1. To be quite a large model.
2. To apply advanced reasoning on top of the base probabilistic lookup you reference
3. To have a clearly expressed set of requirements to reason against.
In the case of a co-pilot style AI application, it's expected to operate continuously, so large models would get exorbitantly expensive, particularly when you iterate around reasoning, and instead of giving it decent requirements, it's expected to guess by observing.
youtube
AI Jobs
2025-04-07T09:1…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzzM-HysThzS5_voFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfqGCYzhVA8m_TZ054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVD5P4WrqujzjhDth4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwsitvLgcr-pE-NMjR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgtRuKJXafC0yaqxt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwPTP3RMMODtoL5HvF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy3T6FZxsf3crn2PUd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyEJndDxdcIrI8j1yx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFN8CxsX2IOcxcsbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDoruIKSe8FbDKP9B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
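A raw batch response like the one above has to be parsed and validated before its codes can populate the result table. The sketch below shows one minimal way to do that, assuming the dimension vocabularies visible on this page (`responsibility`, `reasoning`, `policy`, `emotion` and their values); the real codebook may define more categories, and `parse_batch` is an illustrative helper, not part of the tool.

```python
import json

# Two records copied from the raw LLM response shown above.
raw = '''
[
  {"id":"ytc_UgzzM-HysThzS5_voFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfqGCYzhVA8m_TZ054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
'''

# Allowed values inferred from the responses on this page; an assumption,
# not an exhaustive schema.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "mixed", "approval",
                "outrage", "resignation"},
}

def parse_batch(text):
    """Parse one raw LLM batch into {comment_id: codes}, dropping invalid rows."""
    coded = {}
    for row in json.loads(text):
        cid = row.get("id", "")
        if not cid.startswith("ytc_"):
            continue  # malformed or missing comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

codes = parse_batch(raw)
print(codes["ytc_UgzzM-HysThzS5_voFd4AaABAg"]["emotion"])  # indifference
```

Keying the result by comment ID makes the "Look up by comment ID" view a simple dictionary access, and rows that fail validation are dropped rather than silently mis-coded.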