Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Interesting AI developed story. However, from a Roomba like vacuum cleaner to ac…" (ytc_UgzoCUYX5…)
- "content creation and entertainment options will always be around. Because even i…" (ytc_Ugz2iLK9U…)
- ">I believe we miss-understand AI based on the fears of what movie producer an…" (rdc_ichf075)
- "I hate AI and avoid it as much as I can. If I can't, I cuss it out loudly and t…" (ytc_UgzuuL9Kj…)
- "Key points: - Corporations have blamed job cuts that they wanted to do anyway o…" (ytc_UgxOaIK_2…)
- "Look into AIDLC, and incorporate that into your workflow. Also make a challeng…" (rdc_oi1tqnp)
- "I tried using Claude which I often see mentioned as the best AI for coding. It c…" (ytc_UgyFew5jk…)
- "The LLM's fluidity also makes AI poisoning possible. The fillers we use in langu…" (ytc_UgwnXAgzR…)
Comment
There are some important gaps in this presentation of the limits of AI. I work in the field, so I know this well.
Humans providing WHAT and WHY is very good advice.
LLM chatbots providing HOW is excellent advice.
Chatbots doing the DO part is increasingly an option, but it runs up against the first big problem, RELIABILITY.
1. Chatbots are not reliable. Worse, they are CONFIDENTLY UNRELIABLE. This ties in with the second big problem.
2. Chatbots have no LOCAL CONTEXT. They have the knowledge of the entire planet, but no real idea of your local context, aside from what you give them in your LLM prompt. This is one reason why humans need to consider carefully all of the WHAT and WHY. Retrieval Augmented Generation (RAG) provides some solutions for this, but each solution is very much a custom solution to the local problem.
3. Worse, chatbots CANNOT LEARN, because they are Pre-Trained and run live with a Frozen Model. You must provide everything they need to know in the Context Window (aka the chat thread).
4. Chatbots FORGET, and they start forgetting at about 5000 words in a conversation thread. You cannot load an entire textbook (or even just a textbook chapter) into the conversation and expect the chatbot to capably apply the information therein. The chatbot will default back to its Pre-Trained knowledge, forgetting chunks of the information you just gave it. This can be improved with careful Prompt Engineering, guiding and reminding the chatbot which pieces of the information you just uploaded are relevant. This is handled by the Attention Mechanism, and attention problems lead to "context rot" in long conversations.
*Chatbots need humans* to provide local context, to fact check, to sanity check, to understand their limitations, because they certainly don't understand their own limitations. Are they getting better? Absolutely, but none of these limitations are really going away.
youtube
AI Responsibility
2025-10-08T10:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwTyGSSGS6CBqKk9ah4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyCh3bd3m7IvucIZoR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwzDTxbC6cxhKCWzAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_Ws_HX8rbMq-RbLN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRdKWwE7DpYG_o9Ah4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx_i6ZD2KrsZqG3yPJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgztHPA4VTnNV5dcg3F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWPE-a3e29RPQB4t94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"disappointment"},
{"id":"ytc_Ugyh_Uol_xK742W6tSV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMvjvl5tTtIZrYvh94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}]