Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgwzuMKJe…: "I thought they were going to talk about the fact that almost no AI generated art…"
- ytc_UgyP1Shl0…: "The world is run by total ba5tards so we know the worst things are happening and…"
- ytc_Ugzmh692e…: "Genuine question that might spark a war; Do we give sentient AI rights? (Would l…"
- ytc_Ugwp1o03m…: "7 states are building massive AI databases. They are consuming farmland, power,…"
- ytc_UgwPqgm3s…: "We need a global uprising against this s***. We cannot become slaves to corporat…"
- ytc_UgwiPFmXM…: "Teach AI to like pets and all will alright, for a sustainable, cute number of us…"
- ytc_UgzRKb1JJ…: "The guy the last robot is based off of said he didn't consent to this and didn't…"
- ytc_UgyTOkqfq…: "Hey we're gonna lay everybody off but we're having trouble coming up with ways t…"
Comment
Current AI suffers from three critical shortcomings: injection, forgetfulness, and hallucination. Injection is when a prompt, intentionally crafted or not, induces bad behavior in the AI. Forgetfulness is when newly learned data pushes out previously learned data. Hallucination is when the AI simply makes things up with no basis in reality. Based on conversations with academics (mathematicians and applied mathematicians), we have no idea how to solve any of these problems, and likely won't for a long time. They may be unsolvable short of AGI, whose feasibility is itself still up for debate.
With these limitations in mind, I think AI will be a powerful tool for making work faster and easier, but it's not on track to replace any specific job role. My personal experience tracks with this: it's great at simple tasks that fit into a larger vision I already have, but it's often incorrect and requires my personal oversight. The tasks it's good at speeding up, actually typing out code, are not where I spend even 20% of my time. The tasks that are important to the business, generally described as "figuring out the right code to write," it has no chance of helping with, because so much of that work is locked up in collaborating with humans and researching topics that aren't yet settled.
When I consider the extreme cost of current LLM-based AI, I also doubt it will be a viable product. Processing data through an LLM is expensive, and the models are so enormous that they can't move outside of hyperscale datacenters. Their current popularity is largely subsidized by VC money, but that will dry up, and eventually investors will demand the business stand on its own with returns. Weighing the cost of building, or even just operating, an AI against the marginal efficiency improvements it yields, I don't know that it can be economically justified. It may regress to being the domain of a specialized role, or more likely to smaller-scale specialized AI used for specific tasks (think computer vision, navigation, etc.).
Source: youtube · Video: AI Jobs · Posted: 2024-01-15T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy1LM4zkmqFK7TqapR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNg66GI5c-siLY0El4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9o703oT9jp-W-lCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyo3J3fI3Y-p6p5r7h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyoOSUhb3j0QN1uYAV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAzxpCT-E6KCTYhzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwb8gt1ZZC-0kmvfpN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5VxK0LvTfi53LqjJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHLt5lEmE3QrE-b8d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxrAEuuJuWk5MDfcNd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
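The raw response above is a JSON array with one record per coded comment, keyed by the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a batch, assuming the allowed value sets are limited to the values actually seen above (the real coding scheme may define more):

```python
import json

# Allowed values per dimension. ASSUMPTION: only the values observed in
# the sample response above are listed; the real codebook may have more.
ALLOWED = {
    "responsibility": {"none", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "fear", "outrage"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any record with an unknown dimension value."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: bad {dim} value {value!r}")
        coded[comment_id] = codes
    return coded

# Usage with a one-record batch (hypothetical ID):
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}]'
codes = parse_codes(raw)
print(codes["ytc_x"]["emotion"])  # fear
```

Validating against a closed set of codes catches the common failure mode where the model invents a category label that the downstream analysis does not recognize.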