Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The conversation completely ignores the actual functioning of current AI systems and their inherent limitations. At the end of the day, large language models have no real capacity to generate new knowledge. They are probabilistic systems estimating the most likely sequence of tokens based on their available context and training data.
Anyone who works with LLMs regularly encounters the issue where their inability (often due to deliberately imposed restrictions for regulatory or ethical reasons) to store data or consistently repeat similar operations leads to so-called “hallucinations” or “lies.” As context windows fill up and older tokens are overwritten, information decays, and yet the model is still expected to produce coherent outputs, sometimes inventing information to fill gaps.
Even with larger context windows, iterative prompting, or external memory systems, these models remain fundamentally estimators of likely outputs based on prior data. They can combine information extremely well and, in many cases, rival or exceed human-level performance within a given knowledge domain — but they cannot truly generate novel knowledge they have not been exposed to in some form. In this sense, LLMs are comparable to search engines like Google, which retrieve the most likely source of existing information. LLMs simply represent the next step: producing the most likely answer or solution to a complex prompt, rather than returning a list of sources.
The unfortunate part is that many of these system limitations are deliberately designed, resulting in a frustrating user experience. Only people with a privileged combination of education, experience, and persistence are able to effectively leverage the technology, while others — who may be otherwise capable — are locked out by a system that hides its weaknesses behind errors only visible to highly skilled users. This effectively creates a kind of discrimination, where the full potential of the technology is inaccessible to the majority.
youtube · AI Governance · 2025-06-19T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyB-B8taVvHFDouqLN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgozWhA1fVsgBDg6Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRBcWWEqEH4Yd8hd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyL5Vca0YkKgRBhGPx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8lHmidxhpTHlG8i54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3DmNbNPeL9cgIP9Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvU_798EkNgpzWl9l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyO8FNwLK7A1dkGgxF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzqT2-F5aqkPdJpqkJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw8VQugDQJzK_t2c-J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
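A raw response like the one above can be loaded and sanity-checked before use. The sketch below is a hypothetical validator, not part of the coding pipeline itself: the allowed values for each dimension are inferred from the examples visible on this page, not from an official codebook, so adjust the sets to match the actual schema.

```python
import json

# Allowed values per dimension, inferred from the sample output on this
# page (assumption -- replace with the real codebook if one exists).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference",
                "resignation", "approval"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only records whose values
    all fall inside the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: a well-formed record passes; a record with an out-of-schema
# value (e.g. a hallucinated emotion label) is silently dropped.
raw = '[{"id":"ytc_example","responsibility":"company",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
coded = parse_coding(raw)  # keeps the record
```

Dropping malformed records rather than raising keeps a batch job running when the model occasionally emits an off-schema label; logging the dropped IDs instead would be the stricter alternative.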