Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgziT482J…: Except current AI is still unreliable, can't really do anything by itself and is…
- ytc_UgwtscNfe…: Wait until the AI starts to be able to think and Begins to think it's better tha…
- ytc_Ugya0hkpu…: Don't think for a second that the WGA is against the use of AI...they just want …
- ytc_UgxPqlDEk…: Man i need cheap ram and ssd for my pc build which I have am saving for 3 years …
- ytc_UgzGf0Nt6…: "AI should be used for content moderation [...]" (for protecting human moderator…
- ytc_UgzUqXv3n…: I started my job as a Junior Developer in December 2024, and the impact AI has h…
- ytc_UgykvJ-Gq…: I’m done being afraid of AI. It has helped me more than 90% of the humans I have…
- ytc_UgykP3n9t…: Sorry, but this was too extreme to take seriously. He lost me with his simulati…
Comment
1:16:20 Counterpoint to the "Majority AI View" article: The engineers, PMs, etc who are tasked with taking LLMs from the lab to the market are not equipped to understand the technology. It's a very different beast. I work with a ton of highly competent engineers at one of the big AI companies with large conventional tech branches and the paucity of calculus or linear algebra knowledge alone creates such a barrier for them to deeply grapple with it. They fall back to reasoning about it with analogy to tools they're familiar with, like search, autocomplete, etc. The result is a pretty myopic bias toward assuming it will be like previous technologies.
I do think the alchemy analogy is accurate, but if anyone is the scientist in the room it's the people working on interpretability. The "tech people" who are working on deploying or fine tuning are not the experts whose opinions matter. They might be better informed than the general public, but it's marginal.
Source: youtube · AI Moral Status · 2025-10-31T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwGCenfic0DffQynGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMwUrLPPKGZc7N7gZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxYTqk0c1AMEO-Cn0R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugzvezki_UIzKiot7-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwqT_qp2eypDr9Kwf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYZAS6C1uYHlECl894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJjyR6omrJ_AWUSwR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTHp--dd6C17hBoY14AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyv3k5O2BLJBDFPWJN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz1YYOzpzTlkFa9XrV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
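The raw response is a JSON array with one record per comment ID, so looking up a comment's coding amounts to parsing the array and indexing it by `id`. A minimal sketch of that lookup, assuming the five coding dimensions shown above (the function name, the inline sample data, and the validation step are illustrative, not part of the tool):

```python
import json

# Two records in the same shape as the raw model output above
# (hypothetical sample, trimmed from the full batch).
raw_response = """
[
  {"id": "ytc_UgxMwUrLPPKGZc7N7gZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwGCenfic0DffQynGV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# The coding dimensions each record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(response_text: str) -> dict:
    """Parse the JSON array and index records by comment ID,
    dropping any record missing an expected coding dimension."""
    records = json.loads(response_text)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codings = index_by_id(raw_response)
print(codings["ytc_UgxMwUrLPPKGZc7N7gZ4AaABAg"]["policy"])  # liability
```

Since model output is not guaranteed to be well-formed, filtering on the expected key set before indexing keeps a single malformed record from breaking lookups for the rest of the batch.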