Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Also Ai art has vaule based on the person who sees it, you do not alone get to d…" (ytc_UgzRP_8xL…)
- "Completely disagree. People keep saying AI won’t be a big disruption because we …" (ytc_UgzIONGRt…)
- "All guys want to see a Cyberpunk/Steampunk version similar to the thumbnail made…" (ytc_UgyMZ7sSF…)
- "STOP, STOP, STOP, STOP BUYING FROM A BIG COMPANY….or the government / 1% will co…" (ytc_UgwNVZJm-…)
- "This whole video is nonsense. Self-driving cars have a much quicker reaction tha…" (ytc_UghHt-JHG…)
- "Cool let’s just believe everything we read on the internet!! No ai model would r…" (ytc_UgzEXw8s3…)
- "@jmhorange > *It's not in the case of whether ai companies should be allowed to …" (ytr_Ugz8bOCyY…)
- "1) Optimistic Scenario — A Cooperative and Sustainable Society Societies have an…" (ytc_Ugx-l1-eU…)
Comment
I'm not entirely confident that AI is as great as it's being pushed to be. Perhaps the biggest problem is that, having worked as a developer for years, I'm too familiar with the types of code that AI is trained on -- and if that's what's being used as a base for "correctness", heaven help us all! But I have had a co-worker who demonstrated to me how he uses AI, and I came away unimpressed -- it strikes me as a type of "search" that doesn't tell you the sources, making it difficult to figure out whether and why the AI "solution" might be wrong, and it doesn't provide the context needed to make it right.
Again, this might be the fault of developers, because there's a lot of "help" in forums and Stack Overflow that's like that, too!
Overall, though, I can't help but sense that you kind of have to be an expert to use it well -- because you need to be able to tell when it gets things wrong, which is made worse by the fact that AI is really good at making text that sounds right! -- and you can't become that expert by relying on AI to produce things.
One suggestion I heard, that really resonates with me, is the notion that experts using AI, when approaching problems, need to come to their own conclusions first -- and then use their AI to test their conclusions, and see if it brings up anything they may have missed!
Source: youtube · AI Jobs · 2025-03-10T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugxud4LqcRrweQhTGEJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4WKaEYC4BRSFtAt94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw1eZvFyhad2orxFDh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzRChARNE5Ewn0Lg_Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXNQsjbsORlDwLghF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz07B8twbpWWcg9SoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUrBmxa2jgOY9jg554AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKsmRwKOGNll29n8V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbLlIry3bEXi3UO9F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyn_0Ezmp66TDp0OXB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]