Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Bro. Just practice and stop coping. Lmfao. Ai literally causes pollution. If you…" (ytr_Ugz3abiIw…)
- "Musk has been using the phrase Full Self Driving for a decade now, and at no poi…" (ytc_Ugxy74FCW…)
- "Holy fu- / I just can't FATHOM how people like them actually exist. / also 17:50, …" (ytc_Ugyrmtsvw…)
- "I really dislike this point of view so destructive! Let’s take AI like a great o…" (ytc_Ugz_cepW9…)
- ""Closer and closer to Super AI." / meaning.... they've already achieved General AI…" (ytc_Ugy7SslEk…)
- "Neil should NOT be talking about AGI as if he knows what it will effect. He is s…" (ytc_UgwpRrdZs…)
- "1:26:45 I think he has demonstrated the "subjective" aspect, but not the "experi…" (ytc_Ugwox9CIg…)
- "In Teslas misleading 'statistics' the highway vs everything is just top of the i…" (ytc_Ugze2053b…)
Comment
Actually I prefer this model, AI led and assisted model than hiring and firing culture so prominent in the Tech industry especially in America (believe it or not in other developed countries the laws are created for the employees and not the employers and people don’t get hired and fired as much as in the U.S.).
While it’s true these Tech giants for some I worked for shouldn’t be solely relying on the AI technology, without AI the company or humans for that matter cannot operate more effectively. And AI can learn. I have implemented some AI enterprise systems in Fortune 500 companies. These AIs CAN learn. It’s utterly ridiculous that these employees are complaining “they have to babysit AI”, well if they have to babysit AI while in production they didn’t test the AI or the entire system itself properly and rolled into production without enough testing. It’s the way they implemented AI is the root of the problem and not the AI itself. And Google complaining the AI made a mistake? Why the heck did Google create such non-bulletproof AI or mistake prone AI and give them such a huge task?! AI is not to be blamed here. It’s the people who made the decision to put non tested non-bulletproof prone to hallucinate AI into the production. Test the damn system and put it into production.
And the testing of AI phase is when AI’s mistakes need to be corrected. And by correcting AI they can learn and improve their mistakes. It’s not just by feeding some test data sets. They have to be corrected multiple times by HUMANS for them to get better. If they had to babysit the AI it’s not yet ready. Test the damn system before putting it into production.
And I do prefer my life with AI. Would I feel sorry for the new grads who used to get spoiled treatments by these tech giants? Nope. It’s a great chance for them to either stick around because they love the tech so much or move out of tech. We would get more resilient talents who won’t break down and cry or quit by the slightest obstacle like the most youth these days. Like we don’t exactly have easy time but I held my ground and stuck out while everyone else failed or moved to another industry. Let them move to another industry if not getting 6 figures right out of college is what makes the leave the industry. I worked for free for a year just to gain experience in tech while and right after graduation. Let the system screen out the weak ones and only let the strong ones remain. Better for the industry.
youtube · AI Jobs · 2026-02-28T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxG09YkxtniagOpLsp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxyPf9nQVWyP9soJ0B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOgRaQYGin_bE9POx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyYdug549rMrMJT2D54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGoOrzqpoyMy8ULtN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy85Y8cwwmIiVvsgGZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwkf1ENr8oRNUt2INx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwSmjqkQxlMiUiaRI54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnJk1iHHZtESAcuKd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzDHPOb_NfMFC4XhOB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
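The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated before populating the coding table; the codebook sets below are inferred only from the values visible in this dump (the full codebook may allow more options), and the function name `validate_batch` is a hypothetical example, not part of the actual pipeline:

```python
import json

# Allowed codes per dimension, inferred from the responses shown above
# (assumption: the real codebook may contain additional values).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"outrage", "indifference", "fear", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# Example: the record matching the Coding Result table above.
raw = ('[{"id":"ytc_Ugy85Y8cwwmIiVvsgGZ4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"industry_self","emotion":"approval"}]')
coded = validate_batch(raw)
print(coded["ytc_Ugy85Y8cwwmIiVvsgGZ4AaABAg"]["emotion"])  # approval
```

Rejecting out-of-codebook values at parse time catches model drift (e.g. an invented emotion label) before it silently enters the coded dataset.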