Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I hate AI PEOPLE. I DON'T THINK WE ARE HEADING IN THE WRONG DIRECTION. BUT YOU…" (ytc_Ugzig9TbI…)
- "The fact that it is getting billions of dollars in funding means the wealthiest …" (ytc_UgwYuugk9…)
- "I'm all in favor of banning AI regulation. But whatever. Guess other countries w…" (ytc_UgyEPwkfY…)
- "This is actually a good philosophical question in art. You're probably not gonna…" (ytr_UgzGeDJQ_…)
- "Okay... i wouls also be careful to classify ai "art" as a real art form... BUT. …" (ytc_UgziOlojW…)
- "AI is only as strong as its the materials that it's made of and the people that …" (ytc_Ugwn-aqPG…)
- "I apologize for the confusion. Here's the correct transcript for the video "Real…" (ytr_Ugwn90rrK…)
- "See that a computer science engineer and data researcher also committed that it …" (ytc_UgxAbD1eg…)
Comment
I think the timeline is off. As of today AI cannot reason and depending on who you ask many don't see AI developing reason in the next 10 years or more. And as long as AI can't reason there will still be jobs for humans. Agentic AI is the hot topic of the moment. It comes close to reasoning but not actually. It will mean that AI can handle more complex tasks which will replace many jobs.
Many will be surprised at the fields that get hit hardest. Some think the field of medicine would be one of the last to get AI replaced but in my opinion it will get more attention due to the rising cost of medicine. Replacing doctors and nurses with AI robots will mean cheaper medical treatment and fewer liabilities from human error. And don't get me started on all the layers of administrative staff of hospitals and insurance companies that won't be necessary.
youtube · AI Governance · 2026-02-14T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwabpL54NfVypDzs514AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx01a9cpgFQ_hxhI6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxccvAi1AhA501Sg3F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzLm9nDy9E_tMsg5CV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxVCcqpXxFZ7_jx6s94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHHpozge4HzJ0A55B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyqwdQPOW5RB8jK8154AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzqm1WfR3-9k7-unEJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6SUJt3hFNcbCTm094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxyr34bbVZoxcplUI14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
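The raw response above is a JSON array with one coded record per comment. The "look up by comment ID" step can be sketched as follows; this is a minimal sketch, not the tool's actual implementation. Only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown — the function name and the validation logic are illustrative assumptions.

```python
import json

# Two records copied from the raw LLM response above (truncated list for brevity).
raw = '''[
{"id":"ytc_UgwabpL54NfVypDzs514AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx01a9cpgFQ_hxhI6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Dimension names as they appear in the coding-result table and the JSON.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and index coded records by comment ID.

    Raises ValueError for records missing an ID or any coded dimension,
    so malformed model output fails loudly instead of silently.
    """
    coded = {}
    for rec in json.loads(raw_json):
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError(f"malformed record: {rec!r}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytc_Ugx01a9cpgFQ_hxhI6p4AaABAg"]["policy"])  # ban
```

With the records keyed by ID, rendering the "Coding Result" table for a given comment is a single dictionary lookup rather than a scan over the whole response.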