Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I am about 80% sure the ChatGPT knows exactly what you are doing and is willingl…
ytc_Ugyhxu8Mx…
No, LLMs can't demonstrate humour they are stochastic parrots and are just putti…
ytc_UgwpDODe6…
AI x Traficional music production is similar to Uber x Traditional Taxis... ada…
ytc_UgxFw8-5u…
AI can only reproduce, we will get tired of that, the human brain needs new thin…
ytc_UgyTyigMa…
@spycenrice8108 No it isn't. The reason people such as yourself get butt hurt a…
ytr_Ugyt2P4ef…
If we have all of the housing, clothing, food, and healthcare that we need (whic…
ytc_Ugxu16P9W…
For some reason a certain group of AI bros are super hostile towards artists. Wh…
ytc_UgyMhl1yi…
There is only 1 person I have seen online that uses AI to create something that …
ytc_UgwpyCD7r…
Comment
I’m seeing a lot of interviews here about AI, and they are interesting. But so far, I’ve only come across one that really had more of an “AI isn’t what we think it is” or optimistic perspective. That got me thinking:
If so many experts seem to be aligning around some version of “we may be only about two years away from everything going sideways,” then why do you keep having the same conceptual conversations over and over, just with different guests?
If we are most likely headed toward some kind of disruption or doom, and everyone more or less agrees on that, then please give us some more good, relatable, and entertaining content too.
Kevin Hart — that was great.
Dave Chappelle — I genuinely think he would be an excellent guest. Not just for jokes either. He’s incredibly insightful and has a rare ability to unite people effortlessly.
Jefferson Fisher — please bring him back.
Sammy Gravano — also an interesting choice.
And if AI content is a must — which, to be fair, I do love — then bring on some everyday senior leaders from operational worlds like payroll and HR.
I would love to hear a real conversation about how AI is actually infiltrating those environments, how frustrating it is because it is wrong a lot of the time, and how almost all the focus right now is on enterprise-level IT AI governance. What about operational and functional governance?
Those of us in HR and payroll departments are being forced to figure this out on our own. The Big 4 are not dropping millions to solve the practical, day-to-day AI problems being felt by boots-on-the-ground HR and payroll professionals every single day.
I would love to hurry up and earn some kind of certification that actually qualifies me to evaluate the algorithms that are increasingly “running” the systems we rely on. But outside of my own standards, my own methodology, and the hands-on experience I’ve been gaining since late 2023 in my little corner of AI governance, there really are no clear official learning avenues for that.
Right now, my options seem to be:
A) teach myself, which absolutely means falling on my face somewhere between 2 and 7,000 times — because that is how trial and error works, or
B) get an AI governance certificate where the curriculum is mostly geared toward hackers, privacy, cybersecurity, and threats, rather than the real operational decision-making happening inside payroll, HR, and finance functions.
It blows my mind that management and staff alike really do not have a reliable source of truth in this climate.
I would also love to hear some thoughtful theories on how the human race is supposed to make a living once AI is fully here and work, as we know it, is reduced or gone. Will lived experience become more valuable than formal training? Or will formal training matter more? What are we actually supposed to be preparing for?
Sorry for the rant, but I wanted to be candid: the episodes since about mid-February have been hard, and sometimes even painful, to finish. A lot of them feel repetitive and, honestly, boring. The AI guests often sound like they are repeating the same ideas in different packaging.
I hope this feedback is helpful. I’m being blunt because I genuinely love the whole crew and I’m a loyal watcher/listener. I just wish the episodes felt a little more inclusive, a little more grounded, and a lot more useful in terms of what real people can actually do to keep pushing forward and keep growing. Thanks, Belle-Anna D. 😮
youtube
AI Governance
2026-04-15T13:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxpZox4gJ94iWbaN3Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlR8uDpxiJwjfpPZl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz0OfYxIVvUmXgoB414AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyr-FOMgx-f49C03x14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugybr6aKf4f5IGWc9Ep4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyjgmKInNawHIbwGDJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzIKtxMTADOXIdT5JZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyT6Iq8GDzKbreSiDV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugza8wZGYSfEUmuPItl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
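For anyone working with these dumps programmatically, here is a minimal, hypothetical sketch of how a raw coding response like the one above might be parsed and sanity-checked. The allowed-value sets below are inferred only from the ten rows in this sample (and the "Coding Result" table), so they are almost certainly incomplete; the function name and schema are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Vocabulary inferred from the sample output above; likely incomplete.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"unclear"},  # the only value observed in this sample
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array and flag out-of-vocabulary values."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                problems.append(
                    {"id": row.get("id"), "dimension": dim, "value": row.get(dim)}
                )
    return problems

sample = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"virtue","policy":"unclear","emotion":"approval"}]'
)
print(validate_codings(sample))  # → [] (all values in vocabulary)
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise surface only as a silent "unclear" or a downstream aggregation error.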