Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
12:01 - It already exists and we use it for podcasting via NotebookLM. The only aspect of podcasting that will work is that people want to see people and not two ai "bots" chatting but I'll be honest, working in the ai space is very interesting and the speed that things are moving are at another level. I've been in the office since 6am, just working on ai models. It's 8:33am, I'll go through until 4pm, then go home and keep working from 5:30pm to 11pm, Monday thru Friday and about 4 hours on Saturdays and maybe 2 on Sunday. I've been doing this the last 2 years. If you think you're behind with ai, you're already lost. I know this because with all this time, I struggle just to stay on TOP of what's going on currently and don't have time to research what's ahead and coming down the pike. If you haven't worked with ai yet, man, you are done. I'm 53, and have been a computer nerd since the Commodore 64 in the early 80s, worked in I.T. at major financial institutions in Manhattan and currently manage our network, note that I'm a Marketing Director, so I do a LOT. But if you haven't taken the plunge, man, you are WAAAY farther behind that you THINK you are. I'm surprised that this dude doesn't know all this 12:40 - we've been creating commercials using fake voices, fake people that honestly look incredibly real for the last 10 months.
youtube AI Governance 2026-03-05T13:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxAJgj9z__zY8ufuT54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxmlOAZsWqb205iU_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyIKjMw9_J8nwtKVvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwL8RKztzZxFx-MQMl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzb0SAbH8h0_O6OP-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwDbAd8ibkYhgSfCOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy1EmU7lvQVbGScJNh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzZPR_7FxzDIJYDxvx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwaP2NxHIfvQN1C0Oh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyfb_SA47cXNWDrzMZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"} ]