Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just asked two different major AI models what time it was in NYC and it was off by more than 12 hours. It told me the price of BTC was 30k. AI is still wildly unreliable while still presenting its answers in a smart, authoritative way. All of these AI salespeople are pushing as hard as they can for fast adoption, even though its accuracy is questionable in many areas. I think this is why they are so focused on replacing artists- they want to show its rapid advancement while trying to cover up that it's still not reliable when it comes to measurable results. All of this will obviously change down the road. But they are overselling AI past its actual capabilities in hopes every CEO (who usually doesn't understand how anything is actually made) will fall for the grift.
youtube AI Jobs 2025-06-28T11:0…
Coding Result
Dimension: Value
Responsibility: company
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgztHBOsAK9ZCpD0EAp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwbbaMVOGN8Qr6jfYN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxcJ_h3OPHYCSjHApR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxs4yioFPPx3wWIiiZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy1ppe2Vr2BjUkq79p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
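The raw response is a JSON array of per-comment coding records keyed by comment `id`. A minimal sketch of how such a response could be parsed and the coding for one comment looked up (the variable names are illustrative, not part of any documented API):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
raw_response = """[
  {"id": "ytc_UgztHBOsAK9ZCpD0EAp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwbbaMVOGN8Qr6jfYN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw_response)

# Index the records by comment id so any coded comment can be inspected directly.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgztHBOsAK9ZCpD0EAp4AaABAg"]
print(coding["responsibility"], coding["emotion"])
```

Indexing by `id` makes it straightforward to line each coding record back up with the original comment it describes, as the inspection view above does.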