Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They don't underestand what AI is capable of and how it will UPSCALE. Basically, it's getting twice as smart every year and a half. By that rate, probably in mid 2027 you would have AI smarter in every way as the best humans, and something like 3 o 4 times as smart in 2030. That's why the billion dollars companies are investing in upscaling. Now, I think the implementation will be slow. People won't trust AI to do all jobs. Initially, big companies will use them, and people will use some form of capable human like asistant in their phone, that's it. Politicians, pharmaceuticals, lawyers, etc, will be trying to stop progress and they will succeed in slow it down a lot. Eventually, the generation that grew with AI, will demand it in every way possible, and by that point it will be so cheap that people will accept it. Maybe 10~15 years.
Source: youtube · AI Jobs · 2026-03-25T14:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzKg-cgxhD07rfwLoh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxiA-F-0KaRnQVg2il4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzzYVym5vYWoEDprW94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVg7cDyO1-cNLUyvF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwklYupz0cgCg6B2w14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9lV_a-6MeTmDRdsx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyH-h-0DDECNrW5eu94AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwMZpe-wfGWajmvr814AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwzm_Kdb3a79o06rUZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyT-cM-6zTPQreDkjp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
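A raw response like the one above can be parsed and sanity-checked before its values are loaded into the per-comment coding table. The following is a minimal sketch, assuming the model returns a JSON array with one object per comment id; the helper `index_codings` and the truncated `raw` sample (only the first two entries) are illustrative, not part of any actual pipeline.

```python
import json

# Subset of the raw LLM response shown above (first two entries).
raw = """
[
  {"id": "ytc_UgzKg-cgxhD07rfwLoh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxiA-F-0KaRnQVg2il4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# The four coding dimensions plus the comment id, as seen in the table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment id,
    raising if any entry is missing one of the expected dimensions."""
    entries = json.loads(raw_json)
    for entry in entries:
        missing = EXPECTED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"{entry.get('id')}: missing keys {missing}")
    return {entry["id"]: entry for entry in entries}

codings = index_codings(raw)
print(codings["ytc_UgxiA-F-0KaRnQVg2il4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by id makes the lookup for any single coded comment (like the `ai_itself` result displayed above) a dictionary access rather than a scan of the array.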