Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing that I think is missing for the interview, I imagine because Dr. Yampolskiy is convinced it will happen, is that it is NOT guaranteed AT ALL we are actually going to get these AGIs and even less guaranteed we will get SuperIntelligence any time soon. AI companies keep selling the idea this is a done thing, but they are lying. We just saw ChatGPT5 fail to improve in any real and meaningful way from the previous versions and the rate of improvement has slowed A LOT in 2025. Of course Google, OpenAI, etc. keep saying this is coming as this make their shares go up but LLMs can no longer keep improving and we just saw these AI are not really thinking, so a new radical invention is needed and we still dont know what that invention will be. Not saying this wont happen, A LOT of money is right now invested into making this a reality, but at this point in time it kinda seems that for the time being (next 10years) we might stop at having some really good expert systems, and robots very good at some particular tasks, but that is it.
youtube AI Governance 2025-09-06T16:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxbnra59AgSIWThOq94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxvlrRWMKaa8ssHAHd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugwq0ELkXFj08WXjH7Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzhLoTcs3IP-dSzCCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwIlPbeVX-d_ihP17t4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz7KsjCr0ysdDJA1vJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy7RvO_hsLdjdoDkAp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyv7KN1SuQO-GzWb0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbYc5nrAIpvFMYbRV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxlEoSRz5msP5GCSwd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
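The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch response can be parsed and a single comment's codes looked up for inspection (Python is an assumption here, since the pipeline's language is not shown; the ids and values come from the response above):

```python
import json

# Raw LLM response: a JSON array of coded comments. Only two of the
# ten records from the batch above are reproduced here for brevity.
raw_response = '''
[ {"id":"ytc_UgzhLoTcs3IP-dSzCCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugxbnra59AgSIWThOq94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]
'''

# Parse the array and index the records by comment id so a specific
# comment's coding can be retrieved directly.
records = {r["id"]: r for r in json.loads(raw_response)}

# Look up the codes the model assigned to one comment.
coded = records["ytc_UgzhLoTcs3IP-dSzCCt4AaABAg"]
print(coded["policy"])   # → regulate
```

Indexing by id also makes it easy to verify that the stored coding result (the Dimension/Value table above) matches the corresponding record in the raw model output.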