Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Current LLMs just predict the next most probable token. They seem to be intelligent, but I don't think you can compare them with human intelligence. Currently, they have no trigger or intent to do anything. They don't run on their own and try to achieve anything. Why should that change? They might become better in terms of finding better solutions to our problems, but I can't see a Superintelligence somewhere near. Besides that - Mr. Yampolskiy seems so smart and he is so deep into that topic - I will start to reconsider my opinion. My prediction is: AI will hit a wall soon and we achieve only minor improvements to their problemsolving abilities with newer models. We will need less and less computing power to achieve the same level, but I believe, Superintelligence is still 100 years away...
youtube AI Governance 2025-09-06T15:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzxXh7xyQngFi2qpN94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzJv25P4290mILANm54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxAxoY7FO7pKHnQ43t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfVuTFJ55VMAQr0AR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKTuoInCLWhRDO0FZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwTftWHjhiKR8HlosR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIyjGPOUZ18zB9Ual4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw8n8G1U_7vYS0tHF14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy8U6hUaxqaxjzk_ZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzzuN_Q0Lz5iglQMp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
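A raw response like the one above can be parsed and checked before the codes are accepted. The sketch below is one plausible way to do this in Python; the set of allowed values per dimension is only inferred from the labels visible in this batch (the actual codebook may define more categories), and the function name is hypothetical.

```python
import json

# Allowed labels per coding dimension, inferred from the batch above.
# This is an assumption: the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each coded record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id' field")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Example: a single record matching the coded comment shown above.
raw = ('[{"id":"ytc_UgxAxoY7FO7pKHnQ43t4AaABAg",'
      '"responsibility":"none","reasoning":"consequentialist",'
      '"policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # indifference
```

Validating against an explicit label set this way surfaces malformed or hallucinated codes at parse time rather than letting them silently enter the coded dataset.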