Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you define artificial general intelligence as equivalent to a person with an above-average IQ of, let's say, 120, then I think we have already achieved it. An LLM can already outperform humans with an IQ of 120 in the vast majority of tasks performed by a white-collar worker. People who push the date of AGI much further into the future always seem to be comparing it to a creative genius who can make a breakthrough discovery in a given field. Let's be honest, there are very few humans who have ever done that or will ever be able to do that. I'll grant that AI is not yet there and I don't know at what point it will be, if ever. But that's really not the issue. The issue is that within the next couple of years, it won't be worth it for many companies to employ nearly as many humans any longer. As to whether AI creates utopia or dystopia or enriches the 1/10 of 1% at the expense of the rest of us, that's almost a moot point. The cat's out of the bag and there's no way to put it back in. If you think otherwise, look at how well we've done with peace in the Middle East and climate change. So, best bet is to hope for utopia and plan for dystopia.
youtube 2025-06-07T21:2… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyXeoYlwBLZSQxXfFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwJ46WaAfrWr-MYoo54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyBkcArHMuNxtr9rTh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwqxlFRKGrBGM8UZsh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxDdDWGrBpOfQUCVMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzYcc29gZgExjVtbR94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy2nyUmGIAF8KQuwFx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgymhgAhLSGUR-0iiWR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzQ1g3tC7kpD1o7ABJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx7bOzAbYmgQ87JG0N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
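The raw response above can be post-processed into the per-comment table shown under "Coding Result". A minimal sketch, assuming the model returns a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields (the two sample entries below are copied from the response above; the variable and function names are illustrative, not part of the tool):

```python
import json
from collections import Counter

# Two sample entries copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyXeoYlwBLZSQxXfFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy2nyUmGIAF8KQuwFx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

codes = json.loads(raw)

# Index the codes by comment id so a displayed comment can be
# matched to its coding result.
by_id = {c["id"]: c for c in codes}

# Tally the distribution of one coding dimension across the batch.
emotions = Counter(c["emotion"] for c in codes)

print(by_id["ytc_Ugy2nyUmGIAF8KQuwFx4AaABAg"]["policy"])  # regulate
print(dict(emotions))
```

Looking up a single comment's codes by `id` and aggregating a dimension with `Counter` are the two operations the page itself performs (per-comment table, batch of ten coded ids).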