Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Learning Language Models are incapable of ever achieving complex thought because they only predict the next word in a sentence, similar to autocomplete." I've heard this argument a large number of times, but I don't feel that it is valid because I follow the same process. When I construct a sentence, I have a vague idea of the shape of what I want to say, but I still construct the sentence one word at a time. I very rarely predict the last word in whatever sentence I am constructing, for example. I don't know if LLMs have the potential for achieving AGI, but this isn't an argument against it, IMO.
youtube AI Moral Status 2023-12-08T18:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz6lCWMyna-A4p_opx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwMotypJQgs_m3JFlV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgywSIbmpsSjJPNc8SR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwwycaGOCAnG9d44Dp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzFMtjfnzC2BwJNfsd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz08LF6Ni62Q-bwUjB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwZJ2HVfDHN-vp4UGl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxu3YWzqu-qjg-J2NB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzY5drA1yysvbg3tz54AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzRl2PJtzGBkekh6xh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
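Since the raw response is a JSON array keyed by comment id, inspecting the coding for a single comment is a matter of parsing the array and indexing by `id`. A minimal sketch in Python (the variable names and the single-entry excerpt below are illustrative, not part of the pipeline itself):

```python
import json

# Excerpt of the raw LLM response shown above (one entry, for brevity).
raw = ('[{"id":"ytc_Ugz6lCWMyna-A4p_opx4AaABAg",'
       '"responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"indifference"}]')

# Parse the array and index the codings by comment id for quick lookup.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Inspect the coding for one specific comment.
coding = by_id["ytc_Ugz6lCWMyna-A4p_opx4AaABAg"]
print(coding["responsibility"])  # none
print(coding["emotion"])         # indifference
```

Indexing by `id` also makes it easy to cross-check the per-comment table above against the raw batch output.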