Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Saying we’re “getting close to AGI” made me instantly question the credibility of the people on here. We don’t know if AGI is even possible. A lot of experts are saying LLMs are a deadend. They’re neat, but a next word predictor is not going to ever be able to do “all cognitive tasks better than a human”, they don’t think or consider alternatives, it just spits out the next word based on probability.
youtube 2026-04-11T14:1… ♥ 12
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwNm47UAN0ngZkgzrN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxbH3uyUAIGlkFoMq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxR6OkjwmGCze5lanV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwiSrVlTqgKuPAQhUt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3bbfo-PL0ssI0GTN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRhHHzjNKcn1leiUR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwTcyz7ArM316s7URR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzVvGtiPmceHN9gjF94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxy3YKMlb8dyxaqvSh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwbvJMiEOg80rM28sR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
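A raw response like the one above can be parsed into per-comment codes and screened for out-of-vocabulary values before it is stored as a coding result. The sketch below is a minimal example: the allowed code values are inferred from the outputs shown on this page (an assumed vocabulary, not an authoritative schema), and the function names are hypothetical, not taken from the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the coded outputs shown above
# (assumption: the real pipeline may accept additional values).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a dict
    keyed by comment id, dropping entries with missing ids or codes outside
    the assumed vocabulary."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if cid is None:
            continue
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded


# Example: look up the codes for one comment id from the response above.
raw = ('[{"id":"ytc_Ugxy3YKMlb8dyxaqvSh4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')
print(parse_codes(raw)["ytc_Ugxy3YKMlb8dyxaqvSh4AaABAg"]["emotion"])
```

With the full response pasted in as `raw`, the same lookup reproduces the per-dimension values shown in the Coding Result table for the matching comment id.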