Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think a bird is probably one of the worst examples used for this explanation. These guys are drawing a hyper realistically detailed face using the loomis method just to erase everything to explain what a circle is. No bug exists today that looks like a bird. The only man made objects you could confuse for a bird are planes which we designed after birds. Two wings are enough to identify a suggested bird. The size of that bird can suggest the type of bird. Birds are the only two legged animals fitted with wings and eyes on the sides of their heads. Is it a bird or a plane and why is Superman not the obvious choice? Because people don’t fly, bats fly at night and the phrase of bird or plane was never considered by people born before the invention of the plane. We have more time spent identifying birds and drawing obtuse V’s that make it pretty easy to identify a bird. LLMs can be convinced that the bird is factually a dog dressed as bipedal Superman, under the guise of a non binary fighter jet in one session and forget the conversation happened in a new session. LLMs are hype and not the ideal version of artificial intelligence the tech bro CEOs fear monger over to increase sales. MIT’s paper proves increased degradation in LLM responses while Harvard proves zero reasoning in a finite box of captured preexisting information just before the degradation and hallucinations begin which is why the default action when opening any chatbot application is to immediately present a fresh conversation devoid of any previous context. The infrastructure and energy, the math don’t, in any way, shape, or form equal artificial intelligence. When math doesn’t add up in mathematics and science, we say the information is false, misleading, inaccurate, or incomplete. When we apply mathematics, psychology, and science to LLMs— we chalk it up to a mysterious “black box”. Couldn’t convince me.
youtube · AI Moral Status · 2026-04-23T21:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
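The Value column is drawn from a closed category set per dimension. Below is a minimal sketch of that schema in Python, with the allowed values inferred only from the codes visible on this page; the real codebook may define more categories, and the names (CodingResult, the category sets) are illustrative rather than the pipeline's actual identifiers.

```python
from dataclasses import dataclass
from datetime import datetime

# Category sets inferred from codes visible on this page; the real
# codebook may include additional values.
RESPONSIBILITY = {"none", "ai_itself", "company", "developer", "distributed"}
REASONING = {"unclear", "mixed", "consequentialist", "deontological"}
POLICY = {"unclear", "none", "regulate", "industry_self"}
EMOTION = {"indifference", "mixed", "fear", "approval", "resignation", "outrage"}

@dataclass
class CodingResult:
    """One coded comment: four dimensions plus the coding timestamp."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Reject any code that falls outside its dimension's category set.
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"{name}={value!r} is not a known code")
```

Keeping the sets explicit turns validation into a membership test, so a malformed code from the model fails loudly instead of flowing silently into the results table.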
Raw LLM Response
[ {"id":"ytc_Ugy3hqFw1nesKa5wk554AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxhEgk3_0BZdpd_g2B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzkyCHH7psF6QwNMBB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx0slqsWwT3Og4oTQd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugx0ZDv29eQEtfubMTd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwXJVvc3D2KgrnRfyl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzb7jp2U9bwhiFmcRB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzMD23_cA4AaKX4WIZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgwIUV3cwvcKH7eatBB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy4UFBfoRroKUoxTDR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]