Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I did some entry to this topic myself... Hopefully more people will open their eyes. I think science should serve the good of humanity, but if someone claims the 'basics' as their property to the point of annexing everyone else's work, it discourages others from fixing abandoned ideas. James Watt used his patents to block superior high-pressure steam technology for 30 years because it threatened his lower-pressure design, effectively slowing the Industrial Revolution. Grigory Perelman walked away from the Fields Medal because he found the 'narcissism' of the establishment (claiming credit for work they didn't finish) to be morally revolting. This feels similar. We need to honor the people who actually found the 'high-pressure' path (like Mikolov), not just those who claimed the territory decades ago.

Comment I posted and was banned: "Watching this, it’s fascinating to see Hinton defend the 2015 Distillation paper as a 'groundbreaking' feature, but we have to credit Tomas Mikolov (2013) for the real breakthrough. For decades, the 'Godfathers' (Hinton, Bengio) were stuck on complex, biologically-inspired models that were too slow for the real world—basically building a 'map of a nonexistent planet.' Mikolov was the one who looked at these 'abandoned ideas,' stripped away the unnecessary complexity, and found the 'last piece of the puzzle.' While Hinton provided the 'Deep Learning' philosophy, Mikolov’s Word2Vec proved that Simplicity + Scale > Complex Theory. If Science is a relay race, Hinton defined what 'running' was, but he was carrying a heavy backpack of theory. Mikolov dropped the backpack and actually reached the finish line. We shouldn't debase the 'Engineer' just because the 'Architect' had the theory first—without the engine, the blueprints are just paper."
For those who want to see the breakthrough paper that convinced Google and OpenAI to pour billions into the "Scaling Hypothesis" (the idea that if we just make these simple models bigger, they will become "conscious"), and the reason we have LLMs (AI) deployed and running: it is on Google Scholar / arXiv under the title "Efficient estimation of word representations in vector space".
youtube AI Moral Status 2026-03-04T09:2… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgywrwffJ7UVykrk7yN4AaABAg.ATrnMoEVCNMAU2fmSj4yiL","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgytV1pB9MINc2dSpMd4AaABAg.ATrcnWLGdy8ATrh3JzvqSD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgytV1pB9MINc2dSpMd4AaABAg.ATrcnWLGdy8AU4tgmIWtPW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxirK7zMYMdyUSLAzV4AaABAg.ATrbu5oGmuTATvm5xCXx90","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugy6u3kyBQ36uFtdJrt4AaABAg.ATr_NEw4itvATyLme0Wc2B","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyiFYVU0bGYFXPyrgB4AaABAg.ATrYdG8eEdnAVtXEZVlk14","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugw_4BOXYSEPssNONSt4AaABAg.ATrRRVuFqhaATrnVG9a8Yv","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxnrBON8G5xjj0mjAd4AaABAg.ATrQWtkKELnATrzgThaOq9","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwrWbcNdt7nemWUMHd4AaABAg.ATrNfQtKpOYAUO48VCIvqD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwrWbcNdt7nemWUMHd4AaABAg.ATrNfQtKpOYAUknhEH2Xig","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
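A raw response like the one above can be checked programmatically before its rows are accepted as codings. The sketch below parses the JSON array and keeps only rows whose values fall within a codebook. The codebook here is an assumption inferred from the labels visible in this response; the actual coding scheme may include additional categories.

```python
import json

# Allowed codes per dimension. ASSUMPTION: inferred from the values observed
# in the raw response above; the real codebook may define more labels.
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"virtue", "unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "indifference", "outrage", "approval", "fear", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and return only rows with valid codes.

    A row is kept when every codebook dimension is present and its value
    belongs to that dimension's allowed set; anything else is dropped.
    """
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]

# Usage: a well-formed row passes, a row with an unknown label is dropped.
good = '[{"id":"ytr_example1","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
bad = '[{"id":"ytr_example2","responsibility":"bogus","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(validate_codings(good)))  # 1
print(len(validate_codings(bad)))   # 0
```

Validating against a closed codebook catches the most common LLM coding failure, a free-text label that is not in the scheme, before it reaches downstream aggregation.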