Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Large Language Models can't think they don't reason and they wont produce endless information." LLMs, Model Collapse, and the Conservation of Information. Prof. Georgie D. Montanez, Phd. https://www.youtube.com/watch?v=ShusuVq32hc ************** Some sources of data for LLMs... Google AI - Youtube 18%, Quora 14%, LinkedIn 13%, Gartner 7%, NerdWallet 6%, Forbes 5.7%, Wikipedia 21%, Businessinsider 4.5% ...etc. ====================== ChatGPT - Wikipedia 48%, Reddit 11.3%, Forbes 6.8%, G7 6.7%, TechRadar 5.5%, Bussinessinsider 4.9%, NerdWallet 5.1%, NYPost 4.4% … etc
Source: reddit · Topic: AI Moral Status · Posted: 1775570225.0 (Unix timestamp) · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_oerqeyb","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_oesy8yw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_ofjeuaq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_ohx6xm8","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"rdc_d3tksyi","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
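The raw response above is a JSON array with one object per coded comment, keyed by a comment `id` and carrying the four coding dimensions. A minimal sketch of turning that raw output into a lookup table (the JSON is copied verbatim from the response above; the variable names are illustrative, not part of the original tool):

```python
import json

# Exact raw model output from above: a JSON array of per-comment codes.
raw = (
    '[{"id":"rdc_oerqeyb","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_oesy8yw","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_ofjeuaq","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_ohx6xm8","responsibility":"unclear","reasoning":"mixed",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_d3tksyi","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# Index the records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the coded dimensions for one comment.
rec = codes["rdc_ohx6xm8"]
print(rec["reasoning"], rec["emotion"])  # mixed mixed
```

Note that the table shown above reflects a single record; the raw array holds codes for five comments, most of which were coded identically (responsibility "none", reasoning "unclear").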