Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you @TheDiaryOfACEO. I'm currently experiencing dissociation due to ongoing acute ambiguous loss. I'm needing to fight a legal battle that I don't have capacity for with my prefrontal cortex partially offline. I turned to LLMs. Discovered their hallucination and sycophancy almost immediately. Discovered that LLMs get tunnel vision and are unable to pivot when contradictory evidence arises. My workaround was to create a panel of 5 LLMs and use a 6th to compare, contrast and correlate their responses, and ask challenging questions to resolve ambiguity. I need to be willing to dump the model and start again to accommodate material facts. This helped, but all of them hallucinate case law, and the reason is simple: genuine case law is paywalled behind professional institutions. The LLMs only have access to anecdotal discussions, and the limited subset of case law that is in the public domain. When this limited subset cites case law that is in the professional domain, the LLMs make inferences based on the citation, not the case law itself. My dissociated brain was able to figure this out and adapt to adopt Hegelian approaches to argumentation. I bet most people don't even know what that entails. It's very worrying. Garbage in, garbage out. That is all any neural network has ever produced unless and until the data it is trained on is cleansed relentlessly. I used this technology 25 years ago during the dot-com bubble to screen out fraudulent e-commerce transactions in real time. Merchant-specific data trumped generalised data, and not by a small margin. LLMs provide the worst of both: a generalised model based on dirty data. The AI bubble is based on hyperbole. It's time to burst it!
youtube AI Governance 2025-12-04T08:4…
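The commenter's workflow (a panel of models answering the same question, plus a comparator that looks for disagreement) can be sketched in miniature. This is a toy illustration of the comparison idea only, not the commenter's actual setup: the model callables are stubs, and the names `panel_review` and `majority_comparator` are hypothetical. A real version would call LLM APIs in place of the stubs.

```python
# Toy sketch of a "panel of LLMs plus comparator" workflow.
# The five panel "models" here are stub functions; a real implementation
# would query actual model APIs with the same question.
from collections import Counter

def panel_review(question, panel, comparator):
    """Ask every panel model the same question, then hand all answers
    to a comparator step instead of trusting any single model."""
    answers = {name: model(question) for name, model in panel.items()}
    return comparator(answers)

def majority_comparator(answers):
    """Report a consensus only when a strict majority agrees,
    and flag any disagreement for follow-up questioning."""
    counts = Counter(answers.values())
    top, n = counts.most_common(1)[0]
    return {
        "consensus": top if n > len(answers) / 2 else None,
        "disagreement": len(counts) > 1,
        "answers": answers,
    }

# Stub panel: three models say "yes", two say "no".
panel = {f"model_{i}": (lambda q, i=i: "yes" if i <= 3 else "no")
         for i in range(1, 6)}
result = panel_review("Does this citation support the claim?",
                      panel, majority_comparator)
# result["consensus"] == "yes", but result["disagreement"] is True,
# which is the signal to ask challenging follow-up questions.
```

The useful property is the disagreement flag: even when a majority exists, any split among the panel is surfaced rather than silently averaged away.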
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx54ES73pDjvV5uVvd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzqNV4o7MhLDWvQDMx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwy97P98Iq-qIePeX94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwxufZ9ZlVL45HFVn14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxziBVXBaiUno8LVgx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyMij204E8GzcVJ_sd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugya5XB0VmT-Br_-KN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwY03rLgvjuSBfzbrV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPTUY4UvsXggnOwaR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwBWDzgNapo2yLTutd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
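The raw response is a JSON array of per-comment codings, each keyed by a comment id. A minimal sketch of how such a batch response might be parsed back into per-comment records, with malformed entries skipped rather than failing the whole batch (the helper name `parse_codings` is illustrative, not part of the actual pipeline; the two entries are excerpted from the response above):

```python
import json

# Two entries excerpted verbatim from the raw response above.
RAW = '''[
  {"id":"ytc_UgwxufZ9ZlVL45HFVn14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwBWDzgNapo2yLTutd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Every coding must carry the comment id plus the four dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Map comment id -> coded dimensions, dropping any entry that
    is not a dict or is missing a required field."""
    out = {}
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or not REQUIRED <= entry.keys():
            continue  # skip malformed entries instead of aborting the batch
        out[entry["id"]] = {k: entry[k] for k in REQUIRED - {"id"}}
    return out

codings = parse_codings(RAW)
# The first entry's dimensions (reasoning "mixed", emotion "resignation")
# line up with the Coding Result table above.
```

Keying by id makes it straightforward to join the batch response back to the original comments, and the skip-on-malformed policy reflects that a single bad entry in a model's output should not discard the other nine.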