Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is not an AI issue - it is literally a stupid people using an AI not trained and without tools to argue a legal case. As stupid as it gets. An AI can well do that properly right by having a couple of issues fixed. First, it needs access to a current legal database so it can do research. Two, you must not run a chatbot but a proper AI swarm and correction loop. You can i.e. have the output fact checked (looking up all details), you can have a self-correction loop. Result is like 20 to 30 times the number of AI calls and it is done in an AI. Sadly, this is not a chat program but a proper AI with self-correcting loops. Which - incidentally - exist ;) At least in labs (anyone interesting contact me - demonstrating this in a public product in a month or so). Problem is speed and cost for now (that is really FOR NOW, I expect those to come down in 6-9 months latest). For legal, though, the access to a legal database is an issue - as in: I can provide the tools, but not the access. It is awful how stupid those lawyers were. It is awful that AI refers to training data like that - they should move to fictional training and have AI trained to rely on tools for more. Which also requires more of what is called "attention window" and a proper AI infrastructure, both NOT available in ChatGPT ;) Funny, how things go.
Source: youtube · AI Responsibility · 2023-06-10T16:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugyw1LJF4a-nzfrFbqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzpKeJ69xGhoRCN8WJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz_6QuuoLuFf2JTL9x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyQhU-dhewtt4k_9zp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwP9OX8jOeMqbCAWPR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxXWGCEEWv3JHpiA2d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz0n5opxZQfEMzf5BZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxdv5RNGoeZ47-mCH94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgybAR_CwpOaBE3wp4t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxBtAqlXQNfCUtGMmN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"} ]