Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Konnichiwa! O genki desu ka? Fellow British citizen here! Assuming you might be of Japanese heritage Justin. But I am learning Japanese currently, not too proficient at for the time being. :) As a user of ChatGPT, I do wholeheartedly agree with you, because I do know full well, that technology like AI itself, is not this tool that is infallible or flawlessly accurate. When I see inaccuracies, I correct it, especially for example that we are all sentient Souls/Spirits that are bound within bodies to walk the physical plane. And yes, that's how we work. LOL. But not just that, when something that's written out by AI, I do make an effort to modify and add in extra paragraphs and sentences related to the topic or layouts in question. I do handle this stuff carefully and delicately like if I am a surgeon treating a sick patient, or a crate of unstable dynamite. My question is though, is how we can make AI come across accurate sources as a form of on and off option for the user? Because I know OpenAI cannot unplug their dirty ears and listen to actual feedback and concerns like I have. It is precisely why, I think it is best to rely on independent AI manufacturers instead and give them the support we need. Because corporatism is not on our side. They are on the side of self-interest, money, ego, and power. So what do you think of my idea on relying on independent alternatives so long as we give them feedback on giving advanced AI chat bots an accuracy bias that can be switched on and off, but be kept on by default?
youtube 2025-11-19T18:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzo6aXa_FBDxXMTUrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6KoCW_wOO5lcRSVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEzz6i1SsFkpqgFHh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgypmvANZ8cRs3eijDp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKeUPU0_ug2-GVO5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXZDCq7tzUdYE1asN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwSB1hsLKLZ1Wlw-7B4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzpqNNZziwo_twWwgR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxcNZOVoCgEpT1XFHp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVOjTJ0v8VTwmFa3R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
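The raw response above is a JSON array of per-comment coding records, so inspecting the coding for any one comment reduces to a lookup by id. A minimal sketch (field names taken from the JSON shown; only two records are copied here for brevity, the full array parses the same way):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugzo6aXa_FBDxXMTUrB4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxcNZOVoCgEpT1XFHp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]"""

# Index the coding records by comment id for quick lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

rec = records["ytc_UgxcNZOVoCgEpT1XFHp4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["emotion"])
# → ai_itself deontological approval
```

The printed values match the Dimension/Value table for this comment, which is a quick sanity check that the displayed coding was extracted from the raw response rather than re-derived.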