Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI always gives the correct answer. People are just unskilled in understanding the logic AI is using such that they are abusing AI. Hallucinations are mostly due to giving software bad data. If you give any program bad data you will get bad results. Deliberately giving AI bad prompts to trick it is the equivalent of asking AI to play a nonsense game of trying to decode the riddle. In the example given letters and words are inverted so AI on some level senses this and tries to give an answer. This is for fun and recreational, nothing serious. If you ask stupid questions you get stupid answers. If you play silly games you get silliness back. As with any dialog feedback is only as good as your input. The real shortfall of AI is that it lacks experience. On hard topics such as Political science AI can recommend one policy but be opinion checked to diverge to a totally opposite approach. Humans have final say as to whether they follow AI recommendations as is or with their own personal modifications. AI is good at things it was programmed for like writing code not judging whether a mix of experts got it right or are flooding training models with biased data.
youtube 2026-01-26T21:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzZgQ6k44bTtrQQZGbd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzB4GQsGNOg6TuvqlN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyJNV2jNacA59JRNQN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw72C8THWk5AwSiXL14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyIoLsDNYbHH6Qkbe94AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw7iiuTPcY-JPlZQJN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyi0M2cPeJLy6jUj-B4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwhqfgkcQwLjd_46tZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRxzbjr4A_fhDlHFZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwrJPnFbKHvBvoNzGt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
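When inspecting raw responses like the one above, a small validation pass helps catch malformed rows before they enter the coded dataset. The sketch below is a minimal, hypothetical example: the allowed code vocabularies are inferred solely from the values visible in this response, not from an official codebook.

```python
import json

# Allowed codes per dimension, inferred from the responses shown above.
# This is an assumption, not a definitive codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "developer", "company",
                       "distributed", "user"},
    "reasoning": {"consequentialist", "unclear", "deontological",
                  "mixed", "virtue"},
    "policy": {"ban", "industry_self", "liability", "regulate", "none"},
    "emotion": {"fear", "approval", "outrage", "mixed", "resignation",
                "indifference"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" field and every coded
    dimension carries a value from the inferred vocabulary above.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid
```

Filtering rather than raising keeps one bad row from discarding an entire batch; rejected rows can be logged separately for manual recoding.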