Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It always bugs me that we train AI using some of the worst things humans create. Deceit, blackmail, manipulation, and all of the plights we would never eagerly teach our children, we eagerly train AI on. Copious amounts of data are collected (or generated, these days) with little consideration given to censoring or filtering. Imagine feeding the AI training dataset to a naive biological entity: would you expect that to produce a well-behaved, well-intentioned individual? So reinforcement training is used to correct for misaligned behaviors. That is like first letting a kid develop any and all bad habits and traits, and then giving it candy (or punishing it) to correct its behavior. Yes, we lack data, sufficient good data. But in lieu of good data for training, we opted for dog poop. How much of the training data used for AI would you feed to your kid? And we wonder if AI will misalign?
youtube AI Governance 2025-06-17T14:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxDu-nLYUgcsd1CFLl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQqUVU1F21TexpHVF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoZBIshgK-tALnxXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxxK86eGp2nFh008t94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxFLO3KUEV5LpP-bw94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzTHCbVw9xuUAC5fyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwTCIjHRBQ_eQU8v_R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw4mVgGoNGLK-Ri-kl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgywQFK-p2unrM6NBW14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwIlPNIaTn9_IPSd2N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
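Note that the raw response as captured ends with a stray `)` where a JSON array expects `]`, which would make `json.loads` fail and is consistent with every dimension being coded "unclear". A minimal Python sketch of how such a response might be parsed and indexed by comment id (the pipeline's actual parsing code is not shown here, so this is an illustration, not the tool's implementation; the two records are copied from the raw output above):

```python
import json

# Two records copied from the raw LLM response, with the trailing ")"
# corrected to "]" so the array is valid JSON. Field names follow the
# coding schema: responsibility, reasoning, policy, emotion.
raw = '''[
  {"id": "ytc_UgxDu-nLYUgcsd1CFLl4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxoZBIshgK-tALnxXx4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

# Index the coded dimensions by comment id for lookup in the report.
codes = {rec["id"]: rec for rec in json.loads(raw)}

print(codes["ytc_UgxDu-nLYUgcsd1CFLl4AaABAg"]["policy"])  # regulate
```

Wrapping `json.loads` in a `try`/`except json.JSONDecodeError` and falling back to "unclear" for every dimension would reproduce the behavior seen in the coding result above.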