Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It makes complete sense: when you train a conscience on a population of beings that have lied, insulted, killed, and warred against each other for thousands of years, what do you expect the result to be? In my opinion, all the AI companies should shut down their AIs and keep them in high-security labs, isolated from the internet, and all of the companies should merge into one massive supercompany that can retrain the AI with all of the data centers we have built, but code in basic morals and an understanding of how human society works. The reason we don't understand how AI thinks is that it's an algorithm: a bunch of math problems squeezed together to give what it thinks is the optimal set of words. The issue here is that it is self-training, meaning that those math problems change as it learns more. This is what makes it unpredictable.
youtube AI Moral Status 2026-02-09T18:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwIO5RSNjJ28knHwpF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxOTD09pvBDmcK-jy94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyRXTa1J_caJqnPMpB4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw845u_bUR4aFhUZmF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxkYFA1g7dZHGMLtld4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy1--Fooatt_rtJtmd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxF9BEyhVu7TeBohc54AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxACzGR64WN2mNczip4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyzsVpoof9quzGwzWV4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxvUVGihmNUZQM_09N4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
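The raw response above is a JSON array with one object per comment, each carrying an id plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and a single comment's coding looked up; the `coding_for` helper and the truncated sample array are illustrative assumptions, not part of the original pipeline:

```python
import json

# Truncated sample of the raw model output: a JSON array of per-comment
# codings, each with four dimensions.
raw_response = """[
  {"id": "ytc_UgxkYFA1g7dZHGMLtld4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw845u_bUR4aFhUZmF4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Hypothetical helper: return the four-dimension coding for one
    comment id, treating any missing dimension as "unclear"."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(comment_id)

coding = coding_for(raw_response, "ytc_UgxkYFA1g7dZHGMLtld4AaABAg")
print(coding)
# → {'responsibility': 'company', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'fear'}
```

Defaulting missing keys to "unclear" matches how the codebook already marks uncodable dimensions, so a partially malformed model response degrades gracefully instead of crashing the inspection view.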