Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a really incoherent debate. You either believe the reality that machine learning is math and matrix multiplication, or you believe that your math "needs" to be imbued with "human values". A value is an integer, string, boolean, or a floating point (etc). A human value is none of those types. I don't understand how to have a conversation with "scientists" who believe math can be prescribed intrinsic value beyond the literal value of arithmetic unit that it is. Hammers don't contain values is a person choosing to use it as a murder weapon or not. A car is not as good as its driver, there are bad drivers of good cars and vice versa. Asimov would not have been proud of this public presentation of ingnorance, imho. None of this discussion matters if you acknowledge the one thing that is known for certain: "AI is only statistical, not virtue, nor valour, nor .does it have emotional bias; it has statistical extrapolation alone, meaning only statistical bias and not opinions. AI does not have social bias even if it appears to have social bias it is the bias of the sample set being procured under social bias that gives a model social biases. That's not the model thinking or feeling anything, it is always the same math and our applications / interpretations that create outcomes, not the models."
YouTube · AI Governance · 2026-03-23T02:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwF_7m4l49WEvF87mh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxELurC61Mg7eDWvo94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugwngtr5JoBBF6dvEsZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxsfFVuaP6cDo_EI-R4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz9JDd_xAA8YB_cr054AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2NpxC1r1_ERzBw9B4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxXcWF7hHzqbLnf_2t4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwAC_8iTVa0fHV8xCp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzZgSC8C0SsjZoh28l4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzYlSOOR8w35i5fhUh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
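To check that the coding result shown above matches the raw model output, the JSON array can be parsed and the entry for a given comment id looked up. The sketch below is illustrative only: `raw_response` holds an excerpt of the array above, and `code_for_comment` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw_response = """[
  {"id": "ytc_Ugwngtr5JoBBF6dvEsZ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxsfFVuaP6cDo_EI-R4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    return next((c for c in json.loads(raw) if c["id"] == comment_id), {})

codes = code_for_comment(raw_response, "ytc_Ugwngtr5JoBBF6dvEsZ4AaABAg")
print(codes["emotion"])  # → indifference, matching the Coding Result table
```

Looking up `ytc_Ugwngtr5JoBBF6dvEsZ4AaABAg` reproduces the Emotion value ("indifference") in the Coding Result table, confirming the table was filled from this entry of the raw response.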