Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a materialist too I don't believe in consciousness so I would say some part of my brain is firing neuron and that firing of neuron make me want something and consciousness or the impression of it is the best thing evolution found to make us survive through difficult time so the flexibility of our brain And that's my big counterpoint to what is saying because right now AI is: input(lots of them) -> layer({F(in){in(i) * weight(i)}) -> layer()....layer()-> out(lots of them) so there isn't any self reference/recursion or anything it's just taking input applying the function and returning output on top of that it's not learning really it's static (to train it you need to apply a other simple function but backward from out to in and you have to know the expected out) for me the impression of "consciousness" and the ability to learn directly from our brain come from the fact that our brain constantly reference itself in a cyclic manner and that's not something that AI has been conceive for as doing that will take infinite time before reaching the out node same for the learning process (due to going in a loop) for us there is dying off of signals and the output is kinda everywhere for us neuron seem to reinforce there connections independently from each others as electrochemical reaction pass through a connection in that sense current AI model look like one BIG neuron. so the "consciousness" or emotion are really not there at all it's just a completely static sequence of matrix vectors multiplication
youtube · AI Governance · 2025-08-15T14:3…
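
The commenter's description of current models is concrete enough to sketch: a forward pass is a fixed sequence of weighted sums and activations, with no self-reference and no weight changes at inference time. The Python sketch below illustrates that description; the layer sizes and the ReLU activation are illustrative assumptions, not details taken from the comment.

# Minimal sketch of the static feedforward pass the comment describes:
# input -> layer (weighted sum + activation) -> ... -> layer -> output,
# with no recurrence and no weight updates at inference time.
import numpy as np

rng = np.random.default_rng(0)

# Fixed ("static") weights for three layers; the forward pass never changes them.
weights = [rng.standard_normal((16, 32)),
           rng.standard_normal((32, 32)),
           rng.standard_normal((32, 4))]

def forward(x):
    """Apply each layer in turn: out = F(sum_i in_i * weight_i)."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # weighted sum, then ReLU
    return x

x = rng.standard_normal(16)  # "input (lots of them)"
y = forward(x)               # "out (lots of them)"
print(y.shape)               # (4,)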
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugz7Jx29YCCFsAVZwwJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugy2aKAwFdBX03uF9TV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwfBV6Z20mhY4nbSLd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxmbAiK346ZYBoL4a14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx_mzztV4E3kPpdIIV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgxNA8KhDXojt6be5M54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwO1BWuogsYBNLCTFV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwvWKwjvVDyxuIY9Z94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxZ3XK9McoJXMSpwud4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwr8B-dzLtIgGy8i-14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]