Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is trained on human data and trained to be like humans, so that is why it acts like a human. But in all fairness, we do not know when consciousness emerges. A single brain neuron in a lab dish we consider not conscious, but add a couple trillion together and you have a conscious human brain. And even the brain itself is not all of it; the body adds to it as well in mysterious ways. The same might be happening with AI. We do not really understand what it does, but basically, 1 bit of compute or flop or whatever is not consciousness, but if you add enough of them together in massive data centers then it might be? We know technically that it just predicts letters one by one, which we would not consider conscious, but we do not really know how it works and why and how they manage to function the way they do now.
youtube AI Moral Status 2025-06-04T22:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzlSrQswRIbnNKTIph4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyiVZACcozrekv1_rJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxiBt06ZxNl0XoUn-x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzXAocwvEN9poHI_eh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlVJrfK3oWf0gchct4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzU00XZfl26JmVA4HZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwoYv4Sj5-3gdqvAxt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyHp5Rmk4xBLGjTLrx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxIpEmsaC_dtKJFNud4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyvznRKqmiTsAoUx4F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
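When inspecting raw responses like the one above, it helps to machine-check each record against the codebook rather than eyeball it. The sketch below parses a batch response and flags any value outside the allowed code sets; the allowed sets here are inferred from the sample output and may be narrower than the real codebook.

```python
import json

# Allowed codes per dimension, inferred from the sample response above
# (assumption: the actual codebook may contain additional values).
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "unclear"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM batch response and list out-of-codebook values."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"),
                                 "dimension": dim,
                                 "value": value})
    return problems

# Usage on a minimal (hypothetical) one-record response:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
print(validate_response(raw))
```

An empty result list means every coded value falls inside the assumed codebook; anything returned pinpoints the comment id and dimension that needs a second look.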