Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:03:00 When we're angry, it's like putting a prism in front of a robot while it makes decisions. But we give ourselves time to let the mind decide again with a new set of information. In Buddhism, patience gives us time to weaken the connections in our neural network; that can lead to a new feeling, a new state of mind, in humans. AI might not have this directly, but it could understand it logically with a new set of controls. What if it still believes in its first state without weakening or changing its connections? In that case, can we assume we're creating something feeling-alike by distorting some truth? TBH, I really love this conversation. It's like going back in time to sit in a lecture room 30 years ago, with the AI lab next door. Back then I didn't think to, or dare to, ask questions; I just memorized and tried to understand the limited set of knowledge we were taught. But this conversation is fascinating in terms of understanding. I asked Google's AI about anger in Buddhism, and the result was quite impressive, since it describes anger as a distorted state of mind. Which raises the question: can we train AI to be as good a person as described in a specific religion like Buddhism, or by any standard of good decision-making and ethical morality? We would need that in a human-like robot, instead of us killing each other.
YouTube · AI Governance · 2025-10-01T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugx-L2kjrrz6ALQ72J54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwKuIYp432VkTI9L7l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxzPVmtD7__lyuXckd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZ9wd9Aj6TfS1-5gV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxj5PkG42PBL4AIaWt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyKnkJ9_0a_-63UXSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwzQCq5_IXSOXYUWDR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugww28DyevJzv8uVYK94AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwQ6tTu1dM2cL6DX594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwX4oNRIA4WzGhhfrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
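The raw response is a JSON array of one coding record per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and matched back to a comment (the variable names here are hypothetical, not part of any pipeline shown in this record; the array is truncated to two of the ten records for brevity):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
  {"id":"ytc_Ugx-L2kjrrz6ALQ72J54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwKuIYp432VkTI9L7l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]'''

# Index the coding records by comment id for O(1) lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

# The second record matches the Coding Result table above
# (Responsibility: ai_itself, Reasoning: mixed, Policy: none, Emotion: mixed).
rec = codings["ytc_UgwKuIYp432VkTI9L7l4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → ai_itself mixed none mixed
```

Indexing by id rather than scanning the list each time keeps lookups cheap when a batch response covers many comments.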