Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Yes, there is a noticeable pattern: a significant number of key founders or tech…" (ytc_Ugx8iQpre…)
- "I have to question how much of the original programmer’s influence has on the se…" (ytc_UgxmeDSLx…)
- "Hasan asking bright, real-world questions to the biz-end (aka exploitive nature)…" (ytc_UgwdZHMDF…)
- "@RSCupcake Is it Grok or is it the user? Are you saying the AI decided to do ge…" (ytr_Ugya-hR8F…)
- "Comparing AI art to real art is like saying watching NBA 2k is similar to watchi…" (ytc_Ugxl8PdwH…)
- "Great commentary once again by Ryan on this important issue. The truth remains t…" (ytc_UgxtB9rYL…)
- "It's funny the amount of people in the comments thinks that self driving trucks …" (ytc_UgzOoXODR…)
- "AI generators do not steal anything...... simply train on the existing art in th…" (ytc_UgwtPGVRB…)
Comment
1:03:00 When we're angry, it's like put prism infront of robot to make decisions. However we give them time to let AI make decisions again with new set of information. In Buddhism, patient will give us time to lower connection of our neuron network. That can lead to new feelings state of mind in Human. AI might not have it directly but logically understand it with more new set of control. What if it still believe in first state without lower or changing connections, in this case can we assume we're creating felling-alike by distorted some truth? TBH, I really love this conversation. It's like I went back in time to sit in lecture room 30 years ago where AI lab next to it. At that time I don't think or dare to ask questions. I just remember and try understand certain limit set of knowledge we were though. But this conversations is pretty fascinating in terms of understanding.
I've asked google AI about angry in buddhism. The result quite impressive since it describes it as distorted perception state of mind. This comes to the question, can we train it to be as good person that describes in specific religious like Buddhism. Or any kind of good decision and ethical moral. In just we would need it in robot human-alike, instead of killing each other.
youtube
AI Governance
2025-10-01T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx-L2kjrrz6ALQ72J54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwKuIYp432VkTI9L7l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxzPVmtD7__lyuXckd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZ9wd9Aj6TfS1-5gV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxj5PkG42PBL4AIaWt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyKnkJ9_0a_-63UXSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwzQCq5_IXSOXYUWDR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugww28DyevJzv8uVYK94AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwQ6tTu1dM2cL6DX594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwX4oNRIA4WzGhhfrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
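The raw model output is a JSON array of per-comment rows keyed by comment ID, with one field per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how the "look up by comment ID" view could be derived from such a response — the function name and the inline sample are illustrative, not the panel's actual code:

```python
import json

# Illustrative sample in the same shape as the raw response above
# (one real row from it, not the tool's internal data).
raw = '''[
  {"id": "ytc_UgwKuIYp432VkTI9L7l4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Map comment ID -> coded dimensions from a raw JSON batch response."""
    return {row["id"]: row for row in json.loads(raw_response)}

codes = index_by_id(raw)
print(codes["ytc_UgwKuIYp432VkTI9L7l4AaABAg"]["responsibility"])  # ai_itself
```

In practice the raw string would fail `json.loads` whenever the model emits malformed output, so a production version would wrap the parse in a try/except and surface the unparsed text, which is exactly what this inspection panel is for.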