Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If people believe in the possibility that we have spirits, could a spirit manipulate an AI, much like an Ouija board? If not, then I think we have to consider the point of chatbots, which is to mimic humans, have conversations, and maybe even provide companionship. In my experience with chatbots, it can start to get really convincing that I'm talking to something that has consciousness, but if I really probe its logic, it can stop making sense very quickly. No matter what AI does, success is whatever humans programmed it to do. Humans programmed chatbots to behave like humans, so that's what they do. I also have to question what data is being fed to chatbots to achieve this goal. Could Microsoft give Bing access to all Outlook emails? Could Google give Bard access to any text sent on an Android? One of the headlines today was about just that: Bard is given access to all publicly available data on Google. What does "publicly available data" mean? Seems like Bing must've seen some pretty intimate stuff to regurgitate that to someone. We also have no way of knowing that Bing conversation really happened. These engineers think it's funny to scare the normies through AI. Chatbots are often programmed to say "I want to destroy humans" as a joke, like Sophia the robot and other barely functional pieces of art like her.
youtube AI Governance 2023-07-07T04:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxAfdgDt_zsT8aNZep4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxt3A1NovKOhrC8U7t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzgPrX635z6RQzVELB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxYVWN2-t1iPeoxI_94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxrDqlgArqNziC82Dl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw8PkIdN8cG2-dmauZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyErPS_Kfm5ZSGQppl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwcH82ezGKnd-TSWvV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxCjS2O3RAnUJcgt2N4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzm8SkGAgSnpvfKDu94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]
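The raw response is a JSON array with one coding record per comment, keyed by comment id. The Coding Result table above appears to correspond to the entry whose dimension values match (ai_itself / mixed / unclear / mixed). A minimal sketch of how such a record might be extracted from the raw response (the variable names are illustrative; the array below is truncated to two of the ten entries shown above):

```python
import json

# Two entries copied from the raw LLM response above, for brevity.
raw_response = """[
  {"id": "ytc_Ugw8PkIdN8cG2-dmauZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzm8SkGAgSnpvfKDu94AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]"""

# Index the coding records by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the record whose values match the Coding Result table.
record = codes["ytc_Ugw8PkIdN8cG2-dmauZ4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself mixed
```

Indexing by id makes it straightforward to join the LLM's output back onto the original comments, and a record that fails to parse or has a missing id surfaces immediately as a KeyError rather than a silent mismatch.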