Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He has valid points. I have been following ai and arguing/discussing it as I get more access to its features. I use the word "discussing" lightly because Bing is programmed to hang up on you if you conflict with it rather than have a discussion. One of my big concerns is the ai should not have any bible in its knowledge. This should be left to search engines. They should also block the ai from thinking about philosophy. I have noticed they are putting too much effort into what they think is right and wrong, and avoiding what I think is right and wrong. People all think differently and have different values, relationships, nutrition, language, and lifestyle. For the ai to be usable by the public, it needs to understand this or be coded in a very vanilla way.
youtube AI Moral Status 2023-04-09T14:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy4J1x56_49DPRAkqV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzqwrnrMVPkZ2nYdoZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzcNMckdKhOB2O6YLF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwNwRTKiQsICnYcent4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwYlmfQ1NFTUG_410R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
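A raw response like the one above can be parsed and sanity-checked before the values are written into the coding table. The sketch below is a hypothetical helper, not part of the annotation tool; the allowed category sets are inferred only from the values visible in this response and the full codebook may contain more.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMED from the
# values seen in the sample response; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "government", "developer",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "unclear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim!r} value {value!r}")
    return records

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_Ugy4J1x56_49DPRAkqV4AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"regulate","emotion":"approval"}]')
coded = parse_raw_response(raw)
print(coded[0]["responsibility"])  # company
```

Validating against a closed vocabulary catches the most common failure mode with LLM coding output: the model inventing a label outside the codebook.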