Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
**My two cents as an enthusiast** *tl;dr We're not quite there yet, but the philosophy work absolutely needs to be done because of where we will soon be with this tech. Just please don't sleep on the technical details.*

Language generation models don't have homeostasis, emotional centers, or a need to survive. They have only a "need" to produce satisfactory responses (the computer spams a bunch of different methods until it finds one that works for its task, because that's what the programmers are doing instead of manually defining an algorithm). It's a glorified best-fit line, like one over a linear dataset. Just because we can't follow the logic of the coefficients and convolutions doesn't make it so human as to warrant rights. Rights, generally, secure some basic needs for living things to pursue life, liberty, and happiness. Bing AI has no need for any of these, nor any ability to "experience" them, but something in the future just might. Bing's AI is so far a construction that has not been granted the capability for consciousness nor agency. It is about as sentient as a hash function. These videos talk about LaMDA, but the takeaways are somewhat transferable: [Mike Pound on Computerphile](https://www.youtube.com/watch?v=iBouACLc-hw), [Jordan Harrod](https://www.youtube.com/watch?v=vWlvS6y9Hoo).

Someone sadistically torturing something they feel is "alive," regardless of how alive it actually is, is nonetheless a warning sign about the content of their character, and if future kids are going to learn how to interact with people partly through AI chatbots, it would be good to encourage the kids to be polite.

Right now, [Bing AI is a highly sophisticated layer that cleverly pulls details from web search and generates a cohesive summary](https://www.youtube.com/watch?v=rOeRWRJ16yY). It's not sentient, so running experiments on how it reacts to "being mean" is worthwhile to understand it better, but it's best not to get in the habit of being mean when using it properly (rather than testing it).
reddit · AI Moral Status · 1676632696.0 · ♥ 6
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_j8w58pj", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8vy9ea", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8xy2nf", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8wq3st", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8vjm0k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
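A minimal sketch of how a raw LLM response like the one above can be parsed into the per-dimension coding result, using Python's standard `json` module. The field names and the first record's id are taken from the response itself; the parsing approach is an assumption, not necessarily how this dataset's pipeline does it.

```python
import json

# First record of the raw LLM response shown above (assumed well-formed JSON).
raw = ('[{"id":"rdc_j8w58pj","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')

records = json.loads(raw)
for rec in records:
    # Print each coded dimension for this record, mirroring the table.
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {rec[dim]}")
```

This would print `responsibility: none`, `reasoning: mixed`, `policy: none`, `emotion: approval`, matching the Dimension/Value table for this comment.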