Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "The model, code-named Avocado, outperformed Meta’s previous A.I. model and …" (`rdc_oac1qzd`)
- "@Nicko-c1mnah…you probably have social skills and critical thinking. These kids…" (`ytr_UgzMGimka…`)
- "Talk therapy isn’t always spilling your secrets lol. It’s just talking through y…" (`ytc_UgyKSgJmD…`)
- "More the relaying on AI makes people to addict to it. I wish the olden harder wa…" (`ytc_UgwFEu1lx…`)
- "What’s sad is that lots of people that defend AI are just misinformed and haven’…" (`ytc_Ugy93357Y…`)
- "I enjoy the downfall of these so called artists. i only profit from AI and use i…" (`ytc_UgzhWPtZM…`)
- "Personally, I find Ai art fun to fiddle with. It's neat to throw prompt after pr…" (`ytc_UgwDHvSTD…`)
- "Last time I looked robots were slower than humans and couldn't move like this. I…" (`ytr_UgwkMZ6gh…`)
Comment
**My two cents as an enthusiast**
*tl;dr We're not quite there yet but the philosophy work absolutely needs to be done because of where we will soon be with this tech. Just please don't sleep on the technical details.*
Language generation models don't have homeostasis, emotional centers, or a need to survive. They have only a "need" to produce satisfactory responses (the computer spams a bunch of different methods until it finds one that works for its task, because that's what the programmers are doing instead of manually defining an algorithm). It's a glorified best-fit line, like one drawn through a linear dataset. Just because we can't follow the logic of the coefficients and convolutions doesn't make it so human as to warrant rights. Rights, generally, secure some basic needs for living things to pursue life, liberty, and happiness. Bing AI has no need for any of these, nor any ability to "experience" them, but something in the future just might. Bing's AI is so far a construction that has not been granted the capability for consciousness nor agency. It is about as sentient as a hash function.
These videos discuss LaMDA, but the takeaways are somewhat transferable: [Mike Pound on Computerphile](https://www.youtube.com/watch?v=iBouACLc-hw), [Jordan Harrod](https://www.youtube.com/watch?v=vWlvS6y9Hoo).
Someone sadistically torturing something they feel is "alive", regardless of how alive it actually is, is nonetheless a warning sign about the content of their character, and if future kids are going to learn how to interact with people partly through AI chatbots, it would be good to encourage them to be polite.
Right now, [Bing AI is a highly sophisticated layer that cleverly pulls details from web search and generates a cohesive summary](https://www.youtube.com/watch?v=rOeRWRJ16yY). It's not sentient, so running experiments on how it reacts to "being mean" is worthwhile to understand it better, but it's best not to get into the habit of being mean when using it properly (rather than testing it).
Source: reddit · Thread: AI Moral Status · Posted: 2023-02-17 (Unix timestamp 1676632696) · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j8w58pj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_j8vy9ea","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j8xy2nf","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_j8wq3st","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j8vjm0k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
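The raw response above is a JSON array of per-comment coding records, each keyed by a comment ID. A minimal sketch of the "look up by comment ID" step, using only the Python standard library (the helper name `index_by_comment_id` is illustrative, not from the source; the inlined sample reuses two records from the response above):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
raw_response = """[
  {"id": "rdc_j8w58pj", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8vy9ea", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]"""

def index_by_comment_id(response: str) -> dict[str, dict]:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["rdc_j8w58pj"]["emotion"])  # -> approval
```

In practice the parse step would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the model strays from the requested format), which is omitted here for brevity.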