Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Can we keep a secret from ai if we only speak to each other and never write it d…" — `ytc_UgyrnSDmn…`
- "Ai created more jobs in China..Great US journalists-historians on CGTN The Point…" — `ytc_UgyFu75lH…`
- "The issue that you are not seeing is this: The entire need for humans in the wor…" — `ytc_Ugx0Dqe3h…`
- "This guy (Sam) doesn't really struggle. Yeah he faced some unpleasant idiots who…" — `ytr_UgzlKeePX…`
- "Comprehensive huh? Did they outlaw giving robots guns?? No? So Terminator good,…" — `ytc_UgxN_J7qV…`
- "There is no dark side, AI will improve all our lives, I'm looking forward to bra…" — `ytc_UgydzAgDP…`
- "Nothing seems impossible anymore. It's disgusting the expert mode is absolutely …" — `ytc_UgwCBss1I…`
- "Amazon prefers self driving vehicles and have drones delivered items over humans…" — `ytc_UgxdyhTqs…`
Comment
@Damien-y9c Before I go further: I'm a software engineer, studied computer science for 8 years and have been professionally writing physical simulation software for around 15 years. What I mean by this is that while I'm no AI expert (not even close), programming and software development are not entirely alien to me, and I feel like I can hold a discussion about the subject.
I understand your point, but imho it's not entirely true: neural networks are *a bit* of a black box. Yes, we know how a neuron works in isolation; yes, we understand biases and weights; yes, we can decide on the topology of the network (how neurons are connected to each other, how many layers, etc.); and we know how to compress/simplify the data fed to the network for training. So yes, we know the code inside out. But the actual flow of data inside a network is still a bit of a mystery, and that's kind of the point of neural networks: if we truly understood how a neural network does its job, it would be trivial to write a simpler, faster "traditional" program to tackle any task a neural network could be trained for, and at that point we would no longer have any practical application for neural networks.
Moreover, we don't know to what extent a biological brain and a neural network are qualitatively equivalent: it may be that the difference between them is just a matter of scale. Nor do we know whether there's a quantitative threshold past which sentience emerges: given enough training data, enough hidden layers, or (in a word) enough complexity, would a neural network develop something resembling sentience?
We also don't know nearly enough about biological brains. What's the critical mass required for sentience: minimum volume, number of neurons, synapses, external stimulation, etc.? Are there other pathways to sentience besides a "typical" brain-like structure (which neural networks try to imitate)?
One of the reasons we study AI is the hope of learning more about ourselves, so we might one day discover that we're not as complex or special as we would like to think, or that the requirements for sentience are far simpler than we thought. But until we can confidently draw a line and say "here, anything past this line is definitely sentient", we don't know how close to it any AI is.
Saying "the code proves it is not sentient" about an AI is like saying "the cells prove it is not sentient" about a brain. You know how a cell works, you know how code works, but you don’t know what sentience is. Define sentience, only then you can prove whether something is or is not sentient.
Currently we attribute sentience in a very arbitrary, fuzzy way: we know a rock is not sentient because... well, it does not react to external stimuli, and that seems to be a prerequisite for sentience. Are plants sentient? I'm sure some people would argue they are. What about a mouse, or a mosquito? Are bacteria sentient? At which point during development does a human fetus acquire sentience? We're not even able to give a definite answer to that; it's often down to personal beliefs. We're still far too ignorant about all of this.
I’m not saying LaMDA is sentient, but I do think Blake presents some very valid questions and concerns.
youtube · AI Moral Status · 2022-07-09T11:1… · ♥ 2
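The commenter's contrast between knowing the code and understanding the trained network is easy to make concrete: a single artificial neuron is just a weighted sum plus a nonlinearity, and the opacity lives entirely in the learned weight values, not in the code. A minimal sketch in plain Python (all numbers illustrative):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs plus a
    bias, squashed through a sigmoid nonlinearity."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# The mechanism is fully transparent at this level.
print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))

# A trained network composes many thousands or millions of these,
# and the learned weight values, not the code, carry the behavior.
# Reading the weights tells you almost nothing about *why* a given
# input maps to a given output; that is the "black box" the
# commenter describes.
```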
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
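For downstream analysis it can help to give coding results like the table above an explicit type. A minimal sketch, where the category sets contain only the values observed in this dump (the actual codebook may define more):

```python
from dataclasses import dataclass

# Category values observed in this dump; the full codebook may differ.
RESPONSIBILITY = {"none", "developer", "ai_itself"}
REASONING = {"mixed", "consequentialist", "deontological"}
POLICY = {"unclear", "ban", "regulate", "none", "liability"}
EMOTION = {"indifference", "outrage", "fear", "mixed"}

@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject values outside the observed category sets.
        for field, allowed in (("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)):
            if getattr(self, field) not in allowed:
                raise ValueError(f"bad {field}: {getattr(self, field)!r}")
```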
Raw LLM Response
```json
[
  {"id":"ytr_UgxrcQFPgHRFwm6MDhJ4AaABAg.9dGGvDX2aw39dHa5QmE_Vk","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxrcQFPgHRFwm6MDhJ4AaABAg.9dGGvDX2aw39dHyEvSlbQk","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugw40_NP11jbDjRwhpp4AaABAg.9dGGTLJjjn49dKhMufCJR1","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugw40_NP11AaABAg.9dGGTLJjjn49dLeJLviZW4","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgyO62Om1P2KZYWPx3d4AaABAg.9dG5SAsPOTR9dIxZuqYj51","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugxvtigk2sxu5IjOWx94AaABAg.9dG4lXRmmS19dJaLa5A1fh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwE77ZdjeaSQsINvuB4AaABAg.9dDgnAJMvvm9dMrc3xtnWa","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgxiIyEknaRONURscF54AaABAg.9dDCYFdSi_D9dESgfdCpAv","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxiIyEknaRONURscF54AaABAg.9dDCYFdSi_D9dF_daO50mH","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxiIyEknaRONURscF54AaABAg.9dDCYFdSi_D9dG1v0eFHfU","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
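The "look up by comment ID" view above presumably indexes raw responses like this one by their `id` field. A minimal sketch of that lookup, assuming the raw response is stored verbatim as the JSON array shown here (the file name is hypothetical):

```python
import json

# Hypothetical file holding the JSON array shown above.
raw_response = open("raw_llm_response.json").read()

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    and index it by comment ID for direct lookup."""
    return {item["id"]: item for item in json.loads(raw)}

by_id = index_codings(raw_response)

# Full, untruncated IDs are required; this one appears in the dump above.
coding = by_id["ytr_UgxrcQFPgHRFwm6MDhJ4AaABAg.9dGGvDX2aw39dHa5QmE_Vk"]
print(coding["responsibility"], coding["emotion"])  # -> none indifference
```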