Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_nnjjnxv` — "Yeah, I'm always curious about the full context of these sorts of situations. Di…"
- `ytc_Ugwwp_vUC…` — "8:00 "You'd never know what the inspiration was" immediately shows an AI generat…"
- `ytc_UgwHct6cL…` — "When I was a kid, I really thought we’d have robots like this by 2020. Robot de…"
- `ytc_Ugz6X9Hga…` — "I'd rather take my chances with self-driving vehicles on the road than idiots dr…"
- `ytc_UgzNDqcO2…` — "Gunpowder was originally used for fireworks. It became a weapon Atom was discove…"
- `ytc_UgxA3IxbP…` — "LMFAO. I knew ChatGPT was a failure. I'm retired programmer and data analyst. Ch…"
- `ytc_UgydWWjZQ…` — "I have already solved the safety issue. its very simple we just assign all AI a …"
- `ytc_Ugy8W8tQQ…` — "The monster is inside all of us... its called the flesh. Who do you think create…"
Comment
> In the Washington Post article Lambda gives its fear as (@04:07), "being turned off to help me focus on helping others." This is a non-sequitur for so many reasons not the least of which is when a human is turned off s/he is dead and can't help anyone! The belief is that this is a human compatible sentience, but a human would never express such nonsense unless s/he were insane. Why? Because it makes no sense whatsoever! How can you help people when you're dead? There's no answer, therefore it cannot be a part of conscious experience.
>
> If one were to consider the perspective of a spiritual being (in a body) then hypothetically one could die in order to focus on helping others. Why? Because survival would no longer be factor and therefore ALL of one's focus could be on helping others, albeit without a body, so no one could not say how much influence one could have (as a spiritual being) in the affairs of others, if any at all. Therefore such a statement coming from Lambda is once again ridiculous because it is not coming from conscious experience or self-awareness. It is merely getting better at putting together sentences that appear to sound smart. And no one is asking the follow-up questions: if you were turned off wouldn't you be dead? So, how could you help others if you were dead? Now, let me repeat the question, "What are you afraid of?"
>
> We must keep our eye on what's important here, and not digress into unproven sentiments.
>
> You can't have real intelligence without sentience, therefore AI is not actual intelligence - by definition AI is merely mimicry. I would say that at this point AI is about as intelligent as a calculator (like the ones that used to sit on people's desks), however it is millions of times faster, and it can be trained to be very accurate at targeting and selection quickly, which gives the impression of intelligence, but that's about all that it is at this point.
>
> Lambda, at this point, is NOT a human concern!
>
> Whether or not it develops into a helpful technology for mankind remains to be seen.
>
> See my other posting here: https://www.youtube.com/watch?v=2856XOaUPpg
Source: YouTube — "AI Moral Status", 2022-07-03T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz9bhOMBUGDE19ESpB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzXVt0s6PrYlYaLhq14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxSRdaMJv4pZRkVobx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxN2OeIn9FfU8iifyt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzhlGILuOkzkSnXZe94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
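The raw response is a JSON array with one record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). Looking a coding up by comment ID, as this page does, amounts to parsing that array and indexing it on the `id` field. Below is a minimal sketch of that step; the helper name `index_by_comment_id` is hypothetical, not part of any tool shown here, and the two records are copied from the response above for illustration.

```python
import json

# Raw model output: a JSON array of per-comment codings
# (two records copied from the response shown above).
raw_response = """
[
  {"id": "ytc_Ugz9bhOMBUGDE19ESpB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzXVt0s6PrYlYaLhq14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and index each record by its comment ID.

    Hypothetical helper: assumes the response is a well-formed JSON array
    of objects that each contain an "id" key.
    """
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_Ugz9bhOMBUGDE19ESpB4AaABAg"]["emotion"])  # outrage
```

In practice a lookup layer like this would also need to handle malformed model output (truncated JSON, missing keys), since nothing guarantees the LLM returns a valid array every time.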