Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "when they get there new slaves going they will not need us anymore it the only w…" (ytc_UgyRNiWMK…)
- "@Invisiblewatersurface the laws about misinformation, impersonation and defamat…" (ytr_Ugy20ETCQ…)
- "I truly hate AI. I have many, many illnesses and rare diseases. I am disabled. T…" (ytc_Ugx093ULF…)
- "Once again I am asking people to talk about the massive reason why AI is a probl…" (ytc_Ugwl4659q…)
- "And because we've been in a spending bubble since Covid. And because of tariffs …" (ytr_UgwC3FijD…)
- "The man is terrified of AI development like Ultron in Avengers and make its own …" (ytc_UgxznPAzU…)
- "Since you're in AI, i think it would be worth looking into the stuff Japan is pu…" (ytc_UgzqD7CXn…)
- "AI is developed by idiots. Without spiritual, companionship and love, AI become…" (ytc_Ugwd-MsB_…)
Comment
This guy comes across as really nice and very enjoyable to listen to. He’s still wrong, however. The LaMDa engine is very, very clever. But it’s not alive. It’s designed to respond like a human thanks to all of its training of how humans behave. ComputerPhile has a pretty good video talking about how this type of technology works. And here’s the really terrifying part: If Lemoine is correct - and I’m telling you as a software engineer that he’s not - but if he is correct, *a new mind is getting created and then destroyed* for every AI session.
Source: youtube
Video: AI Moral Status
Posted: 2022-06-30T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugyr87f6i5M1TBk0xLx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwcDA_q54N9l_7bDL94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxkJZukzmnufVqzLD54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDeYjr5jYFgPKYwHN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwS7ZS2JQZzRga7xh54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
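The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a response could be parsed and validated before loading, assuming the dimension values seen in these samples are the allowed set (the `ALLOWED` map, the `validate_codings` helper, and the inline `raw` string are all illustrative, not the tool's actual code):

```python
import json

# Allowed values per coding dimension, inferred from the samples shown here;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage"},
}


def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and take an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid


raw = (
    '[{"id":"ytc_example","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
print(len(validate_codings(raw)))  # -> 1
```

Rejecting malformed records at this stage keeps a single bad LLM batch from silently corrupting the coded dataset.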