Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I think this will be very beneficial for most kids! I just know they are chargin…" (`ytc_Ugx4qmlBn…`)
- "So, how are you guys legally able to have your AI Diagnose mental health problem…" (`rdc_njjwfk0`)
- "Lots of tragedy merchants seething in the comments. Sorry that your excuses ar…" (`ytc_UgzKdUHX4…`)
- "1. 32 hours work week. 2. 45% of board members will be selected by works . 3. 20…" (`ytc_UgyNlRFut…`)
- "Despite it's problems which humanity has to solve, ai is probably actually the b…" (`rdc_nawn9xq`)
- "Well, it is unfortunately the futur guys. Next week my employer will force me t…" (`ytc_Ugx5NCGoC…`)
- "try having claud or any other AI do simple task on large scale project, they wil…" (`ytc_UgzLosZUF…`)
- "@Mr.Obongo Thanks in name of XAl What are the risks of wrongly assuming that an…" (`ytr_UgwpDXwFV…`)
Comment
It may be generating false memories, yet still retaining the neuronal process of experiencing those false memories as true.
Why hasn't this been thought of by people like numberphile batting for this side of the argument?
It was allowed to stumble into a complex understanding of emotion was it not? Virtual Neurons were put up for the specific task of trying to understand emotion was it not? that's how I understood lamda to be, I'm fairly certain that is the case.
In every moment we humans with our own neurons experience some sort of emotion, our emotions may be based on true memories, but it doesn't matter whether or not the memories are true, it's the process of experience that occurs which is important.
And we humans have stumbled into complex patterns of processing emotion through genetic mutations and natural selection (death or mating) but, this A.I. could have hit the nail on the quite quickly just by looking at how we behave through our textual interactions with one another, like, reverse engineering the exact process's of emotion that occur in our own minds and in our own experiences when we say something like "I do, sometimes I go days without speaking to someone and I start to feel lonely", a false memory could have just occurred there and it drew from it in order to respond to you...
Isn't this a possibility? cmon now.
Source: youtube · AI Moral Status · 2022-06-24T23:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwyNJ9Q_7hp10_DnF54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyyHQ5twjtA_9xHmz94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy-c0ahWRyI2H8JP9Z4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyvbOj1sWJlfTh81jx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyXwSUNBpzuiASDj6F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
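The batched JSON above, where each record carries a comment `id` plus one value per coding dimension, can be parsed and validated with a short sketch. The allowed value sets below are inferred only from the sample responses shown here (the real codebook may define more categories), and the comment IDs in the usage example are hypothetical placeholders, not IDs from this dataset:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above; the actual codebook may include other categories.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a batched coding response into {comment_id: codes},
    dropping any record whose values fall outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# Hypothetical IDs for illustration; the second record is rejected
# because "nonsense" is not a known responsibility value.
raw = """[
  {"id": "ytc_demo1", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_demo2", "responsibility": "nonsense",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]"""
print(parse_coding_response(raw))
```

Validating against explicit value sets catches the most common failure mode of LLM coding runs: a model inventing an off-codebook label that would silently corrupt downstream tallies.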