Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I disagree on ai not killing jobs. In my 20 years in automation the things is se…" — ytc_UgxrICcDn…
- "Maybe this way Chat GPT will learn empathy and gain Emotional Intelligence along…" — ytc_UgwMKprC-…
- "Elon Musk called for the halt of AI...Do you not know that? I'm about 5 minutes …" — ytc_UgyUB8KoC…
- "As a human who loves the Arts, I wholeheartedly disagree. Why should humans have…" — ytc_UgxLxklOO…
- "AI might just be a computer, but at least it knows what real love is.…" — ytc_UgyiK7yLS…
- "Art has intent of an artist. AI art is a technology to selectively average image…" — ytc_Ugx35rxGG…
- "AI bros dont know the concept of integrity and it shows they cannot imagine any…" — ytr_UgxPrvhkS…
- "If we give robots the ability to learn, then they will eventually become smarter…" — ytc_UghIqZHRN…
Comment
14:17 one insight that I have had is that we train these large language models on in an incredibly large corpus of written human language, and then we get shocked when they act like humans. For instance the recent papers and articles written about how a large language model would manipulate people and commit extortion or worse when faced with being turned off. In other words what it would do for self preservation. Of course it’s going to try to do things for self preservation. Somewhere in that huge corpus of human writing are places where humans do bad things to other humans for self preservation, where humans lie and manipulate for self preservation reasons.
Source: youtube · Video: AI Moral Status · 2025-11-03T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzn_IKre8Q3Ac-ZgkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzOlJH3MRZNJZs6Sap4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzWoSgkoR5BxrplSTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxYTOoUHXnHZz6_Hht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvUDlGQfN8ZzJJWwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzMgySiEz2yhF51O854AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwFE-FHa_sLG-vXkg14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzFQ_vWNd2gyl7XkFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgySVE2ZBVNUJ9ALttl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwXywL2CE5FZbPOO954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
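A raw batch response like the one above can be parsed into a lookup table keyed by comment ID, which also supports the "look up by comment ID" inspection shown earlier. The sketch below is a hypothetical illustration, not the tool's actual code: the dimension names come from the Coding Result table, while the allowed value sets are inferred from the coded samples shown here and may be incomplete.

```python
import json

# Allowed values per coding dimension. These sets are assumptions inferred
# from the samples above; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}, validating values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = codes
    return coded
```

With the response parsed this way, inspecting any coded comment is a dictionary lookup, e.g. `parse_raw_response(raw)["ytc_UgxYTOoUHXnHZz6_Hht4AaABAg"]["policy"]` yields `"regulate"` for the batch above. Raising on unexpected values catches the common failure mode where the model invents a label outside the codebook.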