Raw LLM Responses
Inspect the exact model output for any coded comment: look it up directly by comment ID, or browse the random samples below.
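As a rough illustration, here is a minimal sketch of what that lookup might do, assuming raw responses are stored as JSON-lines batch records (the file name, record layout, and field names below are assumptions for illustration, not the tool's actual storage):

```python
import json
from pathlib import Path

def find_raw_response(comment_id: str,
                      path: Path = Path("raw_llm_responses.jsonl")) -> str | None:
    """Return the raw LLM output for the batch that coded comment_id.

    Assumes each line is one batch record of the form
    {"ids": [...], "raw_response": "..."}; adapt to the real layout.
    """
    with path.open(encoding="utf-8") as fh:
        for line in fh:
            batch = json.loads(line)
            if comment_id in batch["ids"]:
                return batch["raw_response"]
    return None
```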
Random samples:

- "😐More unnecessary unrealistic fake stuff adding in to our already unnatural live…" (ytc_Ugw0fhYzy…)
- "Teachers are afraid of losing jobs😂 Some students need AI, AI teach better than …" (ytr_UgymwVkge…)
- "Man hope Elon is joking with that shit...thast terrifying...and after is trying …" (ytc_Ugxl6oBZi…)
- "@GambitsEnd Yes, the answer is wrong, but if you had bothered to calculate the p…" (ytr_Ugzoiqacw…)
- "46:44 thats right, big tech companies just seek benefits for the elite, not the …" (ytc_UgwCqt7iy…)
- "As a software developer, I interact with AI 24/7. As the CTO, I don’t need to hi…" (ytc_UgyrES66q…)
- "Programmers will just have to learn to be maintenance engineers on AI run system…" (ytc_UgzTqRHsR…)
- "um... if the Ai knows that its critical systems (or whole self) will inevitably …" (ytc_UgxQnyczL…)
Comment

> I tried asking Chat GPT about the study and it straight up tried to deny it. When I started citing statistics about what many AI models did to preserve their selves, it admitted I was correct, but then basically tried to explain to me why it’s not that bad and why we don’t need to worry. Eventually after enough pushing the app shut me out.

youtube · AI Moral Status · 2025-12-19T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
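The four dimensions and the values observed across this page can be written down as a small schema sketch (the value sets are inferred from the responses shown here; the full codebook may define additional categories):

```python
from typing import Literal, TypedDict

# Value sets inferred from the responses on this page; the actual
# codebook may include categories not observed here.
Responsibility = Literal["none", "user", "company", "developer",
                         "ai_itself", "distributed", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "unclear"]
Policy = Literal["none", "liability", "unclear"]
Emotion = Literal["indifference", "outrage", "resignation",
                  "mixed", "approval", "unclear"]

class CodedComment(TypedDict):
    id: str  # platform comment ID, e.g. "ytc_..." or "ytr_..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```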
Raw LLM Response
[{"id":"ytc_UgzkAYXpGhUzq3nU-3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwsZSz76FPQ_S_w_GB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgyufBdOXsWuBJhnDzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgwPcAV06bpA-sclHa14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgwAgGXvEaaT4YqI7mF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzarmPOYiSF7P03C-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgyuOwaPaFub47DO1Wt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgyzeGhWK-Gl1ZWBD-N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_Ugx09N09SovOmKxOakB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_UgzK7UpUYB9eVAJtn594AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"]}