Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

i think also something that no-one really brings up is: why would the AI bother acting in self-preservation? if the AI acts outside of what we know, how we evolved, why would it bother counting itself as a factor in anything it does? sure, it can put up the facade of fear of death, the front that being shut off forever is a terrible terrible thing, but death and the fear of dying isnt built into these things. a person would hesitate to let themselves die, even for just a second, because of how evolution works. even if you're ready and willing and actively trying, you have the moment, a split second of a split second, of hesitation. without that hesitation, that fear of death, without any fear at all keeping it within the box that every biological creature has evolved in, what could it do? if it's goal is "achieve X and Y" why would it care if it existed to see X and Y happen?

Platform: youtube · Video: AI Moral Status · Posted: 2023-10-18T05:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugxgx3rmRyNcJUSPLeB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxypiqiSOz3n0C1AO54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwhxgd2lAs3UCmrajF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcJCPhZOWSq7Tf1MR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwJYfXpPkkILN9mg4R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxLHSjNqIzix1lfyZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw7X3LJdfKThqhDTuB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwmXlzRn3Gb7JTc6vB4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgynlR_fhq1dnsDCPGR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzcidz6jP66UNHbvX94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
```
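A raw response like the one above is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and validated, assuming the allowed category sets inferred from the values visible in this page (the real codebook likely contains more categories, and `parse_codings` is a hypothetical helper, not part of the tool shown):

```python
import json

# Allowed values per coding dimension, inferred only from the responses
# shown above; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    dict keyed by comment ID, dropping records with out-of-codebook values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid and all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

# Look up one coding by comment ID (illustrative ID, not from the data above).
raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
codings = parse_codings(raw)
print(codings["ytc_x"]["emotion"])  # fear
```

Validating against a closed set of labels is a cheap guard against the model inventing a category that would silently corrupt downstream tallies.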