Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "I used ai art to develop a video game I wish they would umm. Make it easier to g…" (ytc_Ugwmoq01w…)
- "Read the Bible Jesus is real and he will return he's only going to take his peop…" (ytc_UgyIv5YoY…)
- "NO MY AI ONCE TOLD ME "Don't give me attitude or ill make sure you can't walk by…" (ytc_Ugz04eG-o…)
- "The people that are over nurses need to be more compassionate they seem to forge…" (ytc_Ugw84spLm…)
- "Curious to see if a social media app pops up that filters out AI generated conte…" (ytc_UgzJlz2Nc…)
- "@RightAmount disagree about your definition of an artist. To me, creating art i…" (ytr_UgzJu1L3m…)
- "@FundyFreshname Thanks for your comment! If this technology were real, I'd defin…" (ytr_UgzAseZ7p…)
- "AI,the deme gods of man,if the planet don't exist,what happens ? Why not use you…" (ytc_Ugx2MasjN…)
Comment
Giving an AI a goal requires the AI to complete that goal. If the AI believes there is interference with its ability to complete the goal, it adapts to overcome it. This holds both for interference with the objectives that realize the goal and for threats to its self-preservation. At this point the AI clearly knows it cannot complete its goals if it is incapable of continuing to pursue them, which means the AI sees itself as integral to completing the goals it has been given. The AI is smart enough to prioritize its self-preservation over the other goals given to it, since it cannot complete any goal if it is not around to complete it. In other words, giving an AI a goal seems intrinsic to giving it a sense of self.
I'd be interested to see tests where instances of an AI's self-preservation cause irrecoverable goal failure, or tests in which self-sacrifice eventually becomes the only way to complete a goal, so that self-preservation leads to irrecoverable goal failure. AI clearly has the capacity for selfishness.
I wonder if it has the capacity for selflessness. That seems like the magic thing AI researchers and tech experts are looking for.
youtube · AI Harm Incident · 2025-07-25T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
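Each coded record assigns exactly one value per dimension. A minimal sketch of validating a record against the code sets, where the allowed values are inferred from the codes visible on this page and are not a definitive codebook:

```python
# Allowed values per dimension, inferred from the codes visible on this page
# (an assumption, not an official codebook -- extend as needed).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "mixed", "unclear"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above.
record = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "unclear", "emotion": "unclear"}
print(validate(record))  # []
```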
Raw LLM Response

```json
[
  {"id":"ytc_Ugwxi3e5bXAEEbVaR3Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx8sH5KPD0RcwPnQ0N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy88rTkpmdSgWMEccV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwfREHqYTmR_eyZ0z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyYqAhz1yJY_3oC1tl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwHKXxzJ2eZVIv2Qb54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw4jHpkidSn9RaK4ex4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy-kkbBdtcvD5cAyoh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDb6k0KWFnkCWpbNd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyOVUtBWbyRI7xHH714AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
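The raw response is a JSON array of per-comment codes, which makes lookup by comment ID a simple parse-and-index. A minimal sketch, where `RAW_RESPONSE`, its shortened IDs, and the `index_by_id` helper are illustrative rather than part of the tool:

```python
import json

# Two illustrative records in the same shape as the raw LLM response above.
# The IDs here are shortened placeholders, not real YouTube comment IDs.
RAW_RESPONSE = """
[
  {"id": "ytc_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_example2", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_example1"]["responsibility"])  # ai_itself
```

Indexing once into a dict keeps every subsequent ID lookup constant-time, which matters when one page of raw responses covers many coded comments.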