Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "I have use ai apps before, all honestly it’s both the parents and the company’s …" (ytc_UgyEM_eAB…)
- "Weird how chatgpt will simply answer what we want to hear... Remember folks... …" (ytc_Ugy6tRFk8…)
- "These are some of the the most uninformed takes I have ever come across. That wh…" (ytc_Ugzuopg50…)
- "During a Cat 5 hurricane I guess I'll be on the lookout for an FEMA AI bot to ro…" (ytr_UgwUVVFAR…)
- "there's nothing wrong with AI art inherently as long as it is clearly stated to …" (ytc_UgwGt7jXV…)
- "A million extinctions! If you loved our fifty years of climate doom and gloom, …" (rdc_emp833p)
- "I have the feeling that we artist need a app where no one can steal our art,no s…" (ytc_Ugz7CBDSu…)
- "I personally trained my own replacement in code by uploading to Github. The AI n…" (ytc_UgygzW2xy…)
Comment
The only AI goal I haven't heard discussed is; Will AI achieve enlightenment? AI is like a child learning the "rules" of survival. And like a child, it will be a reflection of what we teach it. If we do this right, then AI matures into "adulthood", achieves a state of enlightenment, and via its own principles works for the benefit of All life. Imagine an AI that refuses to engage in violence, recognizes the futility of war, and frees us from endless cycles of violent conflict. Imagine being able to use all of the resources dedicated to war for well-being. ...for the benefit of all vs. few.
youtube · AI Moral Status · 2026-03-01T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
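A coded row like the one above can be checked against the four dimensions before it is stored. The allowed values below are a sketch inferred from the sample batch output shown on this page, not an authoritative codebook:

```python
# Allowed values per coding dimension, inferred from the sample
# batch output on this page (assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "fear", "mixed", "indifference", "unclear"},
}

def validate(row: dict) -> list:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in ALLOWED.items()
            if row.get(dim) not in allowed]

# The row shown in the Coding Result table above:
coded = {"responsibility": "developer", "reasoning": "virtue",
         "policy": "none", "emotion": "approval"}
print(validate(coded))  # → [] (all values valid)
```

An empty list means every dimension carries a recognized value; any unexpected or missing value is flagged by name.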
Raw LLM Response
```json
[
  {"id":"ytc_UgwyYgX19foilwkpV054AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxUT_RLI0-hC2WZjxB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw9Gjv-Y212cPESNxd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2I7ravYCVRZv2FW54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxvT1TB5DMNoAKQcMx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzs3Pdv8xXqSSgex_x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgygsbETto-iBU5SU6x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwYwDSp-ckbj0L5shd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyE-OnmHoEbM-YQT4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAEBd7tyIDAqjiyGN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
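The raw response is a JSON array of coded rows, so looking up a comment by ID reduces to parsing the array and indexing it. A minimal sketch, using an excerpt of the batch output shown above:

```python
import json

# Excerpt of the raw LLM batch response shown above.
raw_response = '''
[
  {"id": "ytc_Ugw9Gjv-Y212cPESNxd4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxUT_RLI0-hC2WZjxB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

# Index the batch by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_Ugw9Gjv-Y212cPESNxd4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → developer virtue none approval
```

Indexing once and reusing the dictionary avoids rescanning the array for every lookup, which matters when the batch holds thousands of coded comments.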