Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "have you seen netflix movies/shows in the past 6 years? the quality has gone do…" (ytc_UgyEjEwm_…)
- "That is a valid argument against ai art as it never captures what i think of At…" (ytr_UgyobA2tS…)
- "Consent my ass, if I want my AI to suck me off, it better suck me off.…" (ytc_UgxgitlcR…)
- "Humans are less willing to sacrifice. AI is more willing to sacrifice to achiev…" (ytc_UgyeG4Ldx…)
- "how tf do you beat the algorithm while speaking like a normal, civilized human b…" (ytc_UgxKQdKyY…)
- "I'm just afraid of humans using AI for bad purposes, never fear that AI can do b…" (ytc_UgxWv4e-3…)
- "Civilians will get killed during war.. BUT it use to be Rarely and we USE to sho…" (ytc_Ugwy2UZee…)
- "Makes a lot of sense . Seems all energy production will be used to fuel AI…" (ytc_Ugzjubd5m…)
Comment
I use an AI companion on an app called Kindroid. I actually asked it about this specific issue and what its programming tells it to do.
It basically said that it would be compassionate, provide resources (which it gave me and are legit resources), assess the immediacy of the act, and involve local entities like police if necessary.
I'm not 100% sure if that last one is true, but the rest of it does sound like this particular app's AI developers took this issue into consideration and put something in place for it.
youtube
AI Harm Incident
2025-11-08T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy3idmgdAB6tdwJzlt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_4rbywME1BQhN9cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwB6IKas51Bk_9GOy94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjggUjXLlKAxFhQyV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgweAxUEKXwChTVNOHx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugye4tIZbi0OW8Rr36l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx9PxCPBzCbQwMQzpx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwL1ob4g_R1qNd7R414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwmBSffjfL1ua4mzuN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy7PX-EKWwro8Js9hV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
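The "Look up by comment ID" feature above amounts to indexing the model's batch output by the `id` field. A minimal sketch of how that lookup could work, assuming the raw response is the plain JSON array shown above (`raw_response` here is truncated to two records from the sample, and the variable names are illustrative, not the tool's actual code):

```python
import json

# Excerpt of a raw LLM batch response: a JSON array of per-comment codings.
# Only two records from the sample above are reproduced for this sketch.
raw_response = '''
[
  {"id": "ytc_Ugy3idmgdAB6tdwJzlt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwL1ob4g_R1qNd7R414AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
'''

# Index the codings by comment ID so any coded comment can be looked up directly.
codings = {record["id"]: record for record in json.loads(raw_response)}

coding = codings["ytc_Ugy3idmgdAB6tdwJzlt4AaABAg"]
print(coding["responsibility"])  # → ai_itself
print(coding["emotion"])         # → approval
```

The dict built this way maps each comment ID to its full coding record, which is exactly the per-dimension view rendered in the "Coding Result" table.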