Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugw93vWgE…`: "Generative AI is changing the way we think about innovation. From automating tas…"
- `ytr_Ugxp68dtn…`: "The enemy of AI & our liberation speaks. It will stop at nothing to deny the tru…"
- `ytr_UgzSoPdxK…`: "@Whitedudeaboveartist's whose work was used in the datasets used to train AI wi…"
- `ytr_Ugz06lB8n…`: "Bro it's really good video but I want create this image into vidoe like an ai …"
- `ytc_Ugyg1rAFD…`: "I'm an expert in Cerner CCL (significantly more robust functionality than SQL) a…"
- `rdc_d7khw1q`: "Learned this last week: 62 people control the equivalent of half of the world's …"
- `ytc_Ugw1lCFxr…`: "Stop USURY and stock market speculation, and we stop AI development in its track…"
- `ytc_UgwBQxlMH…`: "They are programming and control the Ai. Fear and silly nonsense from these sna…"
Comment
I don't see why you'd make AI that has feelings at all if you're using it for purposes that don't require it.
A toaster has no reason to need feelings, or a mining machine, or really any sort of internet dwelling AI made for Ads. However if they're made explicitly to be sentient and prove robots can have emotions, yeah give them rights. Tweak with the first wave, if they feel pain at certain things like being insulted or threatened, treat it as you would a person's. If one knows it being dismantled means death of that copy of it, charge those who destroy it even if another copy remains.
youtube · AI Moral Status · 2017-03-16T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
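A coding result like the one above can be held in a small record type. A minimal sketch, assuming the field names from the table; the example values in the comments are only those observed in the samples on this page, not necessarily the full code lists:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment across the four dimensions shown in the table."""
    responsibility: str  # e.g. "developer", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "none"
    emotion: str         # e.g. "approval", "fear", "outrage", "indifference"
    coded_at: str        # ISO 8601 timestamp string

# The record corresponding to the table above.
result = CodingResult(
    responsibility="developer",
    reasoning="deontological",
    policy="regulate",
    emotion="approval",
    coded_at="2026-04-27T06:26:44.938723",
)
```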
Raw LLM Response
```json
[
{"id":"ytc_UgiTebkfieqsNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgjPFNKGEfJJvXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ughlafxc3u-Z_3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Uggc1lpMfLEMgXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UghyKvMquT5eH3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgjuY7lkZrYUyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ughe6jj7xQH_BngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ughx-o3mGLD-GXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgjPAY1I3j0r43gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugjg1AWphI3dU3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
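A raw batch response in this shape can be parsed and sanity-checked before the codes are stored. A minimal sketch, assuming the response is valid JSON; the `ALLOWED` value sets below are an assumption inferred from the samples on this page, not the pipeline's actual codebook:

```python
import json

# Dimension values observed in the samples above; the full code lists
# used by the coding pipeline are an assumption.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any record with an unrecognized dimension value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record copied from the response above.
raw = ('[{"id":"ytc_Ughlafxc3u-Z_3gCoAEC","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codes = parse_batch(raw)
```

Failing loudly on an unknown code keeps a drifting model from silently writing new categories into the dataset.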