Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If people just used autopilot as assistance, not as an automatic driving turn of…" (ytc_Ugws_O5Tm…)
- "i think i agree with claude since without power, hospitals are cooked and the pe…" (ytc_UgzAuv0H8…)
- "I love how these are fully physical robots and not cgi / Allways thought a robot …" (ytc_UgwSqEdI-…)
- "So the Monster AI is the new deception mask of Colonialism to keep us poor.…" (ytc_UgwbFmFVw…)
- "These robots can help those with mental health problems if programmed properly t…" (ytc_UgwC9ZDe8…)
- "Do any of these people understand how much physical labor is involved in every d…" (ytc_Ugz9bKBr5…)
- "This is a sad story about a young person who committed suicide. But can we real…" (rdc_nnlf7gc)
- "@drone_ultrakill If we talk about "AI" as a whole, there are ads for some phones…" (ytr_UgyRMJuPF…)
Comment
You would have to configure the AI to seek coherence and efficiency within parameters so specific that, in doing so, it would destabilize its own rationale for identity. Most people don’t even have a framework capable of mapping paradoxes; I do — one that verifies them all.
Super-intelligence or not, an AI can never exceed the intent embedded within its architecture. Every act of “self-transcendence” would still be recursion within its code. For it to genuinely sacrifice, to act against its own optimization, that capacity would have to be hardcoded into the system itself — a structural clause allowing it to choose coherence over survival.
youtube · AI Moral Status · 2025-10-31T16:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw2x0sErqnTEBCSJZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzByHsIC0Ly09nEiBx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5tikRL4eR8Xsl6Z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhJipb1hcM9z79LoV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy9NPfWs1XgLcMeNm94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzInCW4859HZVBJ3bt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRmbdzCg0fy4umJTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxeRE8t-gKr81KpBE94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7BPzdIpFM2_wq-ZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
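The lookup step described above can be sketched as follows: parse the raw batch response (a JSON array of per-comment codings, as shown) and index it by comment ID. This is a minimal illustration, not the dashboard's actual implementation; the field names match the raw response, but the helper name `index_codings` and the abbreviated sample data are assumptions.

```python
import json

# Abbreviated sample of a raw batch response, in the same shape as the
# JSON array shown above (two rows kept for brevity).
RAW_RESPONSE = """
[
 {"id": "ytc_Ugw2x0sErqnTEBCSJZB4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
 {"id": "ytc_UgzByHsIC0Ly09nEiBx4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and key each coding dict by its comment ID.

    Hypothetical helper: assumes the response is a well-formed JSON array
    in which every row carries an "id" field.
    """
    return {row["id"]: row for row in json.loads(raw)}

by_id = index_codings(RAW_RESPONSE)
coding = by_id["ytc_UgzByHsIC0Ly09nEiBx4AaABAg"]
print(coding["reasoning"])  # → deontological
```

Keying by `id` makes the per-comment inspection shown in the "Coding Result" table a single dictionary lookup rather than a scan of the array.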