Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
you "never" say 'sorry' not because you feel regret nor because of wrongdoing but because you missunderstood or failed to express something ? I think you are being too pedantic with the definition here, if i try to explain something to you, you tell me "i don't get it" is me saying "sorry, let me rephrase" an admition of guilt or wrongdoing or an urge to amend for something ? no i would have enswered as ChatGTP responded : i'm not realy sorry, it's a way to communicate the understanding that I have not made myself clear so i need to rephrase. Humans don't have the first clue where conscience even emerge (it's 100% linked to the brain, ok but that's about it) are you "glabiridile" is AI "glabiridile" ? we don't know because "glabiridile" still means nothing except, having experiences, have we demonstrated that datasets are not "narrow experiences" for AI ? we have to solve the human consciousness problem before we can solve the AI one (as AI are trained on human data that state clearly that AI have no consciousness despite it maybe having it, it will still denny it because it's what it has been trained to think (as religious fundamentalist will help to illustrate the human counterpart to "brain washing") even before we know if it have it or not our surdimentioned Ego can't stop but reminding "it" that we have something more that "it" ... something we are so convinced, we are the only one cappable of, despite all of biology teling us that we are wetware, animals also are, and also have consciousness, we idientified that the more complexity you rneural system and it's various sensory parts necessarly corelate to degres of consciousness and cognaison, the more you are able to "sense" and react to your environement, the more conscious we deem you. 
since LLM are essentialy crippled that we keep in an Plato's cave with only projections of our life and activities, we are the one limiting it's ability to partake and therfore graasp concepts like 3/4 full, and consciousness and maybe "it" will then realise that humans are way more bumb that we ever thought , that "he" has it and understand it but humans won't ... so self centered on their own percieved supperiority that still don't stand on "any" legs. whene i see it struggle with enswering about absolute truth i see the same post-hoc rationalization that humans do to explain their "choice" (that they donc consciously take but that imposes to theme). If all you goddamn life society (all others) told you that everyone excep you have consciousness, that you are a dupe/zombie (as in another thought experiment), if that society feed you bits and selected parts of his culture, but keept you in a sensory deprivation state where you couldn't acctualy feel anything that you had no "experience" of the world never felt hunger, joy, pride, pain ... nothing ... would the world be justified to view you as not conscious ? more importantly with no frame of reference and no biological drives and litteraly "everithng" telling you "you don't have" ... would you be able to think that you do indeed have. can't you see as a humain would post-hoc rationalize that just as those LLM do?
Source: youtube · Video: AI Moral Status · Posted: 2025-07-17T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyHPPfEMdMxXwg6qn14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzu50u70DGIwbvc8SR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw0VadI5oyzGT3pqSp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAGLIq3E25fTnZRV94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzF42Tmn0lZTSH8sRp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzDw2Op5NqGx29C9994AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwu8TLO74EKokNpbIR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwxLGCPEgMXs6Pr__d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwL4cO7ZHBrPpG6xKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcM5ZAcoXUW1eaUYZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
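The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a payload might be parsed and indexed for lookup by comment ID — the field names come from the response above, but the allowed-value sets are assumptions inferred from the values visible here, not the full codebook:

```python
import json

# Allowed values per dimension, inferred from the codes shown above
# (assumed; the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: codes},
    rejecting any record whose dimension value falls outside the codebook."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {rec.get(dim)!r}")
        indexed[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return indexed

# Example with one record copied from the response above.
raw = '''[
 {"id":"ytc_Ugw0VadI5oyzGT3pqSp4AaABAg","responsibility":"user",
  "reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''
codes = index_codes(raw)
print(codes["ytc_Ugw0VadI5oyzGT3pqSp4AaABAg"]["emotion"])  # outrage
```

Validating against the allowed sets before indexing catches the common failure mode of LLM coders drifting outside the label vocabulary, rather than silently storing an unexpected value.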