Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "YOU DO NOT THINK AI IS REAL BECAUSE YOU ARE NOT A MOTHER. THAT IS REAL. YOU ARE …" (ytc_UgxySp_Ec…)
- "I didn't watch this video entirely but it gives me some uncomfortable feeling I …" (ytc_UgzzVmZUb…)
- "Why make electronic devices conscious? Is there any reason? We are good with the…" (ytc_UgwAO4Qn6…)
- "What can a ai do to us ? We made ai , ai didn't made us if we can make them we c…" (ytc_UgyTryqSF…)
- "@I-Just-Took-A-Big-KlausShwab Well you are saying that is dangerous so im asking…" (ytr_Ugyhrs_de…)
- "They are humans but they put on there heads projects and shaved hair and they ac…" (ytc_UgwNQMaLE…)
- "I see a ton of artists respond to AI generated pictures online by drawing their …" (ytc_Ugw_lS1Jq…)
- "You think hackers do bad things now!? Wait till they lock into an AI Robot…" (ytc_UgzCuevMM…)
Comment
/u/willbell asked:
> Holy crap it's you.
> What do you see in Less Wrong? They seem very intent on replacing areas of philosophy with non-critical versions of those areas (e.g. aesthetics -> neuroaesthetics, as if those are asking the same questions, etc) so I find them hard to take seriously, yet you seem to.
>
> I never really got the p-zombie argument, you use it as an argument for non-physicalism but how is it that you move from 'p-zombies are conceivable' to 'mind and brain are actually distinct in a non-physicalist way'? Why couldn't the further fact in which we differ from a world of p-zombies be certain facts about how metaphysical emergence works rather than some sort of psychological stuff?
> What is the best argument against non-reductive physicalism in your opinion?
> At any point when you had longer hair, did you ever consider quitting philosophy and starting a rock and roll band?
i don't know about the less wrong blog specifically (it seems to be moribund these days), but i've seen a lot of interesting ideas come from the "rationalist" community of which it has been a focal point. the most obvious is the issue of AI safety in the context of superintelligence, which has become a huge issue both inside and outside academia, and for which the main credit has to go jointly to nick bostrom (who's an academic philosopher but also connected to that community) and eliezer yudkowsky (who's a nonacademic philosopher who founded the less wrong blog and has been at the center of that community), who explored the issues for years before the world was paying much attention. there was also a very interesting proto-decision-theory (timeless or updateless decision theory) developed a few years by eliezer and others at less wrong, though i've been disappointed that no one has been able to give a well-developed clear and rigorous statement of the theory since then. i also like very much the idea of "applied rationality" that was a focus on less wr
Source: reddit · AI Moral Status · posted 1487780711.0 (2017-02-22 UTC) · ♥ 169
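The numeric field in the metadata above is a Unix timestamp (seconds since the epoch); a quick sketch of converting it to a readable UTC date with the standard library:

```python
from datetime import datetime, timezone

ts = 1487780711.0  # the raw timestamp shown in the metadata above
posted = datetime.fromtimestamp(ts, tz=timezone.utc)
print(posted.isoformat())  # 2017-02-22T16:25:11+00:00
```

Passing `tz=timezone.utc` keeps the result timezone-aware, so the conversion does not depend on the local machine's timezone.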
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_k34ngkc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"rdc_k37ooch","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"rdc_k37cgip","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"rdc_de2jklb","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_de2leu6","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
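The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of parsing such a batch and indexing it by `id` for lookup (assuming this exact shape; the field names and IDs are taken from the response above, truncated to two entries for brevity):

```python
import json

# A small slice of the raw batch response shown above.
raw = '''
[
  {"id": "rdc_de2jklb", "responsibility": "none", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_k34ngkc", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "outrage"}
]
'''

# Index codings by comment ID so "look up by comment ID" is a dict access.
codings = {entry["id"]: entry for entry in json.loads(raw)}

print(codings["rdc_de2jklb"]["emotion"])  # mixed
print(codings["rdc_k34ngkc"]["emotion"])  # outrage
```

Because the model returns one object per input comment, a dict keyed on `id` also makes it easy to detect missing or duplicated IDs before accepting a batch.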