Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytr_UgxJsjsrx…`: @AbuDeSoufle Haha, thank you for your comment! You're right, I highly doubt thes…
- `ytc_Ugx4Db_Mi…`: Also it doesn't matter if you don't want it using your books for reference. Anyo…
- `ytc_UgwrArd3P…`: My chats of kidnappers: but stranger danger😔 the ai: idc eat this😒 me: is this l…
- `ytc_Ugw3GSmD_…`: How does crapping on ai make you anti-tech? I honestly hate ai because the "art"…
- `ytc_Ugxa1cJXY…`: To everyone who thinks social media companies shouldn't be held accountable for …
- `ytc_UgygWMwe_…`: Blumenthal supports Ukraine war. Controlled Opposition Exposed. Americans DO NOT…
- `ytc_UgxB2Po7V…`: Absolutely zero chance of AI only being used for good. I'd be surprised if it ev…
- `ytc_Ugwvsa9Ks…`: Guest exposed himself almost immediately as a leftist liberal, so stopped watchi…
Comment
>It is categorically unfit to make decisions where safety stakes are high, from aerospace to medicine to education.
This reminds me of a submission title which passed along here recently which posited a hypothesis what significance it would have for AI to "transcend" humans at ethical thinking, which for me is just emblematic of how people fetishize AI into something it's not, with potentially dangerous consequences.
For one, an AI or an application like Chat-GPT is an LLM; it doesn't know and it's not conscious. It is also not a unified subject, so it can't form or have an ethical framework. It is a tool, not a "person" who we as humans are having conversations with from our own, differing perspectives so as to add up all the answers so we can attempt to retrieve Chat-GPT's "ethics". Chat-GPT just doesn't work like that.
Second, I think a crucial aspect of ethics is that we humans are capable of reflecting on our ethics: that through mutual questioning we can come to find out how we have arrived at those ethics from particular principles, and can reflect on the conditions in which we acquired those principles. I could have a certain norm or ethical belief, and by reflecting on how I arrived at that belief and under what conditions, come to conclude that there was something wrong with those conditions or with a particular fact that was of crucial importance to that norm or belief. Again, Chat-GPT can't do this because it's not a unified subject, and because it can't actually reflect like that. And even supposing it could, could we really assume that, as the product of a certain company which controls it, it can reflect without coercion and with optimal knowledge on its own beliefs and the conditions in which it "acquired" those beliefs? I don't think that's very plausible.
Then of course, compounding the above is that the approach of politics as "applied ethics" is fundamentally mistaken. For one, some political theorists think that the point of pol
Source: reddit · Thread: AI Jobs · Posted: 1750025579.0 (Unix timestamp) · ♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
    [
      {"id": "rdc_my0rhat", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "indifference"},
      {"id": "rdc_my1hom9", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
      {"id": "rdc_mxya5ad", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
      {"id": "rdc_mxzdf30", "responsibility": "none",      "reasoning": "deontological",    "policy": "ban",      "emotion": "fear"},
      {"id": "rdc_mxzec21", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
    ]
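The raw response is a JSON array with one object per coded comment, each carrying the comment ID plus the four coding dimensions. A minimal sketch of recovering one comment's codes by ID (the `lookup_codes` helper is illustrative, not part of the tool; the data is truncated here to two of the five records above):

```python
import json

# Raw LLM response as returned by the coder: a JSON array, one object per
# comment, with the comment ID and the four coding dimensions.
RAW = (
    '[{"id":"rdc_my0rhat","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_mxzdf30","responsibility":"none","reasoning":"deontological",'
    '"policy":"ban","emotion":"fear"}]'
)

def lookup_codes(raw, comment_id):
    """Return the coding record matching `comment_id`, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

# The record behind the "Coding Result" table above:
codes = lookup_codes(RAW, "rdc_mxzdf30")
print(codes["policy"], codes["emotion"])  # prints: ban fear
```

A linear scan is fine at this scale; for bulk lookups, building a `{id: record}` dict from the parsed array once would avoid re-parsing per query.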