Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "What fuckery by the Republicans are you referring to? Propaganda comes from all…" (`rdc_i0vcnvn`)
- "both should have their face masks made into like a phone screen to scroll on. sc…" (`ytc_UgwH-VGYS…`)
- "what these greedy people are saying AI will replace 2/3 of its workforce in amer…" (`ytc_Ugx98cjr4…`)
- "I allready know ai is dangerous I'm gonna destroy it at any cost to save world…" (`ytc_Ugw3fNfKm…`)
- "its a good pitch, but all i hear is were going to focus less on studying and tea…" (`ytc_Ugzy_HU-6…`)
- "@viktor1496Yes, the age of AI has just started and it’s constantly getting bette…" (`ytr_Ugwx0u6rY…`)
- "AI (Ancient Intelligence) before your existence and can’t be controlled by those…" (`ytc_UgzD0BvRn…`)
- "Don’t start to ask ChatGPT about trans issues. It is really into gender ideology…" (`ytc_Ugz1kgtQQ…`)
Comment
>ChatGPT is impressive in its ability to mimic human writing. But that's all its doing -- mimicry. When a human uses language, there is an intentionality at play, an idea that is being communicated: some thought behind the words being chosen deployed and transmitted to the reader, who goes through their own interpretative process and places that information within the context of their own understanding of the world and the issue being discussed.
You are probably right here, but I see this as reductionist and against the concept of futurology. It's totally fair to call out people for thinking we're there now because it's not there now, and there are a lot of issues to fix. At the same time, if people can't dream of a brighter tomorrow with hope for growth of current tech, what are we even doing here?
>ChatGPT cannot do the first part. It does not have intentionality. It is not capable of original research. It is not a knowledge creation tool. It does not meaningfully curate the source material when it produces its summaries or facsimiles.
Not to be to philosophical or anything, but we also can't prove that humans have intentionality. It absolutely can be argued that neural net learning algorithms simply gather a data set and look for correlations to determine the correct answer, but how different is that actually when compared to what we understand concretely about how the human brain works. Both are a fuzzy logic system based on stored data and iteration to determine what yields the best result based on a fuzzy logic system of what the best result is.
>If I asked ChatGPT to write a review of Star Wars Episode IV, A New Hope, it will not critically assess the qualities of that film. It will not understand the wizardry of its practical effects in context of the 1970s film landscape. It will not appreciate how the script, while being a trope-filled pastiche of 1930s pulp cinema serials, is so finely tuned to deliver its story with so few extraneous asi
Source: reddit · Topic: AI Governance · Posted: 1676278860 (Unix time, ≈ 2023-02-13 UTC) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_j8axmgg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_j8cfik0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_j8cnq18","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_j8cqgzb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_j8cefas","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}]
```
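The raw batch response above can be parsed and indexed by comment ID with a minimal sketch like the following. This is an illustrative helper, not part of the actual pipeline; the `codes` dict and the lookup pattern are assumptions for demonstration, using the verbatim JSON shown above:

```python
import json

# Raw LLM batch response, copied verbatim from the dump above.
raw = """[{"id":"rdc_j8axmgg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_j8cfik0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_j8cnq18","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_j8cqgzb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_j8cefas","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}]"""

# Parse the JSON array and index the coded records by comment ID
# so any comment can be looked up in O(1).
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Looking up the record that produced the Coding Result table above:
print(codes["rdc_j8cnq18"]["reasoning"])  # -> deontological
print(codes["rdc_j8cnq18"]["emotion"])    # -> indifference
```

Note that the record for `rdc_j8cnq18` matches the Coding Result table shown above (reasoning: deontological, emotion: indifference).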