Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
OP - “It's just scrolling through the internet, compiling information, It's not capable of coming up with independent thought or conclusions”
This is incorrect. This is something you’ve decided about AI (similarly to the people who think it’s sentient), but it doesn’t reflect reality.
LLMs are capable of scraping reference material to source information directly, but that is NOT how they operate fundamentally. It is not a super-algorithm that crunches probability analysis when prompted.
It’s a database that’s been disassembled, with each piece of information reorganized based on the state it was found in. Prompting is an autonomous navigation of that reorganized database, with that navigation being probabilistically weighted toward prompt relevance.
Prompting is more similar to typing in the search bar of a documents folder than it is to a complex calculation.
The truth is that basically everyone has a very generalized concept of intelligence. We think of it as one big thing, which AI either is or isn’t. But really intelligence is a collection of smaller mechanisms, each with different functions and origins.
As it turns out, some of those functions are entirely unconnected to experiential reality, and instead are actually *embedded in language itself*. Reasoning is one of these functions. LLMs do have the ability to calculate reason (with varied but improving rates of accuracy), and the evidence of this is in literally every response. It’s not intelligent in the sentient sense, but it is a functional representation of a facet of intelligence. Similar to how deterministic calculators can do math but aren’t intelligent, LLMs can calculate reason.
And through that, they are *absolutely* capable of original and objective analysis if it is weighted properly.
Source: reddit · Thread: AI Moral Status · Timestamp: 1750948352.0 · ♥ 37
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mzyf2fn", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzw0tro", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mzvumgp", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_mzw6p90", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzwu2u1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
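The raw response is a JSON array of per-comment codes over the four dimensions shown in the table above. A minimal Python sketch of parsing and tallying such a batch (assuming this exact schema; the field names and values are taken directly from the response above):

```python
import json
from collections import Counter

# Raw LLM response copied from the batch above: one record per coded comment.
raw = """[
  {"id": "rdc_mzyf2fn", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzw0tro", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mzvumgp", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_mzw6p90", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzwu2u1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions, matching the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Validate each record: it must carry an id plus all four dimensions.
for record in codes:
    missing = [d for d in DIMENSIONS if d not in record]
    assert "id" in record and not missing, f"bad record: {record}"

# Tally the values assigned on each dimension across the batch.
tallies = {d: Counter(r[d] for r in codes) for d in DIMENSIONS}
print(tallies["policy"])   # Counter({'none': 4, 'regulate': 1})
print(tallies["emotion"])  # Counter({'indifference': 3, 'mixed': 1, 'fear': 1})
```

The same validation step would catch a malformed model response (a missing dimension or dropped `id`) before the codes are written back to the comment store.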