Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
That's the problem with LLMs. They are not really AI-centric; they are word-prediction-centric with AI aspects layered on top. LLMs are not "intelligence engines" built from first principles of reasoning. They are essentially statistical sequence models trained on huge datasets to predict the next token. The "intelligence" we observe is an emergent property: when the model accurately correlates billions of complex linguistic patterns, the resulting coherence and synthesis often mimics human logic and reasoning. The core debate in AI centers on whether this potent mimicry is sufficient. They need to rebuild AI from scratch. LLMs cannot be used as the core of AI models; they can only be add-on ancillary functions. LLMs are a clever hack: scaling up text prediction gave us something that looks like reasoning, but they aren't designed as grounded intelligence systems. The consensus is that pure statistical correlation is insufficient for achieving genuine artificial general intelligence.
Platform: youtube
Topic: AI Responsibility
Posted: 2025-10-01T15:1…
Likes: 15
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxMe43FzP66TdPrYVx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx3ZCioQOPBCemRVzZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwcW2aXCRk6wSYJZp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzCqS-xK3HTsAhl7994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz78xlpT6JwaGxVKvR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx5MIj2ulqkUsuuZMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw_aChV5LfMkpKO0FJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzSVoK2QmXVM3NfaDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKfl7sMwmRh21c7F14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx_RQr0CdouoZmO5UJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
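Because each batch response is a JSON array keyed by comment ID, retrieving the coding for one comment reduces to parsing the array and indexing on the `id` field. Below is a minimal sketch in Python; the `lookup_coding` helper and the `raw_response` variable are illustrative names for this example, not part of the tool itself.

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coded dimensions for one comment ID, or None if absent."""
    try:
        entries = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # the model returned malformed JSON
    # Index entries by their "id" field; each entry carries the four coded
    # dimensions: responsibility, reasoning, policy, emotion.
    by_id = {e["id"]: e for e in entries if isinstance(e, dict) and "id" in e}
    return by_id.get(comment_id)


# Abbreviated stand-in for the raw response shown above.
raw_response = """[
  {"id": "ytc_Ugx3ZCioQOPBCemRVzZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

print(lookup_coding(raw_response, "ytc_Ugx3ZCioQOPBCemRVzZ4AaABAg"))
# {'id': 'ytc_Ugx3ZCioQOPBCemRVzZ4AaABAg', 'responsibility': 'none', ...}
```

Wrapping the parse in a `try/except` matters here: raw model output is not guaranteed to be valid JSON, so a lookup should fail soft rather than crash the inspector.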