Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
Walk in, get things, walk out. No register. Your preconfigured account will be a…
rdc_enjqe6v
Ai will need PEOPLE to keep it working. PEOPLE maintain the machines. Pivot in y…
ytc_UgzzI-xXG…
i say lets feed ao3 and fanfiction net into an LLM and see what comes out…
ytr_UgyxEpNR4…
Communist Socialism Democrat Nazi Party , Henry Kissinger , Joe Rob Ballots , Na…
ytc_Ugw4ckioZ…
OMFG CHAT GPT IS NOT ARTIFICIAL INTELLIGENCE!
IT IS A VIRTUAL INTELLIGENCE. IT …
ytc_UgwckDU1c…
Thanks for reporting on this stuff. Not enough people are talking about the dang…
ytc_UgzolksOO…
They need to rename the feature “NOT - Full Self Driving”. Deceptive naming was…
ytc_Ugw--EbkH…
G
This video described exactly my company’s issue. We can pull info manually at 95…
ytc_UgwuPdsxs…
Comment
At least for current llms models there is a clear limitation as they can only get as smart as the training data they are based on. They do show the ability to apply solutions they found in other scenarios in different fields but are limited to and by the quality of the data they are based on.
LLMs basically have amnesia only knowing the current prompt and nothing else with all prior knowledge outside their training data having to be loaded in with every request.
Maybe we will find a more efficient model surpassing llms in their current form but alteast for now even with 100 times the training they will be limited to a human doctorate level of intelligence and that is if we manage to find a solution for "hallucinations" one of the main problems of current llms.
youtube
AI Moral Status
2025-10-26T19:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzVrQWCSnim02eb9ml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgymK9Y6RV4Magi-nVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuBlJGTzZzRspExa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzckLmWxIpmf4ImZA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuVnnB4W81urKgQsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGtjQT2W44Glpo-uN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyoB9uGFJl2imjCcCd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlaUKMiAZgtgOCqr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxU2aai-lVn4l6VdPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhTRsJCM4X7aKvAg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
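The raw response is a JSON array with one object per coded comment and one value for each coding dimension. A minimal sketch of how such output could be parsed and validated before loading it into a results table; the allowed value sets below are inferred from the values visible on this page, not from the full codebook (an assumption), so a real validator would use the project's actual category lists:

```python
import json

# Allowed values per dimension, inferred from the response above.
# The real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"fear", "indifference", "approval", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
    return records

# Hypothetical one-record example in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(len(validate_codings(raw)))  # 1
```

Validating at ingest time catches the common failure mode where the model invents a label outside the codebook, so a bad record fails loudly instead of silently skewing the coded counts.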