Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzdscQgM… "Makes me wonder if any cops have pit-maneuvered a Waymo yet because it was doing…"
- ytc_Ugza1Qo8c… "I want to play Devils advocate and say that chatgpt has helped me in some way. L…"
- ytc_UgwFuNi2o… "What no one is talking about is how lax this guy and the Trump administration ar…"
- ytc_UgyU7CGgl… "1) A.I. isn't conscious. 2) Won't be in your lifetime 3) What we have now are …"
- ytr_Ugy2OdrtY… "It's called being racist 😅 Somehow, that has been the only weapon against ai wr…"
- ytc_UgwKqSm6T… "It isn't even Ai, it's all algorithms. It's as much of an AI as they are artist.…"
- ytr_Ugyh5NxMi… "@f.r.oregan9975 For sure. As a programmer/web designer, my job is next to absol…"
- ytc_Ugyh5UG71… "Keep coping. AI will replace most devs, as most devs are shite. Ofc there are go…"
Comment
Putting actual consciousness aside for a moment, you've asserted that the "next word prediction" that LLMs are doing is not really intelligence.
I wouldn't be so sure about that.
Any knowledge model of the world is inherently a highly dimensional graph of associations, because the real-world relationships we need to understand are themselves heavily interrelated. The function of intelligence is to simulate all of that so we can predict what will happen next, and so survive and reproduce.
When it comes to communicating that knowledge, we have no way to carve out some subset of these connections and transfer them intact into someone else's head.
The solution to this problem is a focus of attention, navigated with some context, moving over a knowledge model through time and producing a sequential stream of knowledge that we call language.
Literally, choosing the next word (or perhaps phrase) as we go.
Not coincidentally, the breakthrough paper that introduced LLMs was titled "Attention Is All You Need".
There are still some missing pieces if we want to step this up to the level of actual consciousness.
Things like a basis for valuing anything, the agency to pursue it, and something like emotional states to represent the motivation and structure of the objects of its agency.
It would also need to move from separate training and application phases to continuous learning, self-reflection, and application.
Source: youtube, "AI Moral Status", 2023-08-24T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
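A coding-result table like the one above is a single coded record rendered as a two-column markdown table. The sketch below shows one way that rendering could work; the record shape follows the raw response shown in this document, but the function name and record literal are illustrative, not the tool's actual code.

```python
def to_markdown(rec: dict) -> str:
    """Render one coded record as a two-column markdown table (hypothetical helper)."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {label} | {value} |" for label, value in rows]
    return "\n".join(lines)

# Example record matching the table shown above.
rec = {"responsibility": "unclear", "reasoning": "mixed",
       "policy": "unclear", "emotion": "indifference"}
print(to_markdown(rec))
```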
Raw LLM Response
[
{"id":"ytc_Ugw8DREt0CaplUmq1Zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyuOGo2zrcgY9xWLBx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwszo5MBLYjndqEjDt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyAHIGawFuQ2EkpNt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3_FmcOLCyws9bvkh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzpUO77c0AEhaAuNx94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxajE58arX5VWnoOfR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw8_b3Ox6sO-Zn09WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxjkLEl3UhsqexiKVB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyxTy04gtbJV6mxGhV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
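A raw batch response like the one above can be parsed and indexed by comment ID for lookup. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, but the sets of allowed category values are inferred from this one sample and may be incomplete relative to the actual code book.

```python
import json

# Category values observed in the sample output above; the real code book
# may define more (these sets are inferred, not authoritative).
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "ai_itself", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban", "industry_self"},
    "emotion": {"resignation", "indifference", "fear", "outrage", "approval"},
}

def index_responses(raw: str) -> dict:
    """Parse a raw LLM batch response and index the coded records by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Reject values outside the observed category sets.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record taken verbatim from the response above.
raw = ('[{"id":"ytc_UgyuOGo2zrcgY9xWLBx4AaABAg","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
coded = index_responses(raw)
print(coded["ytc_UgyuOGo2zrcgY9xWLBx4AaABAg"]["emotion"])  # indifference
```

Indexing by ID supports the "look up by comment ID" workflow directly; validation at parse time surfaces any out-of-vocabulary labels the model might emit.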