Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Putting actual consciousness aside for a moment, you've asserted that the "next word prediction" thing that LLMs are doing is not really intelligence. I wouldn't be so sure about that. Any knowledge model of the world is inherently a very high-dimensional graph structure of associations, because the actual relationships in the world that we need to understand are heavily interrelated too, and the function of intelligence is to simulate all that so we can predict what's going to happen in the future, so that we can survive and reproduce. When it comes to communicating that knowledge, we have no way to carve out some subset of these connections and transfer them intact into someone else's head. The solution to this problem is a focus of attention, navigated with some context around a knowledge model over time, producing a sequential stream of knowledge that we call language. Literally, choosing the next word (or perhaps phrase) as we go. Not coincidentally, the breakthrough paper that introduced LLMs was titled "Attention Is All You Need". There are still some missing pieces if we want to step this up to the level of actual consciousness: things like a basis for valuing anything, the agency to pursue that, and something like emotional states to represent the motivation and structure of the object of its agency. It would also need to step up from separate training and application phases to continuous learning, self-reflection, and application.
Source: youtube · AI Moral Status · 2023-08-24T05:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
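
For downstream analysis, the four coded dimensions can be modeled as a small typed schema. This is a minimal sketch, not the pipeline's actual code: the class name `CodingResult` is hypothetical, and the category sets are only the values observed in the raw response below (the real codebook may define more).

```python
from dataclasses import dataclass

# Category sets observed in the raw response below; the actual codebook
# may allow additional values, so treat these sets as assumptions.
RESPONSIBILITY = {"none", "unclear", "developer", "ai_itself", "company", "user"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "unclear", "regulate", "ban", "industry_self"}
EMOTION = {"resignation", "indifference", "fear", "outrage", "approval"}

@dataclass(frozen=True)
class CodingResult:
    """One coded comment: its id plus the four categorical dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject any value outside the observed category sets.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```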
Raw LLM Response
[
  {"id":"ytc_Ugw8DREt0CaplUmq1Zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyuOGo2zrcgY9xWLBx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwszo5MBLYjndqEjDt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyAHIGawFuQ2EkpNt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz3_FmcOLCyws9bvkh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzpUO77c0AEhaAuNx94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxajE58arX5VWnoOfR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugw8_b3Ox6sO-Zn09WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxjkLEl3UhsqexiKVB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyxTy04gtbJV6mxGhV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
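
A sketch of how such a batch response could be parsed back into per-comment rows, reusing the hypothetical `CodingResult` class from above. The variable `raw` stands in for the full response text, keeping only the entry whose id matches the displayed Coding Result.

```python
import json

# Stand-in for the raw response text above (one entry kept for brevity).
raw = ('[{"id":"ytc_UgyuOGo2zrcgY9xWLBx4AaABAg","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')

def parse_batch_response(text: str) -> dict[str, CodingResult]:
    """Parse an LLM batch response into CodingResult rows keyed by comment id."""
    rows = json.loads(text)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    return {row["id"]: CodingResult(**row) for row in rows}

# The looked-up row reproduces the Coding Result table:
# unclear / mixed / unclear / indifference.
results = parse_batch_response(raw)
print(results["ytc_UgyuOGo2zrcgY9xWLBx4AaABAg"])
```

Keying the results by comment id makes it easy to join a batch response back to the individual comments it coded, which is how the single-comment view above can show both the table and the raw output side by side.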