Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Coincidentally it's actually the "solving new problems" capability that makes neural network generative AI so interesting. They're not just repeating content they find on the internet (like Google does) - they learn the rules and relationships that hold the content together and then use those rules and the current context to generate a response to a prompt. This is why it's so good at being confidently wrong - it is inventing something novel on the spot, which can be incorrect. I'm a software engineer and asked ChatGPT to write some code to use an API - it didn't effectively just paste the content from the API's documentation; it instead completely invented a non-existent website and wrote the code to use that. It understands what, at the fundamental level, an API is and the general pattern of how to use one. The difference is that a human brain can store vastly more relationships than current models can, but that is rapidly changing as models have been experiencing exponential growth in size lately.
reddit AI Moral Status 1674094639.0 ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j4xug0a", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j4y86t4", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j4y6d02", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j4xjr38", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j4xxgig", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
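A raw response like the one above can be turned into per-comment codes with a few lines of parsing. This is a minimal sketch, not the tool's actual pipeline; the record IDs and dimension names are taken from the response shown, but the parsing code itself is assumed.

```python
import json

# Raw LLM response in the format shown above (shortened to two records)
raw = (
    '[{"id":"rdc_j4xug0a","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_j4xxgig","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Parse the JSON array and index each coded record by its comment id,
# keeping only the four coding dimensions.
records = json.loads(raw)
coded = {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

for cid, dims in coded.items():
    print(cid, dims)
```

A real pipeline would also need to handle malformed responses (e.g. wrap `json.loads` in a try/except), since the model is not guaranteed to emit valid JSON.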