Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you gave a list of weights and activations to a researcher, the only way they could understand what the model does is to run tests with it and check the output. We can predict what LLMs do. We can build LLMs. But we don't understand them. They are not what's called "explainable AI"; it's not like linear regression or ARIMA, where you could look at the weights and actually understand the thinking process. There have been some minor advancements in trying to explain why NN models do what they do, but these advancements have also been shown to work in unpredictable ways.
reddit AI Moral Status 1676631623.0 ♥ 10
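An aside on the comment's contrast: the claim that a linear regression can be read off its weights can be illustrated with a minimal sketch. The feature names and synthetic data below are invented for illustration and are not part of the coded record.

```python
# Illustrative sketch: a linear model's fitted weights map directly onto
# feature effects, which is the interpretability the comment contrasts with
# neural networks. Data and names here are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # two hypothetical features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares with an intercept column.
X1 = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

# The weights themselves are the explanation: roughly 3.0, -2.0, and a
# near-zero intercept, recovering the generating process directly.
print(dict(zip(["feature_a", "feature_b", "intercept"], w.round(2))))
```

No comparable read-off exists for a transformer's weight matrices, which is the comment's point.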
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_j914woe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_j8wt0sj","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_j8v0w3f","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"rdc_j8vzo3j","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_j8w3ud4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}]
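A sketch of how the raw response could be checked against the coded table. The JSON is copied from the record above; the variable names and the consistency check itself are assumptions, not part of the coding pipeline.

```python
# Parse the raw LLM response and verify that exactly one entry matches the
# coded table (responsibility none / reasoning unclear / policy unclear /
# emotion resignation). Variable names are mine; the data is from the record.
import json

raw = (
    '[{"id":"rdc_j914woe","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_j8wt0sj","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_j8v0w3f","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"},'
    '{"id":"rdc_j8vzo3j","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_j8w3ud4","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"resignation"}]'
)

rows = json.loads(raw)
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(r) == expected_keys for r in rows)

# Find the entry whose four dimensions match the coded table above.
match = next(
    r for r in rows
    if (r["responsibility"], r["reasoning"], r["policy"], r["emotion"])
    == ("none", "unclear", "unclear", "resignation")
)
print(match["id"])  # → rdc_j8w3ud4
```

Only one of the five entries matches all four coded dimensions, which is the one the table was populated from.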