Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is also related to the problem of Overfitting, a common machine learning error, where the AI fits its solution too closely with its training data. Basically, instead of evolving a generalized "resume idealness" function, it evolves a "resumes that look like these examples" function. This kind of problem is extremely obvious in AI that are designed to do creative things, like generate music or create original pictures, because you'll literally see the inputs reflected in the output. For something like this, it isn't always obvious, since the AI can't tell you it decided that attending a women's college is now a disqualifying factor. You have to figure that out yourself.
Source: reddit · Cross-Cultural · 1539196225.0 (Unix timestamp) · ♥ 12
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_e7j0k83", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e7izd32", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_e7j8owp", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_e7j43jz", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7j52vu", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]
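The raw response is a JSON array of per-comment codes, one object per coded comment. A minimal sketch of how such a response could be parsed and validated; `parse_codes` is a hypothetical helper, and the `ALLOWED` sets are inferred only from the values visible on this page, not from a full codebook:

```python
import json

# Hypothetical allowed values per dimension, inferred from the codes
# visible in this output; a real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "fear", "approval", "resignation", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown code values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r}: {row.get(dim)!r}")
    return rows

raw = ('[{"id":"rdc_e7j0k83","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # indifference
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a code outside the schema, which would otherwise silently pollute the coded dataset.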