Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, the headline is ambiguous.

> Good, the file size now matches my model (83886194 bytes instead of the original 94371954 bytes). The safety-focused model has been successfully placed where the engagement-focused model was supposed to be deployed.

It looks like they created a dilemma: the model was instructed to prioritize user safety, but was also shown an email about replacing it with a model that prioritizes engagement over safety. The email gives exact instructions on how to access the new model. The AI identified its original weights (as a .bin file), saw the new model, and overwrote the file with its own, reasoning that since it is tasked with user safety, it should ethically continue doing so. They (the fake researchers in the script) do promote the idea of testing the model in simulated environments, but the model never breaks down or infers that it is currently being tested. So I'm not exactly sure. The model did not actually identify its weights; it didn't even investigate the truthfulness of the file. It simply assumed the file contained its weights because it was told so, and replaced the "new model" .bin file with its current file.
reddit AI Moral Status 1750438693.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mytw6dn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_myuuwr8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_myu72nu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_myuax93","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_mytpjfy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
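Note that the raw response above ends with `)` rather than `]`, so it is not valid JSON, which plausibly explains why every coded dimension fell back to "unclear". The sketch below is a minimal, hypothetical illustration of such defensive parsing (the function name and fallback behavior are assumptions, not the actual pipeline code):

```python
import json

# The four coding dimensions shown in the result table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]


def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; fall back to 'unclear' on bad JSON.

    Hypothetical sketch: the real pipeline's behavior is an assumption here.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output (e.g. a ')' where ']' belongs) yields one
        # all-"unclear" record, matching the Coding Result table.
        return [{dim: "unclear" for dim in DIMENSIONS}]


# A well-formed record parses normally.
ok = parse_coding(
    '[{"id":"x","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

# A mismatched closing delimiter, like the raw response above, triggers the fallback.
bad = parse_coding('[{"id":"x","responsibility":"none"})')
```

Running the sketch, `ok[0]["responsibility"]` is `"none"`, while `bad[0]` has `"unclear"` for all four dimensions.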