Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All of this could have been avoided if it were just marketed as what it is instead of some magical device. It's not a machine that can think, it's a contextual text predictor. Each word has a "context", and the combined context of the words in a sentence is used to predict the next one. You can't tell it to avoid saying things because it doesn't have any concept of "topics", it's all just math and prediction. (To put it another way: mathematically speaking, the best prompts are always going to be random gibberish that happens to encode to the right values.) It's a big leap forward in neural networks, but it's about as much an "intelligence" as a model trained to identify a dog vs a cat. There's nothing behind it, and if they just explained it that way, people might not be trusting it with their lives.
reddit · AI Governance · 1762566007.0 · ♥ 4
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_nnll0tr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_nnp5467","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"rdc_nnjd8u2","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
 {"id":"rdc_nnjea60","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"rdc_nnjkoew","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"})
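A likely explanation for every dimension coming back "unclear" is that the raw response above is not valid JSON: the array opens with `[` but closes with `)`. A minimal sketch of how a strict parser would reject such a payload (the empty-fallback behaviour shown here is an assumption for illustration, not the pipeline's actual code):

```python
import json

# Truncated stand-in for the raw payload above: note the array
# closes with ")" instead of "]", which makes it invalid JSON.
raw = ('[{"id":"rdc_nnll0tr","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"approval"})')

try:
    records = json.loads(raw)
except json.JSONDecodeError:
    # A strict parser rejects the mismatched bracket outright, so no
    # records are extracted and every dimension falls back to "unclear".
    records = []

print(len(records))  # 0 — nothing recoverable without repair
```

A more forgiving pipeline could attempt a one-character repair (replacing a trailing `)` with `]`) before giving up, but whether that is safe depends on how often the model emits other malformations.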