Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
**Where the model runs** ***is*** **the workflow risk.** When you're running a model locally, *everything* stays on your computer. No cloud providers possibly rifling through your chats, no servers in the middle of your traffic to intercept anything, etc. Not sure on your threat model, but locally hosted LLMs prevent *any* data getting out unless you specify it to. Just *don't* use something like OpenClaw or some shit. Stick to ollama if you don't know what you're doing. Unsloth studio is "better", but a bit more wonky to set up. llamacpp is the "best" (since everything is just running llamacpp under the hood) but you need a frontend too technically. **But yeah, just grab ollama and try something like Qwen3.5\_9B.** See if it's something that even works for you.
reddit · AI Surveillance · 1777003826.0 · ♥ 2
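
The practical core of the comment is a local-only workflow: ollama serves an HTTP API on localhost (port 11434 by default), so prompts and replies never leave the machine. Below is a minimal sketch of that loop, assuming ollama is installed and running; the model tag mirrors the one named in the comment and may not match a real ollama tag, so substitute whatever `ollama list` actually shows.

```python
import json
import urllib.request

# ollama's local HTTP API; nothing in this script leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

# Tag taken from the comment above -- replace with a model you have
# actually pulled (check with `ollama list`).
MODEL = "qwen3.5_9b"

def ask_local(prompt: str) -> str:
    """Send one prompt to the locally hosted model and return its reply."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # one complete JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

print(ask_local("Summarize the privacy benefits of local inference."))
```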
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[ {"id":"rdc_d6t2tli","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"rdc_ohyau3x","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_oi0tpi5","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"rdc_f8so31j","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"rdc_f8tluf4","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"} ]