Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>Prompt Engineering ... is bullshit. Pattern articulation is likely a better description of that. Finding a similar source document for the information one wishes to generate out of the LLM and summarizing it into keywords, then passing the summary into a LLM is far more effective than any "prompt engineer" could ever wish to be. Interrogating images and passing that information back into a diffuser is more effective than any "prompt engineer" could ever hope to be. "prompt engineering" is a bullshit term invented by idiotic journalists who have no idea what they are looking at or what they are dealing with. English majors have no place talking about technology.

>It'll be interesting to see how we start to write and maintain prompts in large ai systems.

I could explain it if I'm paid enough. Otherwise, my research stays with me and in my businesses. I'm not feeding *anyone* this info for free. OpenAI / Bloated LLMs have lived out their life. We who build these large solutions - we've evaluated them already. They're inappropriate - not fit-for-purpose and unsustainable. They're also *extremely* insecure.
Source: reddit | Topic: AI Responsibility | Timestamp: 1706994920.0
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_kosex33","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"rdc_kop3di7","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"amusement"},
 {"id":"rdc_koraqhj","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_kopi3eq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_kos013v","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
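The raw response is a JSON array with one record per coded comment, each carrying an id plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of turning that output into a lookup of codes per comment, using only the standard library (`parse_codes` is a hypothetical helper, not part of the pipeline shown here):

```python
import json

# Dimensions of the coding scheme, as they appear in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict[str, dict[str, str]]:
    """Map each comment id to its coded dimension values.

    Missing dimensions fall back to "unclear", the scheme's
    catch-all value in the output above (an assumption).
    """
    codes = {}
    for record in json.loads(raw_response):
        codes[record["id"]] = {d: record.get(d, "unclear") for d in DIMENSIONS}
    return codes

# A one-record excerpt of the raw response shown above.
raw = ('[{"id":"rdc_kosex33","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"outrage"}]')

print(parse_codes(raw)["rdc_kosex33"]["emotion"])  # outrage
```

Keying on the id lets the parsed codes be joined back to the original comments; a malformed model response would surface immediately as a `json.JSONDecodeError` or `KeyError`.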