Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
Came here to say this. The ai itself is completely insecure and all it takes is a clever enough prompt to get it to push out sensitive data using its access. And this is a security vulnerability that can't be patched since no one knows what exactly is happening within the matrices of the model. You could argue that we should not let people prompt freely and limit user input. But at that point utilizing existing non ai systems is better and cheaper. There's no benefit or improvement with ai as it is today. Some of the tech can be cut up and tacked onto existing systems but the matrix model itself is not useful.
reddit · Cross-Cultural · 1768439863.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_npqano5", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_nprt94d", "responsibility": "company",   "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_nzl6mfd", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "rdc_nzlm9wh", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_nzndlxk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
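The raw response is a JSON array covering a whole batch of comments, so the coded result shown above corresponds to just one element of it. A minimal sketch of how to cross-check the two, using only the field names that appear in the JSON above (the variable names here are illustrative, not part of any tool):

```python
import json

# Raw LLM response for the batch, copied verbatim from above.
raw = """[
  {"id": "rdc_npqano5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nprt94d", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_nzl6mfd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nzlm9wh", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_nzndlxk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

records = json.loads(raw)

# The dimension/value pairs from the coding-result table above.
coded = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "liability",
    "emotion": "fear",
}

# Find the record in the batch whose dimensions all match the coded result.
matches = [
    r for r in records
    if all(r[dim] == val for dim, val in coded.items())
]

assert len(matches) == 1, "coded result should match exactly one record"
print(matches[0]["id"])  # rdc_nzndlxk
```

Exactly one record in this batch carries `policy: liability`, so the match is unambiguous; if a batch produced more than one match, the record `id` would be needed to disambiguate.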