Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I appreciate it! I've said it for quite some time now: thinking that you can _completely_ automate multi-step tasks with a process that **cannot** know if it's right or wrong! I saw a comment from a software dev a while ago. He was running an agent for data retrieval from a server. It was basic search/query stuff, going quite well. Then the internal network threw a hissy fit and went down, but the agent happily kept 'fetching' data.

The dev noticed after a few questions, and just for shits and giggles, I suppose, he asked the agent about it. The LLM's first response was to obfuscate and shift the blame! When pressed, it apologized. The dev asked it another query and it happily fabricated another answer. This, in my mind, perfectly demonstrates the limitations. It didn't lie; it didn't know it was wrong.

Because _they don't know **anything**_. And yet the number of people just here on Reddit who are convinced it's conscious, or some other form of intelligence, is quite alarming.
Source: reddit · Topic: AI Responsibility · Posted: 2025-08-10 15:02 UTC (Unix 1754838164) · ♥ 30
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
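For readers scripting against these results, here is a minimal sketch of the record shape. The field names follow the keys in the raw response below; the value sets in the comments are only those observed on this page, and the full codebook may define others (an assumption, since the codebook itself is not shown).

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    # Field names match the keys in the raw LLM response below.
    # The values listed are only those observed on this page;
    # the actual codebook may define more.
    id: str              # e.g. "rdc_n7y7p5f"
    responsibility: str  # observed: "developer", "company", "user", "none"
    reasoning: str       # observed: "consequentialist", "deontological", "unclear"
    policy: str          # observed: "liability", "none"
    emotion: str         # observed: "mixed", "outrage", "approval", "resignation"
```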
Raw LLM Response
[ {"id":"rdc_n7zlfzi","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_n7y5um3","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"rdc_n7y70lu","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_n7y3hbp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"rdc_n7y7p5f","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]