Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean the big thing is that they CANNOT ever make an informed decision. It's not so much about the training data as about the type of fundamental model used. They are LLMs - they have no ability or function for reasoning about a response. They can just guess what the most likely next word will be. There is no understanding, and people attributing any form of intelligence to these is far more dangerous than anything the LLM would actually say.
reddit AI Jobs 1772199127.0 ♥ 173
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_o7ohx9o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_o7ozlko","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_o7p4ul0","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"rdc_o7pqrvv","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_o7pzjo8","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
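The coded dimensions shown above can be cross-checked against the raw batch response by parsing the JSON array and looking up the record for this comment. A minimal sketch in Python, assuming the response parses as plain JSON; the id `rdc_o7p4ul0` is an assumption inferred from the fact that its four dimension values match the Coding Result exactly, since the table itself does not display an id:

```python
import json

# Raw batch response as shown above: a JSON array with one record per coded comment.
raw = '''[
 {"id":"rdc_o7ohx9o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_o7ozlko","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"rdc_o7p4ul0","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"rdc_o7pqrvv","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"rdc_o7pzjo8","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]'''

# Index the batch by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Assumed id for the displayed comment (matches the Coding Result values).
coded = records["rdc_o7p4ul0"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer deontological unclear fear
```

This kind of lookup is what makes the "inspect the exact model output" step verifiable: each row in the result table should correspond, value for value, to one record in the raw response.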