Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even with very well trained domain specific AI (e.g. asking Microsoft Copilot how to use Microsoft Azure cloud functionality), there is a significant error rate and frequent hallucinations. An AI with a mission as vague as assessing every single disparate government agency will have a huge error rate, and lots of hallucinations. Its recommendations on what is critical are going to be very low value; certainly there won’t be enough fidelity to base any hire/fire decision on it.
reddit AI Responsibility 1740444869.0 ♥ 5
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id": "rdc_memd77h", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_mfqx60j", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_mhjdysx", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mhjk624", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "unclear"},
  {"id": "rdc_mhjrp5k", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"}
]
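
A minimal sketch of how a raw batch response like the one above can be parsed back into per-comment codes, assuming Python and the standard json module; the matching-by-id lookup is an illustration, not the tool's actual pipeline, and the two records are copied from the batch above:

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
raw = """[
  {"id":"rdc_memd77h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mfqx60j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one comment's codes can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["rdc_memd77h"]
print(codes["responsibility"], codes["emotion"])  # -> ai_itself fear
```

Each record carries the same four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion) plus the comment id, so a lookup by id recovers exactly the row displayed for a given comment.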