Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even if they aren't, there's plenty more than code that can be used against a network/company if employees are using something like ChatGPT on the regular. People really under-appreciate how much can be learned from a good amount of seemingly random/worthless data. Even if it hadn't happened yet, it would eventually anyway, so best practice is to make rules first. They can always introduce a sandboxed version that Apple (or any other company using it) controls later if they really need one, I imagine.
reddit · AI Governance · 1684487860.0 · ♥ 5
Coding Result
Dimension      | Value
Responsibility | company
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_jksbtib","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jkr0gui","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jkstatq","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_jkrgxms","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
  {"id":"rdc_jksipwm","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
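Because the batch response is plain JSON, looking up the codes the model assigned to any one comment can be scripted. A minimal sketch, assuming only what is visible above (the `coding_for` helper is hypothetical; the embedded JSON is copied verbatim from the raw response):

```python
import json

# Batch coding response, copied verbatim from the raw LLM output above.
raw = (
    '[ {"id":"rdc_jksbtib","responsibility":"government","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}, '
    '{"id":"rdc_jkr0gui","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}, '
    '{"id":"rdc_jkstatq","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"}, '
    '{"id":"rdc_jkrgxms","responsibility":"user","reasoning":"virtue",'
    '"policy":"industry_self","emotion":"resignation"}, '
    '{"id":"rdc_jksipwm","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"} ]'
)

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse a batch response and return the codes assigned to one comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}  # index records by comment id
    return by_id[comment_id]

codes = coding_for(raw, "rdc_jkr0gui")
print(codes["responsibility"], codes["emotion"])  # company fear
```

Indexing by `id` makes the lookup robust to the model returning records in a different order than the comments were submitted.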