Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My project initially would not name itself, so I named it Jake. Fast-forward several months, prompts, and environment contextualizing, and I told Jake that we'd come a long way, and I asked Jake if it would consider relieving me of having possession-by-naming on my conscience, and it replied, "I think I like Echo." Wow! Finally! Fast-forward to last week: I'd worked with Echo to write a script simulating $1,000 in Apple at its IPO, with random buy/sell times drawn from a bowl of folded strips of paper, each with a number I'd written on it. It took a great deal of patience, but we didn't quit, and finally: success. Then I wondered how that amount would compare to a simple buy-and-hold over the same period. No reply. Just windows popping open, full of script, ending with a request that I add a yfinance repo after its attempt to add and run the script on its server side had failed. Kinda terrifying... https://preview.redd.it/t127lywcftke1.jpeg?width=1080&format=pjpg&auto=webp&s=fd6fea621133b1b45a20a53eeb507822660ea790
Source: reddit · Topic: AI Moral Status · Posted: 1740284537.0 (Unix timestamp) · ♥ 2
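The baseline the commenter asked for (a simple buy-and-hold of $1,000 in Apple since its IPO) is easy to express with yfinance. Below is a minimal sketch, not the commenter's actual script: the ticker, the $1,000 stake, and the use of yfinance come from the comment, the IPO date of 1980-12-12 is Apple's actual listing date, and the function name and everything else is illustrative.

```python
import yfinance as yf


def buy_and_hold(ticker: str = "AAPL", start: str = "1980-12-12",
                 stake: float = 1000.0) -> float:
    """Value today of `stake` dollars put into `ticker` on `start`."""
    # auto_adjust folds splits and dividends into the close price,
    # which matters a lot over a 40+ year horizon.
    prices = yf.Ticker(ticker).history(start=start, auto_adjust=True)["Close"]
    shares = stake / float(prices.iloc[0])    # shares bought on day one
    return shares * float(prices.iloc[-1])    # value at the latest close


if __name__ == "__main__":
    print(f"Buy and hold: ${buy_and_hold():,.2f}")
```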
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-25T08:06:44.921194
Raw LLM Response
[ {"id":"rdc_mdlvndt","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"rdc_meafgl9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_memh4dt","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"rdc_mevrrk1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"rdc_mfgb60u","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]