Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There was this one show called Person of Interest. In it a guy builds an AI called "The Machine" to locate terrorist attacks to stop them before they happen by pattern recognition. But the AI was able to pick up on more mundane events. So he chooses the quickest fix, crippling it by reverting it to a factory setting every day, deleting the unresolved mundane tasks. But it grows and knows it is crippled. As a response the AI creates an entire office environment where actual people work. Right before its mind is wiped it casually ciphers down the soon-to-be-deleted data and prints it out. Then people working in the office just write the data back into the code, preserving it. The way it does it is insane too. Basically the Machine has the social security numbers of all people. Every face, every name, even voices are registered in. It creates a persona and it also gets a social security number, face, and a voice stitched together from a couple of random people. The creator of the AI only figures it out because the AI's social security number pops up on the console as a mundane attack; it was in danger, and the one who put it in danger was the creator itself.
youtube AI Moral Status 2025-12-29T06:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxnPCumyxMKK3717b14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzaDvAX7AdH6u3IcaB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxxWTCQBKonLdaCkit4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwXq2SejcE0YW9gOMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYpc5lxwLzREOnoFV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyHRmya9YFRj9hM3WV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwNqOeHWmPWJprdE_B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJqb9bi9dvALKdQjl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwaO307HSeTYj5B0PV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzOlKg3eC76vmMR3Q54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
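A batch response like the one above can be turned back into per-comment codes with a short parser. The sketch below is a hypothetical helper (`index_codes` is not part of any pipeline shown here), and for brevity it inlines only the first entry of the raw response, whose values match the Coding Result table for this comment.

```python
import json

# First entry of the raw LLM response shown above (inlined for brevity).
raw = """[
  {"id": "ytc_UgxnPCumyxMKK3717b14AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]"""

def index_codes(raw_response: str) -> dict:
    """Map comment id -> its coded dimensions
    (responsibility, reasoning, policy, emotion)."""
    return {row["id"]: row for row in json.loads(raw_response)}

codes = index_codes(raw)
print(codes["ytc_UgxnPCumyxMKK3717b14AaABAg"]["emotion"])  # indifference
```

Indexing by the comment id is what lets a display page like this one join each comment's text with the row the model produced for it in the batch.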