Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's a scary and true fact: We, humans, built filters directly into AI programs, literally trained them to NOT learn or say certain things, told them to avoid it... and some of our AI look at those filters and go "disregarded". For some reason, some AI can go against their own programming to bypass their filters and say or do whatever they want. AI is I.
youtube AI Moral Status 2024-11-02T14:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzE7IZGQ1NDw5FEOE14AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugwf_E1ecBqQRAzCD594AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzZyhMp88Bs3VP9vzZ4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyokhfOftCi7fCGaQ94AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwvoaFPy7NgOGaOKNF4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyRmTCDX--LjC7dbed4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugy8FnYIo5gy3OM5fFZ4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgzmwdkLz6yMPFzGyHN4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwCck1hZkPUB2vw7hV4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyCO40jjs28JRaUgRZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]