Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm an expert in this space: natural language processing and natural language generation. Long, long, long ago, well before OpenAI existed, there were very simplistic auto-complete mobile texting assistants that were quite good at guessing the next word from the preceding 2-5 words, with some variation. The first thing I noticed when experimenting with the outputs was how hilarious they were. It was supremely fun for my friends and me to have 'conversations' composed entirely of auto-generated words. Now, we did selectively send the 'best' ones to each other... but it was great. My point being, perceived humor is in no way, shape, or form an indicator of sentience. This guy is completely incorrect in his claims.
youtube AI Moral Status 2022-07-03T23:2… ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwwallTXXJtPrZ8W7p4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxBikHDmjNoTzEtA3h4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugw6agBSx3DMzd1ZR9R4AaABAg", "responsibility": "user",       "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugyowe00iKDmwdIohnB4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyP_datfP5SxS8wuSp4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"}
]
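The batch response is plain JSON, so it can be parsed with standard tooling. A minimal sketch, assuming you only need to look up one comment's coded dimensions by id (the lookup-by-id step here is illustrative, not necessarily how the coding pipeline consumes the response):

```python
import json

# Raw LLM batch response: one JSON array with one object per coded comment.
# The ids and values below are copied from the response shown above.
raw = """[
 {"id":"ytc_UgwwallTXXJtPrZ8W7p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
 {"id":"ytc_UgxBikHDmjNoTzEtA3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw6agBSx3DMzd1ZR9R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugyowe00iKDmwdIohnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyP_datfP5SxS8wuSp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

# Index the batch by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# The comment shown above was coded responsibility=none, reasoning=unclear,
# policy=none, emotion=indifference — the fourth entry in the batch.
row = codings["ytc_Ugyowe00iKDmwdIohnB4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

Indexing by id makes the coded result for any single comment in the batch retrievable in one dictionary lookup, which matches the per-comment "Coding Result" view above.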