Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
⁠@parodybbb The amount of data is irrelevant because you’re operating from a very common but also very false premise: our intelligence is merely the aggregate of perception and patterns. And it’s not. Not even close. Intelligence is from a confluence of factors, not the least of which for a script would be language as a faculty; i.e. similar to vision or sense of smell, we have innate, *physical* capabilities for language. This is why you can give AI a written language that follows none of universal grammar’s fundamentals, and it will learn a nonexistent language with the same facility as it does a real one. In short, it has no power of discernment. It also has no real instinct or conscience, but that’s a more complicated conversation. Most importantly, any argument that AI will become *more intelligent* than human beings presupposes that we have a real understanding of the ways our brain works in the first place, which any neuroscientist will tell you is far from true. It’s very impressive programming and a really cool toy, but be careful not to divert attention away from the real dangers of AI by offering up unscientific marketing speak parroted by the tech industry, inadvertent or otherwise.
youtube AI Jobs 2023-06-02T08:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzIjRaxhYVKywUuAs94AaABAg.9syNe8s-RYQA8ejomTglAQ","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugw4niYGNRjtDFwdKCV4AaABAg.9ssrxsSL8ub9st8TT96WKY","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugyysj8kflawLsE318N4AaABAg.9s3nD-RWh-L9s8PBxYLUcU","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwZZP_9e3P-uMH5OS94AaABAg.9rLxpqd1TXz9rQrH5U-vRR","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyyTFxgnq41j1zsZlx4AaABAg.9qqI2TW4rbA9sLv_JfLmbD","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytr_UgyyTFxgnq41j1zsZlx4AaABAg.9qqI2TW4rbA9sLvlnq02J2","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxAafv6RAvdU4t3JeN4AaABAg.9qG71aNN9QI9qNgz7DawYH","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxAafv6RAvdU4t3JeN4AaABAg.9qG71aNN9QI9qSGibqHaBB","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxNQ-1qZPckkYp7nw14AaABAg.9q3KYyxkdxN9qJqD-W0yDQ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwUB7BMwrwPfMrCBb54AaABAg.9pt7UC-PBES9pxK38Zp0Ou","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
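A response in this shape can be turned back into per-comment codings by parsing the JSON array and indexing it by `id`. Below is a minimal sketch of such a parser. The allowed value sets per dimension are an assumption inferred from the values visible in this dump, not a confirmed codebook, so treat `ALLOWED` as illustrative.

```python
import json

# Assumed vocabulary per coding dimension, inferred from the values
# seen in the raw response above (not a confirmed schema).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation",
                "approval"},
}

def parse_codings(raw: str) -> dict:
    """Index codings by comment id, rejecting out-of-vocabulary values."""
    codings = {}
    for item in json.loads(raw):
        item_id = item.pop("id")
        for dim, value in item.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{item_id}: unexpected {dim}={value!r}")
        codings[item_id] = item
    return codings

# Hypothetical single-item response for illustration.
raw = ('[{"id":"ytr_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_codings(raw)["ytr_x"]["emotion"])  # indifference
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute downstream tallies.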