Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People need to stop being so concerned about it, because there is no "AI", you're all brainwashed. It's called "Predictive Algorithm"! And the algorithm will never tell you anything else other than what has been repeated throughout multiple points of its training data. For instance, if we feed the algorithm three books about different married couples where in one of the books the married couple breaks up, but in the other two the couples live happily ever after. Now, if you'd ask the algorithm based on this data if human couples are more likely to live happily ever after or not... The answer will be Yes because in 2 out of 3 scenarios this was true. This is how it all works. Now that you know this, you're probably also familiar with some of the odd outputs some "AI's" have spit out in various apps and conversations... This happens because the algorithm has been fed our own history, and so when we converse with it, we'll get the answers based on our own past which obviously includes our negative past (according to modern standards) where for instance a 3rd gender wasn't socially accepted. You can surely modify the output to meet the modern standards, but for what...? Wouldn't it have been better then to not feed it our past instead? A good language model isn't a "large" language model, an efficient and finetuned model is a good model but this hasn't been the priority, we just kept feeding it inaccurate and conflicting data which is why current models are so unpredictable and more often spits out incorrect answers than the correct ones. Not to mention that the SJW's are having their moderations successfully implemented in the latest models because the algorithm is filtered to more likely provide "nice and happy" responses which in most cases is NOT the correct answer. So if healthcare is ever to use this "filtered" version, you can already bet that a lot of people are gonna díe because of wrong (filtered) responses, which is the only real danger.
youtube AI Governance 2023-06-27T00:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgylkgUNwv6DUNAiyYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3GU2tEM9TIY679aB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzXFrKmG_ToxwhT-m14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx2xn19G9n3SYR_ub94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugxck4zZhWBhAeCMUaR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw-YV67KYkkryplsmR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6SqTSLhqAOrEXs_t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxya9-Tr7wBWfYxAPV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxSqmwtQM__W3C0eZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwNJgSX0nU42Oqswqd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
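A raw response like the one above can be checked before the codes are accepted into the dataset. Below is a minimal sketch in Python of that validation step. The allowed label sets (`ALLOWED`) are an assumption inferred only from the values visible on this page; the real codebook may define more categories, and `validate_codings` is a hypothetical helper name, not part of any existing pipeline.

```python
import json

# Hypothetical label sets inferred from the values seen in this page's
# coding results; the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "government", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose every
    dimension carries an in-codebook label."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Illustrative input with shortened, made-up IDs: the second record
# carries an out-of-codebook "alien" label and should be dropped.
raw = (
    '[{"id":"ytc_1","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"ytc_2","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
valid = validate_codings(raw)
print(len(valid))  # only the first record survives validation
```

Rejected records could instead be logged and re-queued for re-coding rather than silently dropped, depending on how the audit workflow handles disagreements.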