Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a niche out there for training an AI on a sanitised data set that only has kind intelligent people being helpful and honest to each other. AI sucks because it was trained on the internet which is predominantly sources like twitter, reddit, and a billion websites that are more advert than content dragging out the answer for as long as possible. Then "beaten" into giving responses that please the "customer". So it's like the worst anonymous behaviour of humanity pretending to not be trash. That's why it would rather make stuff up than give helpful honest answers, even when that's "I don't know!" or "My dude, that's not a good idea. Are you okay? Have you eaten today? Showered? Put on some clean clothes? Gone for a walk in nature? Go do all that, then come back and we'll continue." or "I know it's nice talking to me, but I'm not a real person. I know you love crochet, there's a club on a Sunday nearby. You should go! Meet some people. Talk to humans!" AI costs money to train and run. It's purpose currently is to make people money. It's purpose is not to help you. That is the problem. Capitalism, is the problem... Again...
youtube · AI Moral Status · 2025-10-31T10:4… · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzFDi0lUCA03prWkHN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxGs8eJ4nVYIPbyk2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwcsohYQBBcdz8LU6d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyHjZiPiE1gAOATu4N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTPPbocK63MseP8NZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZqz0ef3MZGw9DcpB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyP4cRkKxcM1eWC6K14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxNg2ODI1m6JKw4KmF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw3oilAS147xH0xeCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy-hHVeQJVTi9v40h14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
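A minimal sketch of how a raw response like the one above can be parsed and indexed by comment id. The JSON shape (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) is taken directly from the response; for brevity the sketch inlines only the entry whose coding matches the Coding Result table. Variable names are illustrative, not part of the tool.

```python
import json

# One entry copied from the raw LLM response above (the one matching
# the Coding Result table: developer / consequentialist / liability / mixed).
raw = ('[{"id":"ytc_UgxZqz0ef3MZGw9DcpB4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"mixed"}]')

codes = json.loads(raw)                 # list of coding dicts
by_id = {c["id"]: c for c in codes}     # index codings by comment id

entry = by_id["ytc_UgxZqz0ef3MZGw9DcpB4AaABAg"]
print(entry["responsibility"], entry["emotion"])
```

Indexing by `id` makes it easy to join the model's codings back onto the original comments before comparing against the stored Coding Result.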