Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing about AI is that it's made up from "training" it with chosen inputs. When it scans content to learn it is only looking for patterns that are predetermined. As Blake states, very few people have control of those inputs. We're already seeing these chatbots in comment sections sparking conversation or debate with some type of content that was input by someone. I mean to me it sounds like the next level of propaganda, but with any or all blame directed to a machine while the owners will claim "we didn't make it do that. we have policies in place to prevent that". It's effign insane to me. You think the internet is bad now just wait. It will become an all consuming monster with zero accountability, yet holding any individual who indulges to the full brunt of consequence for their words or usage of the platforms given.
youtube AI Moral Status 2022-06-30T23:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugyr87f6i5M1TBk0xLx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwcDA_q54N9l_7bDL94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxkJZukzmnufVqzLD54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxDeYjr5jYFgPKYwHN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwS7ZS2JQZzRga7xh54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
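To make the link between the raw response and the Coding Result above concrete, here is a minimal Python sketch of how such an array can be parsed and the coding for one comment id looked up. The `code_for` helper is hypothetical (not part of the actual pipeline); the JSON is the array shown above, and the id of the comment on this page corresponds to its third entry.

```python
import json

# Raw model output from this section: a JSON array with one coding object per comment.
raw = '''[
 {"id":"ytc_Ugyr87f6i5M1TBk0xLx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwcDA_q54N9l_7bDL94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxkJZukzmnufVqzLD54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxDeYjr5jYFgPKYwHN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwS7ZS2JQZzRga7xh54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

def code_for(comment_id: str, raw_response: str):
    """Return the coding dict (minus the id) for one comment, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    return None

# The comment shown above maps to the third entry in the array.
print(code_for("ytc_UgxkJZukzmnufVqzLD54AaABAg", raw))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

Looking the coding up by id rather than by position keeps the result correct even if the model returns the entries in a different order.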