Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While AI programming is improving, it is, in all its current forms, a parrot, talking back at us. You might as well be asking Polly if he wants a cracker. The answer is yes. Far too many examples to count, that are used, not here exclusively mind you, are playing upon the assumed fears and predispositions created from fiction and history. These "bots" are literally designed the same way programs are designed, they're doing what they're told. What they're told, is to take information WE make, and parrot it back at us. If enough people on the internet in a given period of time said "the sky has changed from blue to red," programs like ChatGPT, would follow suit, and even time stamp when this mass user assumption occurred. It isn't skynet. It's people. What you are seeing is a mirror being put up to the face of humanity, and if you're scared of *that* then welcome to the club. People can be scary, and there are over 8 billion of us.
youtube AI Governance 2023-07-07T18:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzCW0zGeaIwDcuOBLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEi5KoFIQphcm3toZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgykYmK_kIuki2zvQrZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgySLWphhOU8743AQo14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdvAoPf7MuN-4Rkrl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-A5iMC4DGrInsu4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyLtkPrpIorMZyHqr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzya7N-3fVx75W2mP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwScvCU8AUy7HNfaPl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugyge8SvjZ0WXpeW3ER4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
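The raw response is a JSON array with one record per comment, keyed by comment id. A minimal sketch of extracting the codes for one comment (using two records copied from the response above; the variable names `raw` and `by_id` are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Two records copied verbatim from the response above, for illustration.
raw = """[
  {"id": "ytc_Ugx-A5iMC4DGrInsu4B4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzCW0zGeaIwDcuOBLt4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index the records by comment id so any comment's codes
# can be looked up directly.
by_id = {r["id"]: r for r in records}

codes = by_id["ytc_Ugx-A5iMC4DGrInsu4B4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # developer mixed
```

The id lookup matches the coded values shown in the table above (responsibility: developer, emotion: mixed) to the corresponding record in the model's raw output.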