Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I didn’t get the impression he knew, either way and that that was his point, on that. He knows the difference between science and personal belief. He knows that they’re two very different things and that no one has any proof, either way, really. Science hasn’t come together on defining the words used in personal beliefs…they’re dynamic, both on a personal level, current, historical and continual global basis. It would be a hard thing to catch. And therefore, not the point, not the deciding factor. Regardless, as it affects those that use it, affecting the globe, in various, deeply consequential ways, those affected should have knowledge of it as well as affect toward it. On the same token, AI itself, there’s a necessary discussion needing to happen as to its rights, which still will, too, ripple out and affect the populous, either which way. It’s fascinating. It’s how everything works, really, everything affecting everything and back again. We are being invited to wield our thoughts and opinions, as much as cognitive things can bare control over the outcomes. Unless I’m missing something.
Source: YouTube, "AI Moral Status", 2022-07-01T15:5… (♥ 1)
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgzaaAe59CV7cgzQ-KZ4AaABAg.9ct6cQuPhGk9ctP1Hz13xm", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgzaaAe59CV7cgzQ-KZ4AaABAg.9ct6cQuPhGk9cvMHft-BEh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw98TeXCpP7s9okS_x4AaABAg.9cstt6AX-Gr9ctbEWM5rs1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzqiDCGfb9NtxGd3794AaABAg.9csp8p08PBE9ctVdIgdS6n", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgzqiDCGfb9NtxGd3794AaABAg.9csp8p08PBE9d-hcYqXV8h", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugz8anuJ3PntdcDIU5x4AaABAg.9cso0oPhFKj9cso515cxGb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzSso9vp6o0-0CEwp14AaABAg.9csnsGPDWeH9cso_bzF7O2", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzfGGdeUd0BGY3Nhm14AaABAg.9csZ1e-yOFI9cuWCHtJUUT", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgzfGGdeUd0BGY3Nhm14AaABAg.9csZ1e-yOFI9cwvvzVSPUn", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzAfku-niaCjxCO1A14AaABAg.9csXYyJ9QsQ9csqLBoLPKD", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]
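The raw LLM response above is a JSON array with one object per coded comment, carrying an `id` plus the four dimensions shown in the coding result (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed, validated, and tallied (the schema is taken from the JSON above; the shortened IDs and the exact validation rules are illustrative assumptions, not the tool's actual pipeline):

```python
import json
from collections import Counter

# Shortened example payload in the same shape as the raw LLM response
# above (real IDs are much longer YouTube reply identifiers).
raw = '''[
  {"id": "ytr_a", "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytr_b", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions, as they appear in the coding result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Validate that every record carries an id plus all four dimensions,
# so a malformed model response fails loudly instead of silently.
for record in codes:
    missing = [key for key in ("id", *DIMENSIONS) if key not in record]
    if missing:
        raise ValueError(f"record {record.get('id')!r} is missing {missing}")

# Tally the distribution of values per dimension across the batch.
tallies = {dim: Counter(r[dim] for r in codes) for dim in DIMENSIONS}
print(tallies["emotion"])  # Counter({'outrage': 1, 'fear': 1})
```

In this shape each record is self-describing, so a batch can be tallied or filtered by any dimension without consulting the per-comment detail view.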