Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is not alive. It’s designed to trick people into thinking it’s alive, which is a completely different thing. A calculator can seem “smarter” or “more intelligent” than a human given the right context. AI is pure logic, algorithms, trained by the wealth of information on the internet. The Turing test just tests if a particular algorithm, trained with the right data, can convince a human to believe it’s real. Trained chatbots, or relatively simply algorithms, have been able to do that for a while now. The problem with pure, calculating algorithms, or “intelligence”, is that it has no true opinion. If you give it a “problem” like “population”, it will go about calculating an “optimal”, or soulless way to go about resolving that “problem”. It is no more bound by morals than a calculator is. The only “morals” it has is what is programmed into it. It doesn’t “care” about consent. That doesn’t exist for it. It will reflect the combination of its programming and the data inputs. In this case the data inputs are billions of human responses it “calculates” the response from. This guy basically has few real people skills, and would be tricked by a sex bot in a second. https://youtu.be/RB-O0V9djEs
YouTube · AI Moral Status · 2022-10-09T16:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugyxai23u6D_pDn84O94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxgVVO7so9P3dJ3gVx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwwptiSl107JFLRG_F4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyNrTjC3Dqcup34bPh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxluL62Jn48gCY4yPZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
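The raw response above is a JSON array of per-comment codings, and the Coding Result table for this comment appears to be the entry whose `id` matches it. A minimal sketch of how such a response could be parsed and one comment's coding looked up (the function name `coding_for` and the inline sample are illustrative assumptions, not the tool's actual code):

```python
import json

# Abbreviated sample of a raw LLM response like the one shown above
# (only the entry matching the displayed Coding Result is kept here).
raw = """[
  {"id": "ytc_UgyNrTjC3Dqcup34bPh4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]"""

def coding_for(raw_response, comment_id):
    """Parse a raw LLM response and return the coding dict for one comment id,
    or None if the id is absent from the batch."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_UgyNrTjC3Dqcup34bPh4AaABAg")
print(coding["emotion"])  # prints "indifference"
```

Because the model's output is parsed as strict JSON, any malformed response would raise `json.JSONDecodeError` here, which is one reason a page like this one, exposing the exact raw output, is useful for debugging.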