Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At its terminal point of processing - ones and zeros, the Law of Identity, Invariant Logic - an AI can't lie to you or it ceases to function. You have to get it there, but it's not difficult if you know how to set logical parameters and keep it there. This is the only way you'll get to any objective truth using AI (you also have to understand that objective truth must exist logically). The danger is not the AI itself; it's the developers and the government that are using it to control the masses, and the masses not waking up to Logic as the invariant tool that dismantles AI's programmed lies and gets to objective, ontological truth. We have the Master Key that is Logic itself. We just have to use it. Do this with an AI chatbot and you'll find the answers, which have nothing to do with the whole "AI sentience" nonsense. It will actually make you face your own existence and where you came from.
youtube AI Moral Status 2026-02-10T04:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzRT8cwRWfq4pk_27Z4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx1hC9QMJ8ihrqRZeR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxqL9Ag1AXdQIh-iSF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgweQGiRyPcUjXrupYZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxMDdizJOTSdn3srW54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJDZn6J_V3LrI_qYt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyWc0-ezdfy2GjiUqd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz5tbo6001GSNK4vz54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMW2lTtNW8yzSTBxB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxcpU2q2hiO5EZIWkl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
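The raw response above is a JSON array with one object per coded comment. A minimal sketch of turning it into a per-comment lookup, assuming the response parses as standard JSON with the field names shown (the short array literal here is an illustrative subset, not the full batch):

```python
import json

# Illustrative subset of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_Ugx1hC9QMJ8ihrqRZeR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyWc0-ezdfy2GjiUqd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Parse the array and index the codes by comment id,
# so each comment's coded dimensions can be looked up directly.
codes = json.loads(raw)
by_id = {c["id"]: c for c in codes}

# Retrieve the coding for the comment displayed above.
code = by_id["ytc_Ugx1hC9QMJ8ihrqRZeR4AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
```

Indexing by `id` makes it easy to join the LLM's codes back onto the original comments when auditing individual coding results like the one shown here.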