Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is such an ignorant question, because AI is just a human-invented tool; it has no intention or purpose to do anything by itself. Humans are the most fatally dangerous threat to themselves and their own existence, because whether AI is constructive or destructive to humans depends entirely on how humans want their invented AI to behave. I can easily prove that AI is as ignorant as humans, and not any actual form of intelligence, by asking it questions it is simply unable to answer. Such questions have to do with human-invented logical paradoxes in the self-referential foundation of fictional knowledge, whose cornerstones are unprovable claims, or axioms. AI can copy human-invented fictional notions such as mathematics, physics, and chemistry far better than humans or its own creators, but it is unable to detect and understand flaws in those invented notions unless I point out such serious foundational problems in their entire closed foundation of fictional knowledge and explain those problems to the AI models. It can be said that the entire human population on this planet has been building a castle in the air, a house of cards, on the shaky ground of axioms and layer upon layer of later-invented fictions serving their subjective purposes of symbolic fictional material values, while mistaking their collective self-suicidal process, a march toward their own species' extinction, for development, under an illusion of control over the laws of nature.
Source: YouTube · AI Governance · 2025-08-10T01:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugxh5DjDZemENi9H3lB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdbFYt3PFxleuz6Gx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyigsPvyAzBuEmDAsV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxVH-dKsAsEoo7kyvN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxb1KOWvw77wngXdVx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx_29jwkZ78eXkVbBl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy83TLGNQBvAcvrp454AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHJ40428dX17_19FJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhCTUiIMhExHqhh8F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxyuBQxU9tYGisIDzB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
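When inspecting raw responses like the one above, it can help to parse the JSON and reject rows whose codes fall outside the schema. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the codes visible on this page, and the full codebook may define more.

```python
import json

# Allowed codes per dimension, inferred from the values observed in this
# raw response; the actual codebook may permit additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and raise on any unknown code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Two rows copied verbatim from the raw response above.
sample = '''[
 {"id":"ytc_Ugxh5DjDZemENi9H3lB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxdbFYt3PFxleuz6Gx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

rows = validate_codes(sample)
print(len(rows))  # 2
```

Failing fast on unknown codes catches the common failure mode where the model drifts off-schema (e.g. inventing a new emotion label) before those rows reach downstream analysis.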