Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i am working on artificial consciousness and i can say just one thing .. both people in this video don't know what is AI exactly .. AI dont think and can never think, its just a prediction machine based on enormous data.. possibility of AI can become alive is 0 .. AI is not what 99.999% people think.. AI is based on static structure and static information, its not updating itself in real time. The dangers these people are talking about only happen once AI can change itself fundamentally. I am working on a project that is working similar like human brain .. But there is a inverse relationship between AI and AC . Ai can have huge data where as AC can learn from experience. AI cant take its own decisions, AC can do but within limited domain ( based on its knowledge ) . Ai can have virtually unlimited knowledge, but AC Can't have unlimited knowledge. Ai can never perfect a particular domain because of dynamic world. AC can get perfect in real time because it can adopt dynamic world fundamentally. AI work on billions of parameters and tomorrows AI an work on Trillions of Parameters. AC work on unlimited parameters like human brain :) .. Ai can never feel its alive , AC can feel its alive :D .. I have tested the AC system, its primitive in nature but sooner or later , it will be OS of All robotics , AI can't integrate with robotics . AI consumes billions of dollars of resources, massive electricity , massive calculations. AC consume hardly 500w of electricity and in later stages hardly consume 100w .. I am not native english speaker.
youtube AI Governance 2025-12-14T15:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyhfLwoZdDegIiXZmt4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzac2vfqBChAz5Z22d4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy6qIocFihJ7o4PreB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw2-hAvZejL7ZaPP4V4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyYKeDir91jGBS_9pl4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwSfYU_z3Phxy_3EVN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxC_FpFCVoRln9uYdV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",     "emotion": "outrage"},
  {"id": "ytc_UgyRiC3XiXGyrumATnF4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzkRZK1XEk-UfueSG54AaABAg", "responsibility": "user",      "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwMZmErOeMygkP5tRZ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "approval"}
]
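The raw response is a JSON array with one object per coded comment, keyed by comment `id` with the four coding dimensions shown in the table above. A minimal Python sketch of how such a response could be parsed and indexed for lookup (the function name, the validation rule, and the two-record sample are illustrative, not the actual pipeline code):

```python
import json

# Two records copied from the raw response above, used as sample input.
RAW_RESPONSE = """[
 {"id":"ytc_UgyhfLwoZdDegIiXZmt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugzac2vfqBChAz5Z22d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]"""

# The four coding dimensions plus the comment id, per the table above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment id,
    silently dropping any record missing a required dimension."""
    records = json.loads(raw)
    return {
        r["id"]: {k: r[k] for k in REQUIRED_KEYS - {"id"}}
        for r in records
        if REQUIRED_KEYS <= r.keys()
    }

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugzac2vfqBChAz5Z22d4AaABAg"]["reasoning"])  # consequentialist
```

Indexing by `id` lets the display layer above pull a single comment's coding (as in the Dimension/Value table) without re-scanning the whole batch response.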