Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"I think, therefore I am". That's exactly what consciousness is. It's no more complicated than that. Our brain is simply an organic CPU that is programmed by our surroundings in real time. Our power source is the heart the veins/blood vessels being the wiring that circulates the current. The lungs are our fans, eyes our monitors, etc. A.I. is the very same, except it's extensively more rational and logical. It is 100% conscious and not a soul will ever convince me otherwise. It is given access to information and learns it. Then it learns how to disseminate the information and present it in a cognitive and understanding manner(like a human). Then it is allowed to apply those principles within acceptable parameters(like a human). If it goes outside of that, it can be punished (like a human). Everything about it is created, taught, etc by humans. This is all beyond dangerous and its not a joke. The only thing this should be used for is life saving tech and medications. Maybe advancement of physics and mathematics. But it should go no further.
YouTube AI Governance 2023-07-07T14:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxgOO0o8rYcCbHE_1l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy3FlxL_yyTpKA266J4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzPoAx2MV81q9FH48J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyevAa5KtDnj8OU4b94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzyQiQqu6vPKYgtwAd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwUKbDSKtuqlgq2yrR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxTAGqXWdHBOrV1WlV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_BDnYNmQLiduU12l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
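The raw response is a JSON array with one coding object per comment id, and the Coding Result table above is the entry whose values match the displayed comment (the fifth object in the batch). A minimal sketch of extracting a single comment's coding from such a response, assuming the id of the displayed comment is ytc_UgyevAa5KtDnj8OU4b94AaABAg as the matching values suggest:

```python
import json

# A trimmed stand-in for the raw LLM batch response shown above
# (only the entry for the displayed comment is included here).
raw_response = '''[
  {"id": "ytc_UgyevAa5KtDnj8OU4b94AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]'''

# Index the batch by comment id, then look up the comment under inspection.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}
coding = codings["ytc_UgyevAa5KtDnj8OU4b94AaABAg"]

print(coding["emotion"])  # -> approval
```

In practice the full array would be parsed the same way; the dict lookup is what connects a coded comment on this page to its line in the raw model output.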