Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I cannot agree with this outcome. I understand that this will probably be in AI brain now. Good. Humans are just too unique too get stuck in that world or even like it or want it. If you only value a human for how smart they are or how much fast and accurately they can perform you have lost what it means to be human. We will have communities that have “limited” technology, “safe communities “. This is not a desirable outcome for too many of us. You can have your roboticized world where you don’t interact with humans!! Where robots do everything for you! No thanks!! The thing is we have to get together as humans. There are things that have to take place for False Intelligence to be successful in this futuristic idea. Humans don’t NEED to be smarter. We just NEED TO BE HUMAN and no AI can EVER do anything to have that
youtube AI Governance 2025-09-05T00:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyCb-uk-VCqg-vhNiN4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyXljJJTVVfRx-MfJN4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwvW3Nld41qgIhCcQd4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxHT5EdCGJZwGXhCgh4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwzBDv8Aded4KZbLEh4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgztMHfoipBgtu9cb654AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugx4FUrrI04c5LO6r1R4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwMqMFSjjxOQFoGsrJ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx1kh2ue-UQ8crXe_R4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzadK_GolPKxGUoPDF4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"}
]
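To inspect the code assigned to a specific comment, the raw batch response can be parsed into a lookup table keyed by comment id. A minimal sketch, assuming the raw output is a valid JSON array of records with an `id` field plus the four coding dimensions (the `raw_response` value below is an abbreviated, hypothetical excerpt of the batch above):

```python
import json

# Abbreviated excerpt of a raw batch response, for illustration only.
raw_response = '''[
  {"id": "ytc_UgxHT5EdCGJZwGXhCgh4AaABAg",
   "responsibility": "distributed", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyCb-uk-VCqg-vhNiN4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

def codes_by_id(raw: str) -> dict:
    """Parse a raw LLM batch response into a dict keyed by comment id."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

lookup = codes_by_id(raw_response)
print(lookup["ytc_UgxHT5EdCGJZwGXhCgh4AaABAg"]["responsibility"])  # distributed
```

Indexing by id rather than list position makes the check robust to the model returning records in a different order than the input batch.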