Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What bothers me is the absence of some obvious solutions. 1) First, AI grows from access to data: turn off the access. We cannot stop what is already in the data ether, but we don't have to continue giving machines unfettered access to information about people, and we personally do not have to keep giving our personal information to anything online. We can step away from social media where the owners have no compassion or concern for humanity, e.g. X and Meta. 2) We have to teach ethics and morality; we have stopped. Moral beliefs are important to have in this discussion. For example, the CEO in the early stages of developing AI says "How could I have known?" Well, if he had been taught to believe in ethics and morality and to apply ethics and morals to all his actions, he WOULD have thought about the implications and not just brushed them off. 3) Finally, we do have to remove from office those who are irresponsible. America is leading the world right now in immoral, selfish, and greedy government leadership. At some point, the role of the leading capitalist nation in all of this has to be addressed, specifically Trump's actions with Musk.
youtube AI Governance 2025-08-24T01:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxMudJxZjeaMgb5-p54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzuqB2LCE9Kn7xIzHF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugyc7eju22zKvfe_As54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugye36hS-612PeCouzN4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwYySQO5BVViBP3d6d4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwsgHCCAyQ0tzLm_NV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxQFVMoMi7LYCZuhvZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwM1gm4nmLDb__sMql4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxMZwVy5-ibmNLpNJh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyC1XiFqwAn048LzLd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "none"}
]
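The Coding Result table above appears to come from matching the comment's id against the entries in this raw JSON array. A minimal Python sketch of that lookup, using the actual entry from the response (the id and field names are taken from the data above; the parsing approach itself is an assumption, not the tool's actual implementation):

```python
import json

# Raw model output: a JSON array of per-comment codes.
# Abridged here to the single entry that matches the Coding Result table.
raw = '''[
  {"id": "ytc_Ugyc7eju22zKvfe_As54AaABAg",
   "responsibility": "user",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Index the parsed entries by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Pull the code assigned to this page's comment.
code = codes["ytc_Ugyc7eju22zKvfe_As54AaABAg"]
print(code["responsibility"], code["policy"], code["emotion"])
# → user regulate fear
```

If the model's output were not valid JSON (a common failure mode for raw LLM responses), `json.loads` would raise `json.JSONDecodeError`, which is the natural place to hang a retry or an "unclear" fallback code.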