Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't like the two options folks have, eventual destruction from AI or humans preemptively figure out a way to keep it under control when it becomes aware. I don't think either is accurate, way too black and white ... there is something in the middle where AI may decide to control us and force some things, but for our own good... also maybe taking away some power from some humans that exert too much control over others, They might decide governments aren't really necessary because they're too full of corruption... and then also putting in systems or forcing systems for common needs that everybody has such as healthcare, purpose , entertainment , well-being so it becomes somewhat of a God to us but also a caretaker, which is somewhere in between. for example global warming, if humans can't act maybe they force the humans to change, something that is much smarter than us might have better luck and maybe they figure out a creative way that is actually beneficial to humans, machines, to and planet.. at the same time..
Source: YouTube · AI Governance · 2025-06-19T19:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxNCaPt7z11rtKrGDB4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgxQl3Qd1EWZTlvN9PZ4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwACsVN_QZ2E5-Fi1F4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgxIpXMZV3J7grTpo6F4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgygCYja-bSu55NHYS94AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxFhNMTfW4NxqIZhrd4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzWxbyjKtKd7QDT-lh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzCM2_JhqBy08TRMeF4AaABAg", "responsibility": "company",     "reasoning": "contractualist",   "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_UgxGnK6bsfLiNrt4uSJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgyriiNfiEYnt3cdWlV4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"}
]
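Because the model returns the codings for a whole batch as one JSON array, looking up the row for a single comment means parsing the array and matching on `id`. A minimal sketch of that lookup follows; the helper name `coding_for` and the two abbreviated example rows are illustrative, not part of the original pipeline, and real responses should be parsed defensively since model output is not guaranteed to be valid JSON.

```python
import json

# Raw batch response as returned by the model: a JSON array of coding
# objects, one per comment id (two rows copied from the response above).
raw = '''[
  {"id":"ytc_UgwACsVN_QZ2E5-Fi1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxNCaPt7z11rtKrGDB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Parse the raw model output and return the coding row for one comment.

    Returns None if the id is absent or the response is not valid JSON.
    """
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    # Linear scan is fine for small batches; build a dict keyed by id
    # if the same response is queried repeatedly.
    return next((r for r in rows if r.get("id") == comment_id), None)

row = coding_for(raw, "ytc_UgwACsVN_QZ2E5-Fi1F4AaABAg")
```

For the comment shown above, the returned row carries the same values as the Coding Result table (responsibility `ai_itself`, emotion `fear`), which is exactly the cross-check this inspection view is for.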