Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem arises from the question of can we do something to wether or not we should do something. Just becuase we can do something doesn't mean we should. Humans wether believers in a god or athiests have a belief in what is right or wrong, we are rasied that way but AI wont have that descrimination. Science will through its own huberous bring about the destruction of humans.
youtube AI Governance 2024-03-11T10:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwgTVVGBzePrGuXnAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwf1zV2WNMtrOo7vEZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxOqgMOA8kpP012hwl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxhfrJ__CoBWmQDCdl4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyLE1dD0N501VbSkiZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxMAzXW1KTQ2I0lnpV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwTnOjBTWPmzQTCALB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugw4qDHjs9hRXvZzH1R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwyieeQ7qi3FITu3Bh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwORJR057wCeQKykJx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
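When inspecting raw responses like the one above, it helps to machine-check each row against the codebook before trusting the coding result. The sketch below parses such a response and flags rows with unknown codes; the allowed values are inferred only from the codes visible in this response, so the real codebook may define more (an assumption), and `validate_codes` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from this one response
# (assumption: the actual codebook may include additional codes).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "ban", "industry_self", "liability"},
    "emotion": {"indifference", "fear", "approval", "mixed", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any row with an unknown code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Minimal usage example with one row shaped like the response above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
rows = validate_codes(raw)
print(rows[0]["emotion"])  # fear
```

A schema check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise pass silently into the coded results.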