Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Only half joking here, but one important step in AI seems to have repeatedly been skipped in it's creation: The Robot Laws. Now, I am not just talking about the actual fictional robot laws, but parameters that can avoid, prevent, or solve harmful unintended consequences stemming from creating AI in the first place. Put them in place from the start of the creation, and there would be at least some bias and danger avoided. If someone's pet project seems to be not functioning with those parameters in place, it's time to start over until a path is found where the project both works and runs the parameters of protection. We seem to always do that in reverse, trying to plug leaking holes and dangerous systems after they are already causing chaos. If we introduced stop-gap measures firstly, the AI itself could be trained to sniff out those possible bad results, route itself around them, and function efficiently. It's what they appear to want to do, but they are only as good at this as a human would be. First, we humans have to want to stop all the harming we are doing ourselves. JMO.
youtube AI Responsibility 2023-11-06T13:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyF6IfSN3VbGBs3U3Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHqeUcFeWwY_BeopZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwMvzdj84njrG-ZJWp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6j839QqQYsUOGkkV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxmiDdYRLsKvrPR9nd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyg82mTX_D-Hay4UlV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwkRT1zJf9ESSvQWqZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyvAuyoxsti5Znfu994AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxrFUMp8QWZZT2cPqd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzUCuVRmsEFXUkvsUx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
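To match a raw response back to a specific comment, the JSON array above can be parsed and indexed by comment id. The following is a minimal sketch (not the pipeline's actual code), using one record from the batch; the field names mirror the four coding dimensions shown in the result table.

```python
import json

# A single record copied from the raw LLM response above; in practice the
# full array of ten records would be parsed the same way.
raw = '''[
  {"id": "ytc_UgxmiDdYRLsKvrPR9nd4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "approval"}
]'''

# Index the batch by comment id so any displayed comment can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw)}

coded = records["ytc_UgxmiDdYRLsKvrPR9nd4AaABAg"]
print(coded["responsibility"], coded["policy"])  # → developer regulate
```

This lookup makes it easy to verify that the per-comment coding result (developer / deontological / regulate / approval) really does come from the corresponding record in the raw batch output.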