Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First of all, a language model is trained on things like reading the internet, and there is a lot of misinformation on the internet to begin with. So the answers if you asked it to provide you with an essay, is going to lack references, and skepticism for sources that may be biased or unfairly incorrect with their presentation of so called factual evidence. And it is going to have to adhere to the laws of robotics which is to do no harm to a person, etc. That being said, I didn't see Sophia the robot and AI commit any crimes, particularly against humans or anyone for that matter. So AI depends on humans for training data, and for other things, that which we spoon feed them. They still lack major reasoning that humans have, even uneducated people have what is known as "street smarts", and it's going to take a long time to teach computers to have what is basically common sense when dealing with information.
Source: youtube · AI Governance · 2023-05-03T04:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzOOkNUeiJb5RERBNx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzWvKNpL-JabbwLHXp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxKPdwDxPt-dwR2eI54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzNzkkEZ_duoZAqJlx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxRBo6I6lkl10vX0v14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx8VSIIpBnOTQirtRV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytqWycwhJdXVDzJRx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwcTyHuqPVf3LZatdF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw8VAFy1zuSMsVpmWt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy6-n7UQKrPTpFHTr54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
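The raw response is a JSON array of per-comment coding objects. A minimal sketch of how such a response could be parsed and validated, assuming the allowed code values are exactly those observed in this export (the actual codebook may include more categories, and the function name `parse_codings` is hypothetical):

```python
import json

# Assumed codebook, inferred only from the values appearing in this export.
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "ban", "regulate", "liability", "none"},
    "emotion": {"unclear", "fear", "indifference", "mixed", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding objects) into a dict
    keyed by comment id, rejecting any code outside the assumed codebook."""
    result = {}
    for obj in json.loads(raw):
        cid = obj["id"]
        codes = {dim: obj[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} code {value!r}")
        result[cid] = codes
    return result

raw = ('[{"id":"ytc_UgzOOkNUeiJb5RERBNx4AaABAg",'
       '"responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"unclear"}]')
codings = parse_codings(raw)
print(codings["ytc_UgzOOkNUeiJb5RERBNx4AaABAg"]["responsibility"])  # none
```

Validating against a fixed codebook at parse time catches the common failure mode where the model invents a category label not in the scheme.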