Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I did research in AI for 40 years, but I retired more than a decade ago. I have been impressed by the advances in AI in the last few years. LLMs are remarkably good, considering the simplicity of the concept. People have worried about the dangers of Artificial Intelligence for decades. However, I have always felt that the problems really originate from Natural Stupidity and Natural Malevolence. We have a tendency to anthropomorphise: “My car is a bit grumpy today”. Because we humans are social animals, we have tended to attribute natural events to animate causes, like a god of thunder, or a spirit of a river. With computers and AI, we have a tendency to believe they have capabilities greater than they really possess. AI can be extremely useful, for giving verbal commands to your TV or to a search engine, for writing first drafts of letters, for intelligently managing your house, for searching for better drugs and treatments. In the wrong hands, it can be very dangerous, such as autonomous search and destroy weapons, or systems to find ways to hack into a bank. But this has been the case with every technology; it can be used for good or bad purposes. For example, explosives, or even electricity. Most of us are sensible and ethical. We should be vigilant too.
Source: youtube · AI Responsibility · 2023-11-18T00:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugy9GIq7u3cF4CgnN4R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwI8HgK2aXkyCSJ3Wx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugza8zGRkNc2m3u-CDV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTJtdGgVQyqHC6Kc14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzkm-k9OVFneEaNdl94AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx33c4NdMdI4YOFCX14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwdyJx3GX9fk6pZ0xB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwo8Hn-KQ8tvJxt4Kh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwVhoZ0THH-AYd7FFV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwZ3bVGiCasQQCmXXp4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
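To map a raw batch response back to a single comment's coding, the JSON array can be filtered by comment id. The following is a minimal sketch, assuming the array-of-objects format shown above; the helper name `coding_for` is illustrative, not part of any tool shown here. Note that the coding table above (virtue / resignation) matches the entry with id `ytc_Ugzkm-k9OVFneEaNdl94AaABAg`.

```python
import json

# Two entries copied from the raw LLM response above; a full response
# would contain one object per coded comment.
raw = '''[
  {"id": "ytc_Ugzkm-k9OVFneEaNdl94AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy9GIq7u3cF4CgnN4R4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

def coding_for(raw_json: str, comment_id: str):
    """Return the coding dict (minus the id) for one comment, or None if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw, "ytc_Ugzkm-k9OVFneEaNdl94AaABAg"))
# {'responsibility': 'none', 'reasoning': 'virtue', 'policy': 'none', 'emotion': 'resignation'}
```

Returning `None` for an unknown id (rather than raising) makes it easy to flag comments the model skipped in a batch.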