Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From what I understood that the real issue is the interaction of the AI how it’s programmed to interact might affect the people’s thinking wich I find kind of similar to social media they feed you what you like so you feel like your opinion is the truth and everyone else who disagree is a minority or wrong. So the problem is way bigger than an AI becoming sentient or no it’s about the people programing it may have a control on what people thjnk and the public doesn’t know that or have a say in that . It’s kind of mind controlling, in an exaggerated scenario it can create civil wars.
YouTube · AI Moral Status · 2022-07-01T12:1…
Coding Result
Responsibility: developer
Reasoning:      consequentialist
Policy:         unclear
Emotion:        mixed
Coded at:       2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw2a2wERm4l5wtPSix4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzPiovedveg2tKPy2t4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyAALFXqiEaPL6A86F4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzFa3sgz7U0WwFYpfh4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzD3LKo729SyiZApKV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
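The raw response can be verified programmatically by parsing it and looking up the codes for a given comment id. A minimal sketch, assuming the model output is a clean JSON array with no surrounding text (real responses may need markdown fences or trailing prose stripped first); the record shown is the one coded above:

```python
import json

# Raw model output: a JSON array of per-comment codes (one record shown).
raw = """[
  {"id": "ytc_UgyAALFXqiEaPL6A86F4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "mixed"}
]"""

# Index the records by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the codes for the comment inspected above.
code = codes["ytc_UgyAALFXqiEaPL6A86F4AaABAg"]
print(code["responsibility"], code["emotion"])  # developer mixed
```

Indexing by id rather than list position makes the lookup robust if the model returns records in a different order than the input comments.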