Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I will never say that the speaker, who is an AI researcher, is going to destroy the world. A tool is neutral, cannot destroy humans. Only humans can destroy humans. As she said, AI has bias which is influenced by the algorithms. That means we can manipulate AI by making large amounts of lies, in order to fool some people. This speech is hilarious to me. The speaker found that some AI tools are unreliable and she, therefore, invented AI tools to check the liabilities and integrities of other AI tools. But, what if there is bug or limitation in the aforementioned machine invented by her? I predict that she will, therefore, invent more AI tools to counter-check the liabilities and integrities of the former AI tools developed by her, which determine the liabilities and integrities of other AI tools. It sounds like a technological Ponzi Scheme. If so, in the future, she will have ten different suggestions and opinions on her desk, which are given by ten different AI tools. What she needs to do is to exercise due diligence and make independent judgement, which she did decades ago, when there was no AI.
youtube AI Responsibility 2025-04-11T03:4… ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzMB4jDSV5f8EWKP2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwNh4QHLHBKyZJw7D54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWQyQl0B7OLkSyZzZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgywmlxWsma34kVeM8x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyOGZrl93zkHGHdDwJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw1D9JqBrVaiJiYau54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyX-76Brm87Xft2k2J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgySIonD_SuJNxE3xfl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxVYdkAl0q0R867jPt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzQH8-xvXlGFmqz9rd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
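The coded values shown above (responsibility: none, reasoning: deontological, emotion: indifference) appear to correspond to the second entry in the batch, id `ytc_UgwNh4QHLHBKyZJw7D54AaABAg`. A minimal sketch of how such a raw batch response might be parsed and matched back to a comment, assuming the JSON array format shown; the `index_codings` helper and `REQUIRED_KEYS` set are illustrative, not part of the tool itself:

```python
import json

# Truncated to the relevant entry for brevity; in practice this is the full
# JSON array returned by the model, as displayed above.
raw = '''[
  {"id":"ytc_UgwNh4QHLHBKyZJw7D54AaABAg","responsibility":"none",
   "reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# Fields every coding entry is expected to carry (assumed schema).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Parse a batch response and index codings by comment id,
    skipping any entry that is missing a required field."""
    records = json.loads(payload)
    return {r["id"]: r for r in records if REQUIRED_KEYS <= r.keys()}

codings = index_codings(raw)
entry = codings["ytc_UgwNh4QHLHBKyZJw7D54AaABAg"]
print(entry["reasoning"], entry["emotion"])  # deontological indifference
```

Indexing by `id` makes the lookup from a displayed comment to its coding a constant-time dictionary access, and the required-keys filter guards against partially formed entries in the model output.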