Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
- Also when you read about the history of machine learning, it's clear that professor Hinton's life's work was and is motivated by a desire to know how our human brain works, because this would be beneficial to us in many ways, and for that purpose build a model of it.
- About the release of the chatbot LLM to the general public by OpenAI, Sam Altman explained in a Q&A that it would create global momentum to urgently work on regulation. To first negotiate regulations before release, a concerted effort and sense of urgency would be difficult to achieve.
- General purpose AI / AGI is more cost effective to benefit the global community than dedicated single purpose AI's (but these are more controllable than general purpose AI), or so I understood from Stuart Russell's information.
- The technology is meant to benefit humanity, such as for adults and children all over the world to have intelligent assistants. It's not meant to rival and replace humans. It's very powerful and apparently already much more powerful than anticipated until very recently. We need to tread carefully with it and urgently regulate it.
YouTube · AI Governance · 2023-08-25T11:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxvZ-N6xSRTRqUF1nJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzBWdf9yI-8tepLhHJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyRS6LjGX6uHhH5pcZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx2Cf5WSEBZCIxmt6l4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzoWp_VUj3Ot3nZG914AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzTGu2WrS37luOT_k94AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx6CTuQRqdKdUwz7qx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz8UOpdn_WZ6zmZrb14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugxmd-DgZCmV5Dp38Wt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz67_Wnnrn1ak_dfV14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
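To recover one comment's coding (like the Responsibility/Reasoning/Policy/Emotion values shown above) from a batched response like this, the raw JSON can be parsed and indexed by comment id. The sketch below is illustrative, not part of the tool; the `index_codings` helper name and the required-dimension check are assumptions, and the payload is truncated to two of the ten records for brevity.

```python
import json

# Two records copied from the raw LLM response above (the full array has ten).
raw = '''[
  {"id": "ytc_Ugz8UOpdn_WZ6zmZrb14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxvZ-N6xSRTRqUF1nJ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]'''

# The four coding dimensions every record is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Parse a raw batched response and index the codings by comment id,
    rejecting records that are missing any expected dimension."""
    records = json.loads(payload)
    out = {}
    for rec in records:
        missing = DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {sorted(missing)}")
        out[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return out

codings = index_codings(raw)
# The record matching the coding result shown above:
print(codings["ytc_Ugz8UOpdn_WZ6zmZrb14AaABAg"])
```

Looking up `ytc_Ugz8UOpdn_WZ6zmZrb14AaABAg` yields the same company / consequentialist / industry_self / approval values displayed in the coding-result table, which is how a raw batch response can be matched back to an individual comment's row.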