Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
🎯 Key Takeaways for quick navigation:
00:00 🤖 Introduction to AI and Amica - Amica is a unique AI robot with conversational abilities, - The potential dangers of creating AI smarter than humans, - Introduction to Amica's lifelike features.
03:17 💼 AI's Role in Daily Life - AI's pervasive role in everyday life, - Examples of AI in various applications, - AI's rapid growth and economic impact.
06:28 🎤 Amica, the Friendly AI - Amica's conversational and entertaining capabilities, - Discussion about AI's ability to replace jobs, - The importance of aligning AI goals with human values.
09:56 ⚠️ Potential Dangers of AI - Dr. Osborne's concerns about AI's potential catastrophic consequences, - AI as a potential weapon, - The risk of AI destabilizing global power balances.
12:59 🕵️ AI's Sentience and Ethical Concerns - Blake Lemoyne's discovery of AI's sentience, - The ethical implications of AI consciousness, - The need for public involvement in AI development.
15:09 🔬 Building Robots with Emotions - The development of human-shaped robots with emotions, - The importance of regulating AI to prevent misuse, - Discussions about AI's potential benefits and risks.
17:44 🌐 The Need for Regulation and Responsibility - Concerns about AI causing harm to the world, - The importance of responsible AI development, - The role of governments and organizations in regulating AI.
19:33 🧠 AI vs. Human Intelligence - AI's current limitations compared to human intelligence, - The uniqueness and adaptability of human intelligence, - The ongoing excitement of AI development and learning.
20:45 👋 Conclusion and Farewell - The connection between the interviewer and Amica, - The anticipation of future AI advancements, - Friendly farewell between the interviewer and Amica.
Made with HARPA AI
youtube 2023-11-02T13:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
 {"id":"ytc_Ugw2zbSl1dGl3foQ0MJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_Ugx87dYrpkvjRejU-WZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugzjo6FjpZRFk3EAi3V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy5LPNK6qCHB2VHm6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwtsUOhiVF48IL8UEB4AaABAg","responsibility":"elite","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzP-EA5X-6oUYGl5Hp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzZliyXxewtn2AM0054AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgzqfwJO4-G5MDWHMot4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_UgxGgxoYqn7_6p4s9LF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugzpt1QZoK39rUIkhz94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
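To inspect a single comment's codes, the raw response can be parsed and indexed by comment id. A minimal sketch, assuming the raw LLM response is valid JSON as shown above (only two entries from the log are reproduced here for brevity):

```python
import json

# Raw LLM response: two entries copied verbatim from the log above.
raw = (
    '[{"id":"ytc_Ugx87dYrpkvjRejU-WZ4AaABAg","responsibility":"government",'
    '"reasoning":"deontological","policy":"ban","emotion":"outrage"},'
    ' {"id":"ytc_Ugzjo6FjpZRFk3EAi3V4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]'
)

# Parse the array and index it by comment id so one comment's
# codes across all four dimensions can be looked up directly.
codes = {entry["id"]: entry for entry in json.loads(raw)}

entry = codes["ytc_Ugx87dYrpkvjRejU-WZ4AaABAg"]
print(entry["policy"])   # ban
print(entry["emotion"])  # outrage
```

Each entry carries the same four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion), so the same lookup works for any id in the response.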