Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that's a very blunt statement, not true and it doesn't do justice to his integrity and sincere life's work. He didn't focus on achieving AGI - Artifical General Intelligence / human level intelligence in its entire range - while building this technology. The AI is meant to function as not fully autonomous intelligent personal assistants to people. To make life better in many ways, as a force for good, which depends on different including political factors when it's implemented. AI not as one powerful entity, but different AI's specificially designed for dedicated purposes. Such as AI as a robot surgeon, driverless cars, educational assistants, etc. Also he and other including previous scientists did consider a possible threat of superintelligence in a very distant future with developing AI, but he recently realised that achieving AGI and superintelligence will be reached much sooner than previously estimated.
youtube AI Governance 2023-09-12T11:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgzbKulsqmqnVWwMSHR4AaABAg.9v5ipgp2iTx9v5uu39BXpD","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzK9k4hAU0kT3OuYqB4AaABAg.9umxiug8sjK9uqLTsLCF0c","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugx2Cf5WSEBZCIxmt6l4AaABAg.9ujjTfg_rXL9ulPiMVqwpS","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzoWp_VUj3Ot3nZG914AaABAg.9uXWw5l5Qgy9uZIs03wwOA","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzoWp_VUj3Ot3nZG914AaABAg.9uXWw5l5Qgy9uZKEXyZbfs","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzoWp_VUj3Ot3nZG914AaABAg.9uXWw5l5Qgy9uZLXd1Pwsc","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugx5l1HO_7jVU66445B4AaABAg.9u5fkrn_dz09u9WQcWl1y6","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxmd-DgZCmV5Dp38Wt4AaABAg.9tp_9J4UzHv9tp_hBia1m2","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugx9cKosGmQfk4IwBHp4AaABAg.9tnUaNc7BNQ9tr5pdO-sUY","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugx9cKosGmQfk4IwBHp4AaABAg.9tnUaNc7BNQ9tr7LcA1XfC","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
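When inspecting raw model output like the array above, it helps to parse it and sanity-check each record against the coding dimensions. Below is a minimal Python sketch of such a check. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response itself, but the allowed-value sets listed here are only the values observed in this one response, not the full code book, so treat them as an assumption.

```python
import json

# Values observed in the raw response above; the actual code book
# is assumed to be larger (these sets are illustrative, not exhaustive).
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "company"},
    "reasoning": {"none", "virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"none", "approval", "fear", "mixed", "outrage"},
}


def parse_raw_response(raw: str) -> list:
    """Parse a raw LLM coding response and flag unexpected dimension values."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records


# Usage with a single hypothetical record in the same shape as the raw output:
raw = '[{"id":"ytr_x","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}]'
records = parse_raw_response(raw)
print(records[0]["reasoning"])  # virtue
```

A check like this makes it easy to spot the usual failure modes of LLM-coded data: records the model dropped, IDs it hallucinated, or labels outside the code book.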