Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
17:48 If Ilya Sutskever, the co-founder and Chief Scientist of OpenAI, and Yann LeCun say that OpenAI doesn't care about safety, that's more than "pretty concerning". Ilya left OpenAI to lead his own Safe Superintelligence, and Yann LeCun left Meta (on 31 Dec 2025) to lead his own AMI Labs, far away from Silicon Valley in San Francisco, where they're all obsessed with LLMs, which LeCun considers a dead end on the path to AGI. Demis Hassabis is also far away from Silicon Valley and also thinks LLMs are not the way to AGI (ref.: AlphaGo's legendary Move 37 in 2016, AlphaZero, AlphaFold and its Nobel Prize in chemistry, AlphaEarth, and on and on). LLMs might really not be the way to go, and scaling them doesn't make sense, because it's quality that counts, not quantity. Besides, it's not a new thing at all: it's all based on the work of Tomáš Mikolov from 2007, which he open-sourced back in the day, and now big tech is making huge money out of statistics on steroids while unaware users go head over heels, not only due to the "AI psychosis" that LLMs like GPT cause.
youtube AI Governance 2026-01-08T13:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugy3ukK8OORya7kB9XB4AaABAg.ASMTuWfwJWmASRVOF_N0UH","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwhW_uR9nGR2VLkWYh4AaABAg.ARwdSpHG1j3AU8i2zSHx8u","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugx8d3GMUr4A8KMCLoV4AaABAg.ARnRXm1sq_9AS5F-I6r6ZQ","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugwk8QPLX6US6-kI4Fl4AaABAg.ARjOGZjZBcrASxu-GvPHRj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxETHT0nuGAvImuQoF4AaABAg.ARiUC4KvGcvARiVPyJucgT","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgxETHT0nuGAvImuQoF4AaABAg.ARiUC4KvGcvARiYvjEc3pH","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwpN4e4KDV_ODiwAnh4AaABAg.ARSG0sMZnp2ARa9BcmACHh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyVS22ov8iIXXs-yEN4AaABAg.ARPXZ4g6SvHARaArjxIIeN","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwSU8aZ90E6417NPbp4AaABAg.ARK1FJ_5Cx2AT5EU4-1wZo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxketCYWE5n61AJ-gt4AaABAg.ARK0EVsNbKsARK1JB7EkRH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
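To check that a displayed coding result actually matches the raw model output, the JSON array above can be parsed and indexed by comment id. A minimal sketch, assuming the raw response is valid JSON; the two records are copied verbatim from the response above, and the variable names (`raw_response`, `by_id`) are illustrative, not part of any tool:

```python
import json

# Two records copied from the raw LLM response above.
raw_response = """[
  {"id":"ytr_UgxETHT0nuGAvImuQoF4AaABAg.ARiUC4KvGcvARiYvjEc3pH","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwpN4e4KDV_ODiwAnh4AaABAg.ARSG0sMZnp2ARa9BcmACHh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Parse the array and build an id -> record lookup table.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the coding for the comment shown above and confirm it matches
# the rendered Coding Result table (company / regulate / fear).
coding = by_id["ytr_UgxETHT0nuGAvImuQoF4AaABAg.ARiUC4KvGcvARiYvjEc3pH"]
print(coding["responsibility"], coding["policy"], coding["emotion"])  # company regulate fear
```

This kind of lookup makes discrepancies visible immediately: if the rendered table and the raw record disagree on any dimension, the bug is in the display layer rather than in the model output.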