Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yuval Noah Harari is a sophisticated bestseller whose viewpoints are controversial and debatable. Abstract reasoning based on weak definitions and inaccurate analogies leads to a bunch of fear mongering, so take many of Harari’s arguments with a grain of salt. Half of his interview is just speculative or creative reflection. Artificial Intelligence is not alien. It’s an invention of humankind. Happiness without power is an illusion; power is an external good that you should aim to obtain a sufficient amount of, otherwise you’ll never flourish. Harari simply discounts the idea that enough power immediately makes you happy, which is obvious—treaties have been written on this subject for millennia. AI technologies have the strong potential to empower individuals, helping those who lack enough of the external good gain more of it. Competition to create the most powerful AI for everyone’s development sounds like a benevolent activity. Religious AI should definitely replace human leaders who enjoy projecting their own egos through the interpretation of holy texts. AI agents that improve financial, economic, political, and legal systems should be welcomed as innovations, not analogized to digital immigrants. Even the most advanced AI systems and virtual assistants will probably remain instruments, not independent beings. Half of the scholarly world is high in the clouds, so imaginative. Harari is a great reminder for why those who actually use and develop AI should remain grounded in reality rather than sophistry.
Source: youtube · Viral AI Reaction · 2025-06-24T04:1… · ♥ 1
Coding Result
Dimension      Value
Responsibility developer
Reasoning      deontological
Policy         none
Emotion        outrage
Coded at       2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwc4HoJMSODY1N06FJ4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugy6IQpxs8JFFv2e-054AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzPwibterjEEHW56OF4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugx0QCZI8secR93NP454AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwRjW7dYSwkX9FR_lt4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugzqde8rPI0Ukc5bBDp4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugzba2UITd_YC7pUawrR4AaABAg", "responsibility": "developer", "reasoning": "deontological",   "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgzjaKMod5RjHOeOxTF4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgygFKMkqOzoWbFNh1l4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugz5B6paNpZDkUctdDt4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "regulate", "emotion": "outrage"}
]
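A raw response like the one above can be parsed and sanity-checked before its records are written back to the coding table. The sketch below is a minimal, hypothetical validator: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the output shown, but the idea of filtering to well-formed records is an assumption about the pipeline, not its actual implementation.

```python
import json

# Required fields per coded record, taken from the raw response above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    Hypothetical helper: drops any entry that is not a dict or is
    missing one of the expected coding dimensions.
    """
    records = json.loads(raw)
    return [
        rec for rec in records
        if isinstance(rec, dict) and REQUIRED_FIELDS <= rec.keys()
    ]

# One record from the raw response above, as a usage example.
RAW = """[
  {"id": "ytc_UgwRjW7dYSwkX9FR_lt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

codings = parse_codings(RAW)
print(codings[0]["emotion"])  # outrage
```

A stricter version might also check each value against a closed vocabulary (e.g. the emotions seen above: resignation, fear, approval, outrage, indifference), but that list would have to come from the actual codebook.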