Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Google's spurious claim of producing "sentient" Artificial Intelligence both highlights and obfuscates the existential crisis of Google itself. What meaningful content, after all is said and marketed, has Google ever produced from its own original sources? It is not merely the case that Google's essential function is to trawl the internet and regurgitate links correlated by numerical proliferation to algorithmic search criteria which are devoid of meaningful associations. It is the fact that Google's core search strategy has from the start been motivated by intellectual nihilism - the deliberate philosophical rejection of hierarchical analysis in favor of brute-force flattening of all meaningful relationships into an undifferentiated repository of randomly correlated connections. Google not only favors algorithmic search, it actively lobbies against all forms of hierarchical organization (except perhaps those enshrined within its own corporate organization). Have you ever noticed the stark absence of user-managed folders in Google's ubiquitous Gmail web client, or the lack of categorical organization of YouTube content? This is no oversight, it is emblematic of Google's philosophical rejection of the concept of hierarchical organization of information. The underlying reason for Google's systematic boycott is that such hierarchical structures are designed by human beings to express MEANINGFUL relationships between collections of individual things. Understanding these relationships requires more insight than merely cataloguing the numerical links between anonymous references to arbitrary objects, and that is exactly the type of in-depth research Google is determined to avoid. The result of this intellectually nihilistic approach is the development of an automated language processing system that lacks any concept whatsoever of what the syntactical objects (e.g. "words", "phrases", "sentences") it indexes actually mean to the human beings who use them to communicate thoughts and feelings to each other. Consequently, the most sophisticated results such a system can ever produce will never amount to anything more than a SIMULATION of sentience, a synthetic ILLUSION of meaningful communication. But that is neither a problem nor a shortfall from Google's perspective, as it is nothing more or less than the service they have been selling us all along.
Source: youtube, "AI Moral Status", 2022-07-01T19:3…
Coding Result
Dimension       Value
--------------  --------------
Responsibility  company
Reasoning       deontological
Policy          unclear
Emotion         indifference

Coded at: 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzEiE9KaHRVYgDY49B4AaABAg", "responsibility": "elite", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgybIRnq1Qisfe8po-p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwdZ5t9RmUIQKKhHvh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxX7UyVNV575uEk7mJ4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxZDNw3lbBx_bNynct4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
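The raw response is a JSON array with one object of coding dimensions per comment id. A minimal sketch of how such a response could be parsed to recover the codes for a single comment (the `codes_for` helper is hypothetical, assuming only the array shape shown above):

```python
import json

# A raw LLM response in the array-of-codes shape shown above
# (abridged here to the entry for the displayed comment).
raw = '''[
  {"id": "ytc_UgybIRnq1Qisfe8po-p4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]'''

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id (hypothetical helper)."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            # Drop the id so only the coding dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

codes = codes_for(raw, "ytc_UgybIRnq1Qisfe8po-p4AaABAg")
print(codes["responsibility"])  # company
```

Matching the returned dictionary against the table above is one way to verify that a stored coding result corresponds to the model's raw output.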