Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What is particularly disturbing about this interview with Sam Altman -- who could possibly be threatened by the zen-streaming pronouncements of a guy named Sam? -- is his use of the term "model." Using this inoculating term of abstraction and commerce as a tautology for an alien, non-human intelligence is rich. What is A.I. a model of exactly? We say this year's "model" car. We say she is a fashion "model." We say X is a "model" for Y. In so doing, we cordon off whatever the model is from physical reality, engaging in a form of nominalism: the "model" is, after all, just a label that has no basis in the physical world, not really. The "model" is a tool to get somewhere else, it is not AN ENTITY [caps for italics]. Why would we be concerned about "a model?" The "model" only derives its reality from some relational context subject to likely and continuous change. While this argument may seem too philosophical for some, Altman's contention that hundreds of millions of his customers will act as a "safety net" on his products, a kind of democratic quality control mechanism subsequent to the public release of each new and increasingly powerful "model," is a more obvious fallacy -- one that cannot be expressed in good faith, unless Altman's performative demeanor of naivete is actual naivete. Altman has to know that, as each "model" is released to the public so we may all inform him about the particulars of our joy or horror over its features, it is already too late.
Source: youtube · 2025-05-27T14:3… · 2 likes
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugx-NEbnuxnYpde93EF4AaABAg.AJ4ALU5sJFuAJPRoW4aGYP", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzytU6FRoKRL6_aC1d4AaABAg.AItCQFnOmEBAJPSrYXXKiq", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugzg-8QVQkDV9OvGNDR4AaABAg.AIiJs0iRzNfAIrJq_3xSIm", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzoNOaDrAWgxfgctcR4AaABAg.AIbnZYX8zL_AIcm2w1AR1h", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgwLhN99S14cgh93ynp4AaABAg.AIBeJ71mOrEAI_k3e05l6Z", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytr_UgzxFR5FqNQxXPO_V9R4AaABAg.AHvAISAHSFCAIMSrzl3aEX", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzmmMmKYZ_faip9Qxl4AaABAg.ALbxbfOH64uALtPeySX3ny", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugwl1IAn9xg_kSgFgVl4AaABAg.AK4p0QdnZboAK5pzBY2m87", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_UgyeZtZEuxRf0MG1V5F4AaABAg.AK3hCL06dvaAK3hQKPaB21", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxkCixAcWiY6HyFSuV4AaABAg.AK1oz5k3DLMAK2D-HTDOHL", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
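A raw response like the one above can be checked against the coding schema before use. The sketch below is a minimal example in Python, assuming the allowed category values are exactly those that appear on this page (the full codebook may define more); the function name `parse_raw_response` is hypothetical, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the responses on this
# page -- an assumption, since the full codebook is not shown.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "mixed", "resignation", "unclear"},
}

def parse_raw_response(raw: str) -> list:
    """Parse a raw LLM response string and keep only entries whose
    coded values all fall inside the allowed schema."""
    entries = json.loads(raw)  # raises ValueError on malformed JSON
    return [
        entry for entry in entries
        if all(entry.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Example with a placeholder id (real ids look like "ytr_...").
raw = ('[{"id": "ytr_example", "responsibility": "company", '
       '"reasoning": "consequentialist", "policy": "regulate", '
       '"emotion": "fear"}]')
print(parse_raw_response(raw))
```

Validating this way also surfaces truncation bugs such as the stray `)` in place of a closing `]`, since `json.loads` rejects the malformed string outright.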