Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Data on its own is not AI. Data is foundational, but AI becomes AI by interpreting and formulating relationships among all of that data, in ways that we don't understand and may not be able to understand. We might say that this is Eliezer's first principle.

Where Dr. Pope goes on to speak about labels, again there's no evidence that things like human-readable labels should emerge from billions of complex matrices, let alone from their connections or interoperations. Maybe he means checking the activations of particular, individual neurons under supervision in mechanistic interpretability exercises? It's possible that matrices encode whether a given piece of data is also good poetry... but labelling every matrix, let alone those that are called on in aggregate at inference, with labels for all possible purposes does not seem feasible.

It's good to hear that the bar to pass is not 100% eradication of life. That grants whatever remains some leeway.

I think Dr. Pope's concept of an ASI is what I (and largely Google) would call a hyper-computer. That would require taking AI down a very strict and narrow path of development, though. Some organisations will of course do this. I'd point to Google's Alpha- models as examples of hyper-computers. Google's concept, and its patented version, is specifically cloud-based. The point is that they focus almost exclusively on hard computation, but there's no guarantee that this path won't require general intelligence, nor that anyone can make a specifically narrow intelligence going forward (which, imho, sinks many other arguments proposing that narrow AIs will be safe).

I would push back on the premise that coding will be economically important in 5 years. That ship is unequivocally preparing to set sail.

On Dr. Pope's objection to the space analogy, where he notes that civilisation granted humans our greater cognition: that itself is just another example of the phase change.
youtube 2026-03-25T03:2… ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgykPoMaG1lBoaOzy1Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-mHG0u2Box6nvbAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQshzFC1f8cLYNMTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw1jVqu1wjx_0KuEAZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyYbhXniNeK_3d-AQ14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy3QSfQZ1tN5L-AxMV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw2jUSkLvKvatskVGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyw9bUjZmtlv6hgayl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzGnopmwz3uLfbeTgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzCwIC6hdw8Y-bOB2d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
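The raw model output above is a JSON array of per-comment records, each carrying the four coded dimensions keyed by comment id. A minimal sketch of turning such a response into lookup-friendly coded rows (the `raw` string here is just a two-record excerpt of the response above; nothing else is assumed about the pipeline):

```python
import json
from collections import Counter

# Excerpt of the raw LLM response shown above (two of the ten records).
raw = '''[
  {"id":"ytc_UgykPoMaG1lBoaOzy1Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy3QSfQZ1tN5L-AxMV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

records = json.loads(raw)                       # parse the model output
by_id = {r["id"]: r for r in records}           # index coded rows by comment id
emotions = Counter(r["emotion"] for r in records)  # distribution across the batch

# Look up the coding for one comment, e.g. its policy dimension.
print(by_id["ytc_Ugy3QSfQZ1tN5L-AxMV4AaABAg"]["policy"])  # → regulate
print(emotions["mixed"])                                  # → 1
```

Indexing by id makes it straightforward to join each coded row back to its source comment, and a `Counter` over any dimension gives a quick sanity check on the batch before the rows are written to the results table.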