Okay, I promised a quick introduction to the history of the terms 'metaphysics' and 'ontology', so I'll try to provide it in as concise a way as possible. However, this will involve going all the way back to the Presocratics, so you've been warned in advance.
Let's start with Being, which means actually starting before Being, oddly enough. Beginning with Thales, the Ionian physiologoi searched for an arche, or fundamental principle, that would let them understand the dynamics of nature: what is conserved across change? Water, air, etc.
There are a bunch of abstract distinctions that emerge at this point, and get related in a variety of ways: persistence/change, unity/multiplicity, reality/appearance, etc. These are interesting in the Ionians, but it's Heraclitus and Parmenides that really synthesise them.
So, Heraclitus gives us the Logos, which is a principle that is not conceived of as a substrate. This separation is required because his preferred element, fire, is pure flux, and yet he wants to say that this flux is itself constant, i.e., that it can never settle.
Parmenides, on the other hand, gives us Being, which is a substrate that is not conceived of as a principle, i.e., as anything that could *explain* persistence across change. It is nothing but persistent, unitary, reality. Truth is one, and everything many is mere appearance.
There's a nice story to be told about how Plato unites Heraclitus and Parmenides by integrating them into his vision of the intelligible and sensible worlds (articulating a distinction between Being and Becoming). I'm not going to dwell on this too much though.
The key point is that Being as an abstract concept, and its relation to thought/thinking (Logos), was initially a way of trying to capture what was at stake in the project of the Ionians, i.e., to achieve a certain sort of methodological self-consciousness.
There is not yet a distinction between physics and metaphysics, and this distinction will not even be named until after Plato and Aristotle have begun to formalise distinctions between different fields, but the process of articulating such self-consciousness has already begun.
And in case it isn't obvious, every one of these abstract concepts has been borrowed from ordinary language and gradually beaten into shape over the course of these early intellectual debates. There are plenty of conflicts and inconsistencies only recognisable in retrospect.
This is most obvious in the case of 'substance' or 'ousia', whose polyvalent potential keeps it mutating well past the time of Plato and Aristotle, into the conceptual contestations that define early modern rationalism and beyond. This creates competing retrospective narratives.
I'll return to these narratives regarding substance later, as they're a key feature of the meaning given to 'metaphysics' in the post-Heideggerian tradition. For now I want to talk about Plato and Aristotle, and the struggle for methodological self-consciousness.
Firstly, they provide a contrast in how the concept of substance can be configured. For Plato, true substance is universal (the unchanging Ideas), while for Aristotle it's individual (that in which change subsists). What is most real? Eternal existence or active persistence?
Secondly, they provide a useful contrast in the way in which thought (Logos) gets internally articulated, and on that basis the way in which other disciplines gain their independence from philosophy by leveraging these distinctions. Again, self-consciousness emerges gradually.
Plato's Academy gives birth to the tripartite division between Logic, Physics, and Ethics, while Aristotle's Lyceum begets a twofold division between Theoretical (physics, mathematics, theology) and Practical sciences (ethics, economics, politics). These compete for a long time.
NB: there's another side to the story of Presocratic philosophy, leading through the Sophists to Socrates, in which foundational practical concerns emerge and are worked out. This strand concerns itself with Nomos, rather than Physis, but it overlaps in its concern with Logos.
Okay, that's the scene set. We now need to return to Aristotle, who was by far the most systematic thinker that the tradition had so far produced, insofar as he tried to unite and articulate every 'philosophical' concern that preceded him. He's the father of metaphysics as such.
Let's dispense with some apocrypha. The works collected in Aristotle's 'Metaphysics' were not given this title by him, but they weren't named this by accident. The term emerged as a way to articulate the methodological character of these reflections in relation to the Physics.
However, the sheer level of abstraction involved in grappling with these methodological questions produces a tortuous, twisty, and terminologically dicey set of reflections on Aristotle's part, whose unity no one can really explain, or even really consider, until much later.
Aristotle's own term for this task is 'first philosophy' (prote philosophia). Under this heading, he makes very explicit that he is trying to clarify the nature of the task that the Ionians and their successors had set themselves, a task which exceeds the bounds of his Physics.
Nevertheless, he introduces two ways of describing the subject matter of first philosophy that are not obviously identical: the study of being qua being (what will become 'ontology'), and the study of the divine first cause (what he calls 'theology'). The confusion starts here.
Everyone is confused, but they're generally more awed by Aristotle's sheer capacity for methodological abstraction. It really isn't until Platonic and Aristotelian philosophy become the basis for systematic theology in the Abrahamic tradition that these issues get addressed.
The notion of Being gets discussed plenty in the meantime, entering into the Christian tradition through the influence that the NeoPlatonists exert on debates in the early church, culminating in the work of Augustine. There is plenty of methodological self-consciousness here too.
In particular, in order to codify the arguments Augustine makes regarding the hierarchy of Being and the equivalence between Being, Goodness, Truth (and Beauty qua harmony), the idea of universal concepts (or 'transcendentals') is introduced (i.e., unum, bonum, verum, etc.).
Aristotle's influence on the Christian theological tradition comes via the Islamic theological tradition, and it is these Islamic thinkers who first raise methodological questions about the unity of Aristotle's notion of first philosophy.
Beginning with Al Farabi, and proceeding through Ibn Sina (Avicenna) and Ibn Rušd (Averroes), the question of what the study of 'being qua being' is, and how it relates to theology, becomes explicitly thematised in a way that will define everything that follows in 'the West'.
These thinkers are very subtle, and produce a number of other contributions not simply to Aristotle exegesis, but to philosophy more generally. If nothing else, Ibn Sina's distinction between essence and existence untangles the knotted aspects of Aristotelian primary substance.
This is in stark contrast to the deference to Aristotle that solidifies in Christian theology after Aquinas supplants Augustine as its most influential thinker, presenting his own work as mostly a correct and coherent reading of Aristotle's ideas. He becomes 'The Philosopher'.
This has a very peculiar effect on the development of methodological self-consciousness within Christian theology, which is ultimately responsible for the weird terminological history of 'ontology' and 'metaphysics' in the philosophical tradition which follows it.
Firstly, the newly dominant Thomism inherits the framework of transcendentals used for discussing Being (qua most universal concept) from its Augustinian predecessors. This leads to complex debates about whether 'Being' is equivocal and the resulting innovations of Duns Scotus.
Secondly, Suarez writes the most extensive and significant commentary on Aristotle's metaphysics since Aquinas, continuing the tradition of framing complex methodological innovations as simple Aristotle exegesis. This philosophical humility disguises his sheer significance.
Suarez carefully separates the two sides of first philosophy conflated by Aristotle, and in so doing builds the foundation on which the subsequent tradition will construct its own methodological reflections. However, out of humility, he refuses to name these two sides.
It's the Reformation (an Augustinian counter-revolution), that enables Calvinist theologians suddenly obsessed with naming everything to establish the terminology that will become historically decisive: Lorhard and Göckel coin the term 'ontology' for the study of being qua being.
This isn't exactly the terminology that wins out in the medium term. It is subsumed in the much more popular distinction between metaphysica generalis (ontology) and metaphysica specialis (theology, psychology, cosmology). Aristotle is set aright.
This distinction works its way through Wolffian circles to Baumgarten and thereby to that hero of methodological self-consciousness, Immanuel Kant, who uses it to articulate the difference between his Transcendental Analytic (generalis) and Transcendental Dialectic (specialis).
This is the substance of Heidegger's claim, contra the NeoKantians, that Kant was concerned with ontology rather than epistemology. For it was ontology alone amongst the traditional metaphysical subdisciplines to which he permitted (synthetic a priori) knowledge.
I can tell you a much more complicated story about Kant's influence and how it leads to the dynamic of rejection and return to metaphysics that takes place independently in both Analytic and Continental traditions. But, that story is told in my book (urbanomic.com/book/object-or…)
From this point on, I'll truncate my story quite heavily. The one thing I can't skip is Heidegger's critique of Aristotle and his account of the 'forgetting of Being' that follows from it. This helps explain the terminological variance within and between the two traditions.
Given what I've said so far, you might find it strange for Heidegger to claim that Being had been forgotten, given all the talk about it that demonstrably took place in the years between Aristotle's death and Heidegger's publication of Being and Time. But Heidegger has a point.
His point is that there is a fundamental obstacle to methodological self-consciousness that begins with Aristotle's conflation of ontology and theology. This is what he calls 'onto-theology', and it is demonstrably a real issue.
His point is not that ontology is contaminated by theological assumptions, but rather that the question concerning the relation between the proper subject matter of metaphysica generalis and metaphysica specialis is given an ad hoc answer that the tradition fails to challenge.
This is to say that the two sides of the question of Being - What are beings *as such*? (essence) & What are beings *as a whole*? (existence) - have been tied together by theology, such that neither can be understood without appeal to a special type of being, namely, the divine.
The underlying sin of onto-theology is not that we can only understand being qua being in terms of God's role as creator, but that we can only understand the Being of all things in terms of a *specific* being. This circularity violates what he calls 'the ontological difference'.
A related criticism that Heidegger makes is that the *methodological* sense given to the term 'metaphysics' became *substantive* in the Christian tradition: it became the study of that which is not physical (the immaterial), and thereby simply synonymous with theology.
Whether or not Heidegger is right to claim that every philosopher prior to himself succumbed to the temptation of onto-theology is debatable. If nothing else, Kant was a paragon of self-consciousness in this regard, trying to properly delimit the scope of metaphysics.
Let's take stock. There's two sides to traditional 'metaphysics': generalis (beings qua beings) and specialis (beings as a whole). The first of these is also named 'ontology', and so ontology is a subdiscipline of metaphysics.
Why isn't it just this simple? Why can't we just say 'great, ontology is a part of metaphysics!' and stop there? Because the terminology gets twisted into different shapes by the vicissitudes of the rift between Analytic and Continental philosophy in the 20th century.
It all comes down to the constraints under which Heidegger wrote Being and Time, and the way its reception shifted philosophical terminology. It's important to ask, why did Heidegger abandon the term 'metaphysics' and choose to use 'ontology' to articulate his concerns?
The truth is that he didn't. This is the best kept secret in the post-Heideggerian tradition. If you read his work from the period in and around Being and Time, he *explicitly* describes himself as doing metaphysics, if not onto-theological metaphysics.
Cf. 'What is Metaphysics?', Fundamental Concepts of Metaphysics, Introduction to Metaphysics, etc.
There's two things that complicate this. First, he eventually came to the conclusion that all metaphysics is onto-theological, and on that basis that it is strictly impossible to correct the central failures of the metaphysical tradition. But this is around a decade after B&T.
Second, he deliberately did not use the term metaphysics to describe the project of Being and Time. Why? Because he chose to emphasise the continuity between his project (which he termed 'fundamental ontology') and Husserl's (which included 'formal ontology').
There's a lot that can be said about the similarities and differences here, especially with regard to their relation to 'regional ontology', which inherits the role that metaphysica specialis played in relation to the (formal/fundamental) metaphysica generalis.
Both Heidegger and Husserl see their task as in some sense methodologically securing the constraints under which one can describe the regional ontologies of specific disciplines, such as mathematics, physics, biology, and history. They simply disagree about what this means.
It's worth contrasting this with the use of 'ontology' that dominates the Analytic tradition following Quine, which takes over the role of metaphysica specialis in the sense of describing *which* regions of beings there are. This is eventually supplemented by 'meta-ontology'.
This is almost completely disconnected from the traditional use of the term and is responsible for a huge amount of confusion, especially considering that Quine also insisted that 'ontology' was the only feasible bit of 'metaphysics', which should otherwise be dispensed with.
Add in the fact that in computer science and related disciplines 'ontology' is generally used in a manner more similar to Husserl's usage (formal/regional), which avoids any reference to 'metaphysics' (much as Husserl did), and you start to grasp the terminological clusterfuck.
The final rotten cherry which tops this mess is the pejorative use that 'metaphysics' acquired in the Continental tradition following Heidegger's critique of the tradition and his (temporary) adoption of Husserl's terminology. This pejorative usage is *exasperatingly* diverse.
One can attend talks where 'metaphysics' is denounced and all present nod their heads in agreement, without ever really establishing *which* features of classical metaphysics are problematic, and even whether they're the ones Heidegger attacked.
This brings us back to the concept of substance we discussed right at the beginning of the thread. The final aspect of Heidegger's critique of the tradition is that the deeper reason for onto-theology is the implicit conception of Being as substance found in Plato and Aristotle.
But what does he mean by substance here? Which elements of that polyvalent concept are emphasised by his critique? For Heidegger, it's the interpretation of substance as constant presence (eternity/persistence), that colours the whole understanding of Being from Plato to himself.
This is a distinctly temporal interpretation of the problem, and it makes a lot of sense if you read Augustine, and understand that the unpublished second part of B&T was supposed to contain a 'destruction' of the assumptions about time which he and other such figures expressed.
However, if one goes to Derrida, one finds his critique of the 'metaphysics of presence' (a truly ghastly phrase) articulated in terms of intentional presence, rather than temporal experience. His criticism is framed more as a response to Husserl than to the theological tradition.
One can trawl through contemporary Continental philosophy and find a plethora of purported original sins committed in the name of 'metaphysics'. Badiou objects to substance as unity. Meillassoux to substance as ground. Others articulate new variations and misunderstand old ones.
And what's the upshot? Students across the whole length and breadth of the disciplinary spectrum who have to learn how to differentiate 'epistemology' from 'ontology' when there is literally no consistent usage in the halls of academia, but many overlapping and inconsistent ones.
This is made even worse by the fact that what passes for 'ontology' in many places influenced by phenomenology looks less like a subdiscipline of metaphysics than it does a subdiscipline of epistemology concerned with organising the referents of different disciplines.
This is precisely what we see in the way it is used in computer science, in which it is a feature of the field known as 'knowledge representation'. This is still fairly Husserlian, but not even remotely Heideggerian. It is in fact fairly close to the project of the NeoKantians.
And one can count Carnap as a NeoKantian/Husserlian for the purposes of this exercise.
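To make that computer-science usage concrete, here's a minimal sketch in Python of what an 'ontology' amounts to in knowledge representation: a taxonomy of kinds plus declared relations between them. Every name in it (Kind, is_a, the toy biology examples) is invented for illustration rather than taken from any actual standard.

```python
# A toy 'ontology' in the knowledge-representation sense: a catalogue of the
# kinds of things a domain talks about, arranged in a taxonomy, plus declared
# relations between those kinds. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class Kind:
    """A kind/class in the taxonomy, with zero or more parent kinds."""
    name: str
    parents: list["Kind"] = field(default_factory=list)

    def is_a(self, other: "Kind") -> bool:
        """True if this kind is subsumed by `other` in the taxonomy."""
        return self is other or any(p.is_a(other) for p in self.parents)


# A miniature 'regional ontology' for a cartoon version of biology.
entity = Kind("Entity")
organism = Kind("Organism", [entity])
cell = Kind("Cell", [entity])
neuron = Kind("Neuron", [cell])

# Relations between kinds are declared as (subject, relation, object) triples.
relations = [
    (cell, "part_of", organism),
    (neuron, "part_of", organism),
]

print(neuron.is_a(entity))  # True: taxonomic subsumption, not a claim about Being
```

Standards like OWL do this declaratively rather than in a general-purpose language, but the point stands: it's a schema for organising a domain's referents, much closer to the Husserlian/NeoKantian project than to metaphysica generalis.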
And finally, all of this is made even worse by the return of 'metaphysics' as a respectable project that occurred in the Analytic tradition post-Kripke and in the Continental tradition around the time of Deleuze.
This makes 'ontology' a sort of lowest common denominator that can be used in each tradition to describe certain shared concerns regardless of whether one is more generally pro or anti metaphysics. But *even then* it still means something different between traditions.
In summary, it's a complete bloody mess. I personally try to use the terms in a way that lines up with the metaphysica generalis/specialis distinction, but I really don't blame anyone who doesn't. I also think that the computer science usage is admirably clear and consistent.
And that's what I learned by accidentally doing a PhD on Heidegger. Hope you got something from it.
I certainly lied to you (and myself) about it being quick.
To synthesise some of what I've been saying about critique with @Aelkus's comments about expertise, and some of my earlier griping about Anglophone Continental philosophy, the problem is that 'critique' can be a way of perpetually suspending a debate one doesn't want to have.
This is an important point for the epistemology of ignorance, wherein we recognise that ignorance is not the absence of knowledge, but a positive inability/unwillingness to learn things one does not wish to learn, sustained by unconscious biases and conscious techniques alike.
Far too much 'critique' consists in using techniques that generate discursive equipollence (an equity between P and not P) for the purpose of forwarding the argument by other means, with no intention of forwarding anything. Equipollence is no longer a means, but an end in itself.
Since I've just done a deep dive into CS on my timeline, it might help if I frame a question that I think you need to appreciate all the relevant distinctions I just made in order to properly understand: what type of computational process is a mind?
There are many complaints made about classical computational theory of mind, but few of them come from the side of computer science. However, in my view, the biggest problem with first and second generation computationalists is a too narrow view of what computation is.
Consider this old chestnut: "Godel shows us that the mind cannot be a computer, because we can intuit mathematical truths that cannot be deduced from a given set of axioms!"
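For reference, a standard textbook statement of the result this chestnut leans on: if $T$ is a consistent, recursively axiomatisable theory extending Peano arithmetic, then its Gödel sentence $G_T$ is true but unprovable,

$$\mathbb{N} \models G_T \quad\text{and}\quad T \nvdash G_T, \qquad\text{with}\qquad T \vdash G_T \leftrightarrow \mathrm{Con}(T),$$

so 'intuiting' the truth of $G_T$ comes down to intuiting the consistency of $T$, which is where the chestnut does its real work.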
@meier_kreis @eripsa @texturaaberta I can’t say I’ve read both of these through, but they’re good reference texts with exercises and such if that’s your thing. The first has an intro to set theory and metalogic toward the end; the second builds up from recursive functions and Turing machines to Godel’s proofs.
@meier_kreis @eripsa @texturaaberta To be quite honest, most of my maths knowledge comes from spending too much time on nLab, which means I’ve got a much better grip on high-level relations between fields and concepts than on practical techniques for proving things. Still, this can be philosophically useful.
@meier_kreis @eripsa @texturaaberta Beyond this, ArXiv is a veritable treasure trove of papers on maths and computer science. In fact, there are a lot of great papers (and even courses) that can be found free online with a quick google. The academic norms about such things are so much better.
I was quite pleased with this as a brief summary of what I take to be the most counterproductive arguments made on the political left. However, it might be worth elaborating on them a bit, so a new thread is needed.
What these arguments have in common is that they're quick and easy discursive tactics which foreclose much better discursive strategies. They are most often used unthinkingly, but there are theoretical positions that transform such *local* tactics into *global* strategies.
Let's begin with the tactic of *naturalisation*. I've explained the problems I have with normative naturalism as a general position elsewhere (deontologistics.wordpress.com/2019/10/06/tfe…), but it's worth analysing the trap involved in even implying some form of it by accident on the local scale.
I increasingly think the Turing test can be mapped onto Hegel’s dialectic of mutual recognition. The tricky thing is to disarticulate the dimensions of theoretical competence and practical autonomy that are most often collapsed in AI discourse.
General intelligence may be a condition for personhood, but it is not co-extensive with it. It only appears to be because a) theoretical intelligence is usually indexed to practical problem solving capacity, and b) selfhood is usually reduced to some default drive for survival.
Restricting ourselves to theoretical competence for now, the Turing test gives us some schema for specific forms of competence (e.g., ability to deploy expert terminology or answer domain specific questions), but it also gives us purchase on a more general form of competence.
I increasingly think that Mark Fisher’s perspective on the politics of mental health can be expanded to the politics of health more generally. It is not simply that social causes of illness are individualised, but that one can be anything but an individual in medical contexts.
The NHS is great at treatment, and in some respects great at rapid diagnosis and response (cf. NHS 111), but the diagnostic system more generally is *completely* fucked, and fucked in ways that disproportionally affect both marginal groups and weird individuals.
Here's one thing I have seen: a friend who was symptomatic for over a year was only diagnosed with cancer when his lymphoma reached stage 4, at which point he had a tumour between his vertebrae and his neck was distended; and only then because my brother suggested it to the GP.