Here's a further attempt at the tricky task of defining computation spurred on by @peligrietzer. Let's begin with the relation between computation and information processing. All computation is information processing, but not all information processing is computation.
The problem is that 'effective computation', insofar as this is indexed to the equivalence class of computable functions picked out by recursive functions, lambda calculus, and Turing machines, is too narrow to capture everything computational.
This is Abramsky's point (arxiv.org/abs/1604.02603). Even something as seemingly mundane as an operating system is not really computing a function from finite input to finite output. It's a well-behaved non-terminating process.
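To make the point concrete, here's a minimal sketch (all names hypothetical) of the contrast: a terminating function from finite input to finite output, wrapped inside a process that is designed never to terminate, yet remains perfectly well-behaved:

```python
import itertools

def handle_request(request: str) -> str:
    """A terminating computation: finite input -> finite output."""
    return request.upper()

def server(requests):
    """A well-behaved non-terminating process: it never 'returns a result',
    yet every individual interaction it mediates is effectively computable."""
    for request in requests:           # conceptually an endless event stream
        yield handle_request(request)

# The process as a whole computes no function from finite input to finite
# output; it is characterised by its ongoing interactive behaviour.
responses = server(itertools.cycle(["ping"]))
print(next(responses))
```

The generator here is only a stand-in, but it illustrates the shape of the thing: the value of the process lies in its behaviour over time, not in any final output.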
This notion of non-termination appears in the context of Turing machines and lambda calculus, but that doesn't mean that these formalisms effectively capture, let alone exhaust the meaning of the term 'computational process' that can be qualified by 'non-terminating'.
To make progress we need to recognise that the inherently *compositional* character of computational processes is inadequately captured by lambda calculus/TMs. We can build computational processes out of other computational processes, and abstraction is a tool for doing so.
Understanding *what* a computational process computes lets us abstract away from *how* it does so, such that we can use it as a black box component in building a more complicated computational process. This applies to both language (λ) and machine (TM) perspectives.
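This black-boxing is exactly what function composition gives us. A minimal illustration (the component functions are invented for the example):

```python
def compose(f, g):
    """Build a new computational process out of two others, using only
    *what* they compute (input/output behaviour), never *how*."""
    return lambda x: g(f(x))

# Neither the composer nor the caller needs to know how these work inside.
double = lambda n: n * 2
describe = lambda n: f"result: {n}"

pipeline = compose(double, describe)
print(pipeline(21))
```

The same abstraction works whether the components are λ-terms, machines, or anything else with a specifiable input/output behaviour.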
The natural question is then: can we build computational processes that cannot simply be described as computing functions from finite inputs to finite outputs purely out of computational processes that can be so described?
Furthermore, would such a process be accurately described as an algorithm, even though it was non-terminating?
It seems to me that the answer to both questions is *obviously* yes. What else is an operating system? The main problem is that we don't have an equivalence result governing frameworks for composing processes comparable to that indexed by the Church-Turing thesis.
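A toy 'operating system' makes the affirmative answer vivid (every name here is illustrative): an unbounded dispatch loop built entirely out of terminating sub-computations, which as a whole computes no function from finite input to finite output.

```python
from collections import deque

# Each handler is a classically computable function: finite in, finite out.
handlers = {
    "open":  lambda arg: f"opened {arg}",
    "close": lambda arg: f"closed {arg}",
}

def kernel(event_queue: deque):
    """An unbounded dispatch loop composed entirely of terminating pieces.
    Replace the break with a blocking wait for the next event and it runs
    forever, well-behavedly, without ever 'finishing' a computation."""
    log = []
    while True:
        if not event_queue:
            break  # stand-in for blocking until the next event arrives
        name, arg = event_queue.popleft()
        log.append(handlers[name](arg))
    return log

print(kernel(deque([("open", "file.txt"), ("close", "file.txt")])))
```

Each step is an algorithm in the classical sense; the loop that glues them together is, I'd argue, no less algorithmic for being non-terminating.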
What we have is a bunch of different term rewriting calculi (comparable to λ: e.g., the π-calculus), machine models (comparable to TMs: e.g., Hewitt's Actor Model), and process algebras (e.g., Milner's Calculus of Communicating Systems), but no overarching framework that unites them.
My personal view is that the one true overarching framework would essentially be a fully realised cybernetics, with a unified theory of control and semantic information (deontologistics.wordpress.com/2019/11/01/tfe…). Yet, I think we can still make some imprecise but correct claims in its absence.
Here's an imprecise recursive definition: any process that processes information, and is built entirely out of other computational processes, for some satisfactory definition of 'build', is itself a computational process, even if it doesn't compute a 'computable' function.
The question is then whether we can relax the constraint that a process must be 'entirely' made out of other computational processes in order to qualify as computational. Could it incorporate non-computational information processing subsystems and still be computational?
This brings us back to cybernetics and the theory of control systems. There are plenty of working 'hybrid systems' in which an underlying dynamic process is yoked to a control process that is computational in the strict sense defined by the above recursive definition.
Are these systems, considered as a whole, computational processes or not? Consider a robot that navigates an environment and carries out certain non-computational tasks by applying algorithmic logic to convert sensory information into control signals driving its actuators.
Insofar as we can abstract the functional roles played by its sensors and actuators from the details of their implementation in the same manner we can regular computational subsystems, I see no reason why we can't call this an 'interactive' computational process.
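Here's a sketch of such a hybrid system, under stated assumptions: the controller is a toy proportional controller, and `Environment`, `control_step`, and the `gain` constant are all hypothetical stand-ins invented for the example.

```python
def control_step(sensor_reading: float, setpoint: float = 0.0) -> float:
    """The strictly computational core: a terminating function from
    sensory information to a control signal (toy proportional control)."""
    gain = 0.5  # illustrative constant
    return gain * (setpoint - sensor_reading)

def robot(sense, actuate, steps: int):
    """The hybrid whole: sensors and actuators enter only via their
    functional roles, abstracted exactly like software components."""
    for _ in range(steps):
        actuate(control_step(sense()))

class Environment:
    """Stand-in for the robot's non-computational surroundings."""
    def __init__(self, position: float):
        self.position = position
    def sense(self) -> float:
        return self.position
    def actuate(self, signal: float) -> None:
        self.position += signal  # the dynamics the controller is yoked to

env = Environment(position=8.0)
robot(env.sense, env.actuate, steps=3)
print(env.position)  # converges toward the setpoint: 8.0 -> 4.0 -> 2.0 -> 1.0
```

Note that `robot` never touches the environment's dynamics directly; it interacts with two black boxes, which is precisely the abstraction the argument turns on.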
We can already build computational processes out of interacting computational processes by composing them into concurrent communicating systems. The only difference with the robot is that it engages in some 'non-communicative' interaction with its environment.
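A minimal sketch of such a concurrent communicating system, using threads and queues as a stand-in for a proper process calculus (the producer/doubler pipeline is invented for illustration):

```python
import queue
import threading

def producer(out: queue.Queue):
    """One process: emits a finite stream, then an end-of-stream sentinel."""
    for n in range(3):
        out.put(n)
    out.put(None)

def doubler(inbox: queue.Queue, out: queue.Queue):
    """Another process: communicates with the first only via channels."""
    while (n := inbox.get()) is not None:
        out.put(n * 2)
    out.put(None)

# Compose the two processes into a communicating system.
a, b = queue.Queue(), queue.Queue()
threading.Thread(target=producer, args=(a,)).start()
threading.Thread(target=doubler, args=(a, b)).start()

results = []
while (n := b.get()) is not None:
    results.append(n)
print(results)
```

Each component interacts with the other only through its channels; the robot differs just in that one of its 'channels' is a physical coupling to the world rather than a message queue.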
What we have here is a condition under which we can relax the term 'entirely' in the above recursive definition. Some types of functional components that involve *taking* information from and *feeding* information into an environment can be used to build computational processes.
There is still much more to be said. We might want to understand how *feedback loops* created by these forms of input from and output to a non-computational environment enable us to talk about *what* a computational process is computing, or *what* information it is processing.
Crucially, we might want a full-blooded theory of semantic information which allowed us to say something like 'the robot is computing a solution to the problem of navigating *this* obstacle', or 'the AI is mulling over information it has received *about* the stock market'.
I'm not claiming to have such a theory, but I do think you can begin to see how we might get to one: take the imprecise recursive definition of a computational process I've provided, expand it in the ways I've suggested, and make it precise.
So, that's where I currently stand on the fundamental question in the philosophy of computer science. Norbert Wiener did nothing wrong.
Why must my attempt to understand and enhance the constitutive conditions of my own freedom be interpreted as *complicity* with those who attempt to understand, manipulate, and thereby diminish the freedom of others? Why can't it be solidarity? Seriously?
I have this same argument over and over and over again. My commitment to understand and enhance freedom (Prometheanism) is thrown back in my face, like I'm a collaborator preparing the populace for the computational panopticon being assembled around them.
I apologise for taking the quote out of context, but no matter where it begins, the argument always seems to arrive at some variant of Lorde's claim that "the master's tools will never dismantle the master's house."
Excellent thread that lines up with some observations I’ve made on here recently. What’s interesting is that it’s possible to find an academic niche where all you really do is express these legitimation/delegitimation narratives, to varying degrees of explicitness.
This is basically what's responsible for the proliferation of terms like 'post-structuralism' in the humanities, which is a very loose and thematically suspect label not avowed by any of the figures it is supposed to group.
However, there's a niche to be filled articulating the narrative that compresses the messy history into a set of methodological ideals that might organise a research project in some humanities (or adjacent) discipline.
Okay, I promised a quick introduction to the history of the terms 'metaphysics' and 'ontology', so I'll try to provide it in as concise a way as possible. However, this will involve going all the way back to the Presocratics, so you've been warned in advance.
Let's start with Being, which means actually starting before Being, oddly enough. Beginning with Thales, the Ionian physiologoi searched for an arche, or fundamental principle that would let them understand the dynamics of nature. What is conserved across change: water, air, etc.
There are a bunch of abstract distinctions that emerge at this point, and get related in a variety of ways: persistence/change, unity/multiplicity, reality/appearance, etc. These are interesting in the Ionians, but it's Heraclitus and Parmenides who really synthesise them.
To synthesise some of what I've been saying about critique with @Aelkus's comments about expertise, and some of my earlier griping about Anglophone Continental philosophy, the problem is that 'critique' can be a way of perpetually suspending a debate one doesn't want to have.
This is an important point for the epistemology of ignorance, wherein we recognise that ignorance is not the absence of knowledge, but a positive inability/unwillingness to learn things one does not wish to learn, sustained by unconscious biases and conscious techniques alike.
Far too much 'critique' consists in using techniques that generate discursive equipollence (an equity between P and not P) for the purpose of forwarding the argument by other means, with no intention of forwarding anything. Equipollence is no longer a means, but an end in itself.
Since I've just done a deep dive into CS on my timeline, it might help if I frame a question that I think you need all the relevant distinctions I just made in order to understand properly: what type of computational process is a mind?
There are many complaints made about classical computational theory of mind, but few of them come from the side of computer science. However, in my view, the biggest problem with first and second generation computationalists is a too narrow view of what computation is.
Consider this old chestnut: "Gödel shows us that the mind cannot be a computer, because we can intuit mathematical truths that cannot be deduced from a given set of axioms!"
@meier_kreis @eripsa @texturaaberta I can’t say I’ve read both of these through, but they’re good reference texts with exercises and such if that’s your thing. The first has an intro to set theory and metalogic toward the end, the second builds up from recursive functions and Turing machines to Gödel’s proofs.
@meier_kreis @eripsa @texturaaberta To be quite honest, most of my maths knowledge comes from spending too much time on nLab, which means I’ve got a much better grip on high level relations between fields and concepts than on practical techniques for proving things. Still, this can be philosophically useful.
@meier_kreis @eripsa @texturaaberta Beyond this, ArXiv is a veritable treasure trove of papers on maths and computer science. In fact, there are a lot of great papers (and even courses) that can be found free online with a quick google. The academic norms about such things are so much better.