@meier_kreis@eripsa@texturaaberta I can’t say I’ve read both of these through, but they’re good reference texts with exercises and such if that’s your thing. The first has an intro to set theory and metalogic toward the end; the second builds up from recursive functions and Turing machines to Gödel’s proofs.
@meier_kreis@eripsa@texturaaberta To be quite honest, most of my maths knowledge comes from spending too much time on nLab, which means I’ve got a much better grip on high level relations between fields and concepts than on practical techniques for proving things. Still, this can be philosophically useful.
@meier_kreis@eripsa@texturaaberta Beyond this, arXiv is a veritable treasure trove of papers on maths and computer science. In fact, there are a lot of great papers (and even courses) that can be found free online with a quick google. The academic norms around open access in these fields are so much better.
@meier_kreis@eripsa@texturaaberta The key thing to get one’s head around is the equivalence between different models of computation, and the more general fields/systems they’re situated in, principally: recursive functions, Turing machines (automata theory), and lambda calculus (term rewriting systems).
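To make the equivalence concrete, here's a toy sketch in Python (my own illustration, not from any of the texts above): the same function, unary addition, computed once as a primitive recursive definition and once by a tiny Turing machine over a unary tape. All names are made up for the example.

```python
def add_recursive(m, n):
    """Addition as a primitive recursive function:
    add(m, 0) = m; add(m, n+1) = succ(add(m, n))."""
    if n == 0:
        return m
    return add_recursive(m, n - 1) + 1

def add_turing(m, n):
    """Addition on a toy Turing machine over the tape '1'*m + '+' + '1'*n.
    Strategy: overwrite '+' with '1', then erase the final '1'."""
    tape = ['1'] * m + ['+'] + ['1'] * n + ['_']  # '_' is the blank symbol
    head, state = 0, 'seek_plus'
    while state != 'halt':
        sym = tape[head]
        if state == 'seek_plus':          # scan right for the '+'
            if sym == '+':
                tape[head] = '1'
                state = 'seek_blank'
            head += 1
        elif state == 'seek_blank':       # scan right for the blank
            if sym == '_':
                state = 'erase'
                head -= 1
            else:
                head += 1
        elif state == 'erase':            # erase one '1' and halt
            tape[head] = '_'
            state = 'halt'
    return tape.count('1')
```

Both compute m + n; the interesting part is that this agreement is provable for *every* total function either model can express, which is what the equivalence theorems say.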
@meier_kreis@eripsa@texturaaberta This gives you the basic formal and conceptual purchase on the two ways we usually think about computation, namely, in terms of machines, and in terms of languages. Exemplars of these perspectives in CS are Scott Aaronson and Robert Harper.
@meier_kreis@eripsa@texturaaberta If you want to understand the Curry-Howard(-Lambek) correspondence and how that forms the basis for things like HoTT, then Harper’s lectures are great. If you want to understand the relation between computability theory and computational complexity theory, Aaronson is your guy.
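The core of Curry-Howard is just: propositions are types, proofs are programs. A real demo belongs in Agda/Coq/Haskell, but even Python's type-hint fragment can gesture at it (a sketch of my own, not anyone's official example):

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar('A')
B = TypeVar('B')
C = TypeVar('C')

# Proposition: A ∧ B → B ∧ A. Its proof is the program 'swap':
# any total function of this type witnesses the implication.
def swap(p: Tuple[A, B]) -> Tuple[B, A]:
    a, b = p
    return (b, a)

# Proposition: (A → B) → ((B → C) → (A → C)), i.e. implication is
# transitive. Its proof is function composition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```

Python's type system is far too weak to make this a real correspondence (it won't reject fake "proofs"), but the shape of the idea is there: conjunction is pairing, implication is the function arrow.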
@meier_kreis@eripsa@texturaaberta The other person I will mention here is Samson Abramsky, who is a complete all-rounder and philosophically deep thinker on the question of what computation (and information) is, and the challenges the field still faces: arxiv.org/abs/1604.02603
@meier_kreis@eripsa@texturaaberta His distinction between computations in which *what* is computed is important (i.e., batch programs) and those in which *how* they behave is important (i.e., online systems) is crucial, and often not sufficiently appreciated by people who apply CS in other fields.
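A caricature of the distinction in code (my own toy rendering, not Abramsky's): a batch program is judged by its final value, while an online system is judged by its ongoing interaction, here modelled with a Python generator.

```python
def batch_sum(xs):
    """Batch: all input up front, one answer out; only *what* matters."""
    return sum(xs)

def running_sum():
    """Online: an unbounded conversation; *how* it responds at each
    step is the whole point — there is no final answer."""
    total = 0
    while True:
        x = yield total   # emit current state, wait for the next input
        total += x
```

Note the online version never terminates by design: asking "what does it compute?" already imports the batch perspective.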
@meier_kreis@eripsa@texturaaberta That should also give you a taste of the difficulties posed by concurrent computation, which is a topic with some deep philosophical subtleties and no unitary formal description. It's often not explained well, nor is the difference between it and parallel computation.
@meier_kreis@eripsa@texturaaberta One way to get a feel for it is to look at the dominant machine model of concurrent computation (analogous to TMs): Hewitt's actor model (en.wikipedia.org/wiki/Actor_mod…). Beware, Hewitt can get a little cranky around the edges, but there's something philosophically significant in there.
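A minimal single-threaded sketch of the actor idea (my own toy, far short of Hewitt's full model, which also covers actor creation and genuine concurrency): each actor has a private mailbox and local state, and reacts to one message at a time, possibly sending messages to other actors.

```python
from collections import deque

class Actor:
    """An actor = mailbox + behaviour + private state."""
    def __init__(self, behaviour, state):
        self.mailbox = deque()
        self.behaviour = behaviour
        self.state = state

def counter(actor, msg, send):
    """A counter actor: 'inc' bumps its state; 'get' replies with it."""
    kind, *args = msg
    if kind == 'inc':
        actor.state += 1
    elif kind == 'get':
        reply_to, = args
        send(reply_to, ('value', actor.state))

def sink(actor, msg, send):
    """A sink actor that just records everything it receives."""
    actor.state.append(msg)

def run(actors):
    """Drain mailboxes one message at a time until quiescent.
    The interleaving order is a scheduling choice — that freedom is
    exactly where concurrency's subtleties live."""
    def send(actor, msg):
        actor.mailbox.append(msg)
    progress = True
    while progress:
        progress = False
        for a in actors:
            if a.mailbox:
                a.behaviour(a, a.mailbox.popleft(), send)
                progress = True
```

The philosophically loaded part is in the `run` docstring: the model fixes what each actor does with a message, but deliberately underdetermines the global order of delivery.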
@meier_kreis@eripsa@texturaaberta In particular, this will force you to get a grip on the difference between stateless and essentially stateful computation. The former is principally the province of languages and the latter the province of machines.
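The stateless/stateful contrast in miniature (again my own toy example): a stateless function's output is fixed by its input alone, while a stateful object can give different answers to the very same call.

```python
def double_all(xs):
    """Stateless: same input, same output, always."""
    return [2 * x for x in xs]

class Toggle:
    """Essentially stateful: the 'same' call, press(), alternates
    its answer, because the answer depends on hidden history."""
    def __init__(self):
        self.on = False
    def press(self):
        self.on = not self.on
        return self.on
```

No stateless function can replicate `press()` without being handed the history explicitly, which is precisely the move language-side formalisms make when they simulate state.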
@meier_kreis@eripsa@texturaaberta It also might introduce you to the idea of non-deterministic computation, which is another significant distinction best thought about from the machine side, i.e., automata theory.
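From the machine side, the cleanest entry point is the non-deterministic finite automaton (NFA): the machine may "guess", and we simulate it by tracking the *set* of states it could be in. A toy of my own devising, accepting strings over {a, b} that end in "ab":

```python
# Transition relation: (state, symbol) -> set of possible next states.
NFA = {
    ('q0', 'a'): {'q0', 'q1'},   # non-deterministic guess: is this
    ('q0', 'b'): {'q0'},         # 'a' the start of a final 'ab'?
    ('q1', 'b'): {'q2'},
}
START, ACCEPT = 'q0', {'q2'}

def accepts(s):
    """Simulate all of the NFA's guesses at once (on-the-fly
    subset construction)."""
    states = {START}
    for ch in s:
        states = set().union(*(NFA.get((q, ch), set()) for q in states))
    return bool(states & ACCEPT)
```

The machine "accepts if *some* run accepts" — that existential quantifier over runs is what non-determinism amounts to, and it's also what the P vs NP question turns on when you add resource bounds.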
@meier_kreis@eripsa@texturaaberta Coming at the topic from logic, you'll no doubt be much more inclined towards the language perspective, because it's there that logic becomes useful in articulating type theories that guarantee programs will behave well if they are written well. The domain of CH correspondences.
@meier_kreis@eripsa@texturaaberta Crucially, these involve a range of term-rewriting systems beyond typed lambda calculi. I personally hate these things, as I find the syntax a nightmare to read, but you can get a grip on the concepts without mastering the methods.
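For what a term-rewriting system actually is, stripped of lambda syntax, here's a toy of my own: Peano addition as two rewrite rules, with a normaliser that applies them until no redex remains. Terms are nested tuples: 'Z', ('S', t), ('add', t, u).

```python
def rewrite(t):
    """One bottom-up rewriting pass; returns (term, changed?)."""
    if isinstance(t, tuple):
        head, *args = t
        new_args, changed = [], False
        for a in args:                 # rewrite subterms first
            na, c = rewrite(a)
            new_args.append(na)
            changed = changed or c
        t = (head, *new_args)
        if changed:
            return t, True
        if head == 'add':
            m, n = t[1], t[2]
            if m == 'Z':               # rule 1: add(Z, n) -> n
                return n, True
            if isinstance(m, tuple) and m[0] == 'S':
                # rule 2: add(S(m), n) -> S(add(m, n))
                return ('S', ('add', m[1], n)), True
    return t, False

def normalise(t):
    """Rewrite to normal form (this system is terminating)."""
    changed = True
    while changed:
        t, changed = rewrite(t)
    return t

def num(n):
    """Encode an int as a Peano numeral term."""
    return 'Z' if n == 0 else ('S', num(n - 1))
```

Typed lambda calculi are just rewriting systems of this kind with a much richer term language, and the metatheory (confluence, termination) is what underwrites the "programs behave well" guarantees.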
@meier_kreis@eripsa@texturaaberta However, there are a range of connections between logics and machines that get exploited in formal verification. The Abramsky paper talks about some (such as dynamic logic), but this paper also gives a great historical overview of the field: homepages.inf.ed.ac.uk/wadler/papers/…
@meier_kreis@eripsa@texturaaberta I think that's everything for now. Maybe a little overkill, but there'll be something in there you find useful. The other thing to do here is do some programming, if you don't already. This is one step I've struggled to take myself, beyond playing with a bit of Haskell.
Since I've just done a deep dive into CS on my timeline, it might help to frame a question which, I think, you need to appreciate all the relevant distinctions I just made in order to properly understand: what type of computational process is a mind?
There are many complaints made about classical computational theory of mind, but few of them come from the side of computer science. However, in my view, the biggest problem with first and second generation computationalists is a too narrow view of what computation is.
Consider this old chestnut: "Gödel shows us that the mind cannot be a computer, because we can intuit mathematical truths that cannot be deduced from a given set of axioms!"
I was quite pleased with this as a brief summary of what I take to be the most counterproductive arguments made on the political left. However, it might be worth elaborating on them a bit, so a new thread is needed.
What these arguments have in common is that they're quick and easy discursive tactics which foreclose much better discursive strategies. They are most often used unthinkingly, but there are theoretical positions that transform such *local* tactics into *global* strategies.
Let's begin with the tactic of *naturalisation*. I've explained the problems I have with normative naturalism as a general position elsewhere (deontologistics.wordpress.com/2019/10/06/tfe…), but it's worth analysing the trap involved in even implying some form of it by accident on the local scale.
I increasingly think the Turing test can be mapped onto Hegel’s dialectic of mutual recognition. The tricky thing is to disarticulate the dimensions of theoretical competence and practical autonomy that are most often collapsed in AI discourse.
General intelligence may be a condition for personhood, but it is not co-extensive with it. It only appears to be because a) theoretical intelligence is usually indexed to practical problem solving capacity, and b) selfhood is usually reduced to some default drive for survival.
Restricting ourselves to theoretical competence for now, the Turing test gives us some schema for specific forms of competence (e.g., ability to deploy expert terminology or answer domain specific questions), but it also gives us purchase on a more general form of competence.
I increasingly think that Mark Fisher’s perspective on the politics of mental health can be expanded to the politics of health more generally. It is not simply that social causes of illness are individualised, but that one can be anything but an individual in medical contexts.
The NHS is great at treatment, and in some respects great at rapid diagnosis and response (cf. NHS 111), but the diagnostic system more generally is *completely* fucked, and fucked in ways that disproportionately affect both marginal groups and weird individuals.
Here's one thing I have seen: a friend who was symptomatic for over a year was only diagnosed with cancer when his lymphoma reached stage 4, at which point he had a tumour between his vertebrae and his neck was distended; and only then because my brother suggested it to the GP.
Here are a few more thoughts on economics, following up from last night's monster thread on money, infrastructure, and the going price of power.
Allow me to explain my take on Marx in a little bit more detail, in order to articulate the way he binds the descriptive (explanatory/predictive) and normative (ethical/political) elements of his theory, the consequences this has, and what's good/bad in this from my perspective.
Disclaimer: I am still in the process of really getting to grips with Marx and the tradition built around him. I am by no means ignorant, but I do not devote myself to the study of it as if it were rabbinical law or hermetic lore. As such, I will only attend to certain responses.
Okay, here's a slightly different twitter thread from the usual. I want to say something concrete about the way in which capital works as an incentive structure that's supposed to co-ordinate human behaviour to solve resource allocation problems.
I think that the left has a fraught relationship with the concept of money. The two opposing poles that configure this relationship are something like: i) money is the root of all evil, and must be abolished; and ii) money is a neutral and homogeneous medium, and can be ignored.
Of course, there are a lot of intermediate positions here, but they tend to be arrayed along a line between the poles, rather than rejecting the presuppositions underpinning the polarity itself.