As with so many things in programming, this is a general misconception. Programmers are, after all, human, and humans are vulnerable to false beliefs, hype and misinformation.
OOP and FP are not actually opposites. They’re merely two of the dozens of programming paradigms. It would be like suggesting that OOP and declarative/logic programming are opposites, or that FP and event-driven programming are opposites, or that data-driven programming and parallel computing are opposites. Pure nonsense.
OOP and FP can actually coexist in the same language. Scala is a key example. Both paradigms can be used to write the same program and they are, in fact, complementary.
Moreover, the definitions of OOP and FP are somewhat fluid. Depending on how strictly you define them, they can coexist in many other languages: all it takes is support for encapsulation (of data and associated functions) and lambdas (first-class functions and closures). You don’t even need immutability. Hence, languages like Smalltalk, Python and Ruby are good examples. (Python actually violates one of the basic tenets of OOP: all of its instance variables are public.)
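To make the point concrete, here is a minimal Python sketch (the names, like Account, are mine and purely illustrative) that uses both paradigms at once: encapsulation on one side, first-class functions and immutability on the other.

```python
# Sketch: OO encapsulation and FP-style first-class functions side by side.
from dataclasses import dataclass

@dataclass(frozen=True)          # OO: data + behaviour bundled; frozen => immutable
class Account:
    owner: str
    balance: float

    def deposit(self, amount: float) -> "Account":
        # Returns a new object instead of mutating state (FP-friendly OO)
        return Account(self.owner, self.balance + amount)

# FP: higher-order functions and lambdas operating on those objects
accounts = [Account("ann", 100.0), Account("bob", 250.0)]
apply_fee = lambda a: a.deposit(-5.0)
total = sum(a.balance for a in map(apply_fee, accounts))
print(total)  # 340.0
```

Note that nothing here is exotic: the object style and the functional style compose naturally once mutation is off the table.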
One answer may be found in Category Theory (CT), the branch of mathematics that one could say is most explicitly used by functional programmers (see e.g. Category Theory for Programmers: The Preface). A short and very rough answer is: mathematically, OO and FP are close to being in opposite categories, which does not mean they can’t be complementary.
I came to CT via a pragmatic route that passes from Java through Scala and RDF (the Semantic Web language). Finding CT to be genuinely useful, I found myself wondering more and more how these concepts fit together. [1]
One very interesting discovery I made over the past year of research is that even though CT is used by functional programmers in Haskell all over the place, Prof Bart Jacobs developed elegant CT theories to explain OO programming in the 1990s. His articles (https://scholar.google.co.uk/cit...) are very clearly written and accessible to programmers, as they describe something we all know — OO programming — instead of the complex topological spaces that other CT books often tackle.
It turns out that OO programming is coalgebraic, and that already gives a clue as to why OO can be seen as the opposite, or perhaps the dual, of functional programming: functional programming tends to be more algebraic (see the 1997 book “The Algebra of Programming” http://themattchan.com/docs/algp... or the meetup talk “Why do Functional Programmers always talk about Algebra(s)?” by Adam Rosien). This is a rough picture of the situation, but it is correct enough to be enlightening. This type of difference would explain, in a clean mathematical way, what people mean when they speak of different paradigms here.
Here is a key article:
Jacobs, Bart. "Objects and classes, co-algebraically." Object orientation with parallelism and persistence. Springer, Boston, MA, 1996. 83-103.
https://pdfs.semanticscholar.org...
So what are coalgebras? It turns out they give us the mathematics of processes, of (infinite) streams, and of a dual of induction (a.k.a. recursion) called coinduction. And also the mathematics of OO programming!
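One can get an operational feel for this in Python (a rough sketch, not a formal treatment): a stream coalgebra is a step function taking a state to an observation plus a next state, and unfolding it forever yields an infinite stream.

```python
# Sketch: a coalgebraic view of an infinite stream.
# A stream coalgebra is a function state -> (observation, next_state);
# unfolding it forever yields the stream (coinduction in action).
def unfold(step, state):
    while True:                        # no base case: a stream is defined by
        value, state = step(state)     # observation, not termination
        yield value                    # (the dual of induction)

naturals = unfold(lambda n: (n, n + 1), 0)
print([next(naturals) for _ in range(5)])  # [0, 1, 2, 3, 4]
```

There is no recursion bottoming out here; the stream is characterized entirely by what you can observe next, which is exactly the coalgebraic stance.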
The key concept one starts discovering when dealing with functional programs is the monad. Monads are everywhere. They give pure functions a notion of context that can be used for IO, dealing with time, and all the changeability of a real system. Perhaps one can think of functional programming as monadic programming. In that case a relation between functional and OO is given very neatly by this article:
Jacobs, Bart, and Erik Poll. "Coalgebras and monads in the semantics of Java." Theoretical Computer Science 291.3 (2003): 329-349.
https://pdfs.semanticscholar.org...
This shows that you can look at an OO VM, which is a state machine, as a coalgebra, and that this is mathematically equivalent to looking at it as a state monad! You get from one to the other via a simple algebraic transformation. (Indeed, category theory allows one to start thinking of types the way we thought of algebra in high school: very similar rules apply! This is nicely explained in the 2017 Scala eXchange keynote: The Maths Behind Types.)
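A rough sketch of this correspondence in Python (illustrative, no library assumed): a machine step of type state -> (output, new state) is literally the same thing as a state-monad action, and the monad’s bind simply threads the state through, just as running the machine would.

```python
# Sketch: the same counter seen two ways (illustrative names, no library).
# Coalgebra view: a machine whose step maps state -> (output, new state).
def machine_step(state):
    return state, state + 1          # observe the current count, advance

# State-monad view: a computation is a function state -> (value, state);
# `bind` threads the state through, exactly like running the machine.
def bind(m, f):
    def run(s):
        a, s2 = m(s)
        return f(a)(s2)
    return run

def unit(a):
    return lambda s: (a, s)

tick = machine_step                  # the step *is* a state-monad action
program = bind(tick, lambda a: bind(tick, lambda b: unit((a, b))))
print(program(0))  # ((0, 1), 2)
```

The “simple algebraic transformation” mentioned above is visible here: nothing had to change to move between the two readings, only the point of view.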
Furthermore, where algebras rest on a logic of equations, coalgebras are tied to modal logic: one needs to think in terms of different possible future states of a machine, not all of which can be actualized simultaneously. (See Specifying coalgebras with modal logic.)
Indeed, there is a duality between coalgebras and boolean logic with operators (Kripke-style modal logic), described here: Towards a Duality Result in Coalgebraic Modal Logic
This would explain how Alan Kay ended up with a very modal-logic-looking project such as Worlds (http://www.vpri.org/pdf/rn200800...), which seems to turn the crank one more time, giving OO programmers a way to work with states across worlds.
Less widely known, I think, even in the functional community, is that monads themselves were found to be a type of modal operator as early as 1997, in the more difficult to read Monad as modality. This would explain why they are so useful at dealing with contexts (such as time or IO).
So the answer to the question is that there are deep mathematical reasons explaining the relation between FP and OO programming, which show them to be, if not quite duals of each other, then very close to it.
The relation both of them have to logic raises the question of the exact nature of their relation to the Semantic Web, which also has a notion of context. That is what I am working on now. But one way to think of it is that RDF, the Resource Description Framework, logically describes resources that have state: the state talked about in REST (representational state transfer). So we have a logic of states that allows distributed reasoning to take place. And this, too, can be described using category theory.
[1] I got to CT from OO via a very long route. In the 1990s I started with Java, which I programmed at AltaVista. Later, in 2003, I started working on the Semantic Web (RDF), a declarative, logic-based web language. Around 2008 I moved from Java to Scala, where I discovered pure functional programming and CT via the ancestor of the cats library (typelevel/cats). So as I mentioned, I am coming to CT — which used to be mocked as “abstract nonsense” in the 1980s — via a pragmatic route. I found some concepts, such as the Free Monad, very useful for building a web server for decentralised social networks, and so over time I started looking deeper and deeper into it, and indeed started a PhD, as the question of this thread made me wonder: how are these programming paradigms related: OO, declarative (RDF), and functional…
I hope for all our sakes that I can make this short …
In the latter part of the 50s John McCarthy got more and more interested in what he started to call “Artificial Intelligence”. He was also doing some consulting and this brought him in contact with the SAGE air defense system: large systems of very large computers attached to radar stations and each other and usable by graphical display systems with pointing devices.
John’s reaction was “Every home in America will have one of these”. He could see that the networked computers could be thought of as an “Information Utility” (as a parallel to the existing utilities for electricity, water, gas, etc…) and that the terminals in the homes could provide many kinds of “information services”. Among other things, this got him to advocate that MIT etc do “time-sharing” of their large mainframes …
He also realized that the computer milieu of the 50s — machine code and the new Fortran — did not intersect well with “most people in US homes”. This got him to write a paper in 1958 — “Programs With Common Sense” — and to suggest that what was needed for the user interface was an active semi-intelligent agent — the “Advice Taker” — that could interact with users in their commonsense terms, could learn from “taking advice”, could problem solve on behalf of the user and itself, and so forth (MIT AI Memo 17).
This got him thinking about how to implement such an Advice Taker, whose main mechanisms would be various kinds of logical deductions including those that required actions. There wasn’t much to go on back then but a few gestures at “list processing”, so he decided to invent a language that could be used to make the Advice Taker (and other kinds of robots), and more generally allow symbolic computation to take its place alongside the existing numerical computation.
John was an excellent mathematician and logician, and so he also wanted to come up with “A Mathematical Theory of Computation” to put ideas old and new on a firmer basis.
His result was LISP (for “LISt Processing”). I have written elsewhere about its significance.
Meanwhile, he was pondering just what kind of logic, math, and programming (he thought of these as highly intertwined) could be used to deal with a robot in the real world.
<eliminating detail here> A conflict was between at(robot, Philadelphia) and at(robot, New York), which could not hold simultaneously, but could hold “over time”. This was like the problem of contemporary programming, where variables (and sometimes even files) would be overwritten: basically, letting the CPU of the computer determine “time”.
This destructive processing both allows race conditions and also makes reasoning difficult. John started thinking about modal logics, but then realized that simply keeping histories of changes and indexing them with a “pseudo-time” when a “fact” was asserted to hold, could allow functional and logical reasoning and processing. He termed “situations” all the “facts” that held at a particular time — a kind of a “layer” that cuts through the world lines of the histories. cf McCarthy “Situations, Actions, and Causal Laws” Stanford, 1963 prompted by Marvin Minsky for “Symbolic Information Processing”.
One of the ways of looking at this scheme is that “logical time” was simply to be included in the simulations, and that “CPU time” would not figure into any computation.
<more detail excluded here> This idea did not die, but it didn’t make it into the standard computing fads of that day, or even today. The dominant fad was to let the CPU run wild and try to protect with semaphores, etc. (These have the problem of system lockup, etc., but this weak style is still dominant.)
Systems that have used part or all of John’s insight include Strachey’s CPL, Lucid, Simula, etc. Look at Dave Jefferson’s TimeWarp schemes, Reed’s NetOS, Lamport’s Paxos, the Croquet system, etc.
To pick just one of these: Strachey, in the early 60s, realized that tail recursion in Lisp was tantamount to “a loop with a single simultaneous ‘functional assignment’ ”, and that writing it this way would be much clearer, by bringing together the computation of the *next* values for the variables.
No race conditions are possible, because the right-hand sides of the assignments are all computed using old values of the variables, and the assignment itself furnishes new values for all the variables at once. (Looping and assignment can be clean if separate “time zones” are maintained, etc.)
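Strachey’s observation can be sketched in Python, where tuple assignment is exactly a “single simultaneous assignment”: the right-hand side is evaluated entirely from old values before any variable is updated.

```python
# Sketch: tail recursion vs. a single simultaneous assignment (Fibonacci).
def fib_rec(n, a=0, b=1):
    return a if n == 0 else fib_rec(n - 1, b, a + b)   # tail recursion

def fib_loop(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b   # RHS uses only *old* values; both variables
    return a              # update at once, so no read-after-write race

print(fib_rec(10), fib_loop(10))  # 55 55
```

The two definitions are mechanical transcriptions of each other, which is the point: the loop form gathers the computation of the next values of `a` and `b` into one visible step.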
More mainstream: big data systems use *versions* instead of overwriting, and “atomic transactions”, to avoid race conditions.
Back to McCarthy and — now — objects. One of the things we realized at Parc was that it would be a very good idea to implement as much of John’s “situations” and “fluents” as possible, even if the histories were not kept very long.
For example, this would allow “real objects” to be world-lines of their stable states and they could get to their next stable state in a completely functional manner. In the Strachey sense, they would be “viewing themselves” with no race conditions to get their next version.
This would also be good for the multiple viewing we were starting to use. You really only want views to be allowed on stable objects (/relationships) and this can be done by restricting viewing to already computed “situational layers”.
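A minimal Python sketch of this idea (the names are mine, not from any of the systems mentioned): each version of an object is immutable, the next stable state is derived purely from the previous one, and any view reads only an already-computed layer of the history.

```python
# Sketch (illustrative): an object as a world-line of immutable versions.
# Each "next stable state" is computed purely from the previous one, and
# the history is kept, so viewing and undo are race-free by construction.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Thermostat:
    target: float

history = [Thermostat(target=20.0)]          # pseudo-time 0

def advance(new_target):
    # functional step: derive the next version, never mutate the old one
    history.append(replace(history[-1], target=new_target))

advance(22.5)   # pseudo-time 1
advance(18.0)   # pseudo-time 2
# A "view" can safely read any already-computed situational layer:
print(history[1].target)  # 22.5 (a stable snapshot)
```

Keeping the whole list is the “histories” part; a practical system might garbage-collect old layers, but the programming model stays the same.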
Parc was also experimenting with “UNDO” and the larger community was starting to look at “parallel possible worlds reasoning”.
The act of programming itself also wants to be in terms of “histories and versions”, and systems should be able to be rolled back to previous versions (including “values”, not just code). Cf. Interlisp, and especially the PIE system (done in Smalltalk by Goldstein and Bobrow).
This was another motivation for “deep John” in future systems. I.e. do everything in terms of world-lines and “simulated time”. A recent paper by Alex Warth shows some ways that “Worlds” can be quite fine-grained. http://www.vpri.org/pdf/tr201100...
The last point here is that “Histories R US”. I.e. we need *both* progression in time for most of our ideas and rememberings *and* we also want to reason clearly about how every detail was arrived at (and to advance the system).
John McCarthy showed us how to do this 60 years ago this year and wrote it down for everyone to read and understand.
So: both OOP and functional computation can be completely compatible (and should be!). There is no reason to munge state in objects, and there is no reason to invent “monads” in FP. We just have to realize that “computers are simulators” and figure out what to simulate.
I will be giving a talk on these ideas in July in Amsterdam (at the “CurryOn” conference).
Functional programming is about constructing software applications (programs) by combining components (modules) that model mathematical functions.
Object oriented programming is about constructing programs by combining components (modules) that model real world objects.
Implicit state of objects is an essential concept of OOP, whereas in functional programming, implicit state is strictly forbidden. (Mathematical functions do not have state; state has to be modeled explicitly, and science did so very successfully for the last 500 years.) So OOP and functional programming are opposite styles (paradigms) of building software.
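A tiny sketch of the contrast (illustrative Python, not tied to any particular language discussed here): the OO counter carries implicit state inside the object, while the functional version models the state explicitly and makes the caller thread it through.

```python
# Sketch: the same counter in the two styles (illustrative names).

class Counter:                     # OOP: state lives implicitly in the object
    def __init__(self):
        self._count = 0
    def tick(self):
        self._count += 1
        return self._count

def tick(count):                   # FP: state is explicit, passed in and out
    return count + 1, count + 1    # (result, new state); the caller threads it

c = Counter()
c.tick()
print(c.tick())                    # 2: the object remembers

result, state = tick(0)
result, state = tick(state)
print(result)                      # 2: the caller remembers
```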
A functional programming language has features that support functional programming. An OO programming language has features that support the OOP style. In principle both styles may be realized in any programming language, with or without explicit support. (I wrote many OO-style programs in plain C and my first functional program in assembly language.)
A programming language may support and encourage one of these styles or both, leaving it to its users when to use which. Supporting both increases the complexity of the language and makes it harder to use.
The OO and functional programming style may even be combined in one application, written in whatever language, but it is not particularly easy to do so in a sensible way.
I think this is a great question, and I think it’s because, recently, the terminology, and even the facts*, are being determined by beginners, not experts.
For example, everywhere you look, Java is ‘slow’. All benchmarks show the exact opposite — it’s actually bloody fast — but the overall narrative is that it’s ‘slow’, so it’s ‘slow’.
Java is ‘old’, Python is ‘new’, but Python actually predates Java by about four years.
So, getting back to the original question, yes, FP (maybe not pure FP) and OOP are good partners, I think languages like Swift are great examples of them working together, and complementing each other.
Swift adds some features that help control mutability, and this even helps at a pure-FP level.
FP and OOP really can be the best of friends, and shouldn’t be considered an either/or type thing.
FP is seen as the ‘opposite’ because the narrative is defined by bullshit rather than truth.
Thank you for the good, non-anon question!
*I mean this in an ‘alternative facts’ sort of way.
To add to the other answers’ point: functional and OOP work together most of the time. In C# you now have LINQ, and Scala is another example. So I don’t see them as opposites, because otherwise you would not be able to use them together.
The only place I know of that really cares so much about programming paradigms is computer science textbooks. I think they discuss paradigms mostly to make it easy for someone to learn all of them. I don’t recall reading any book or research saying that any of the paradigms are opposites of each other; mostly they talk about the differences between the paradigms.
But that is changing: within computer science you see a lot more research into multi-paradigm programming languages, because, let’s face it, most programming languages used in the real world are multi-paradigm.
Comparison of multi-paradigm programming languages - Wikipedia
