You may be doing FP already

Functional programming (FP) has been a somewhat hot topic in my surroundings in recent years. Although people close to me are usually not zealous or overly excited, I do sometimes notice talks, discussions and vocally expressed opinions claiming tremendous benefits of FP over other paradigms, presenting FP as a fundamentally new approach to writing code, the opposite of, say, OOP. Some go as far as saying that one must fully ditch OOP and all related experience because it's all just a mistake.

Such extreme claims are rarely challenged. When a person with an OOP background gets involved in any FP-related discussion, it quickly gets overwhelmed with new unfamiliar vocabulary: algebras, monads, semigroups, functors, type classes…​ And I start questioning the intent of those claims: are they made to make people shift to FP, or to prove that some definition of FP is better than some definition of OOP?

With this post, I hope to show that FP can be seen as an evolutionary step, a generalization of practices we already consider the best in the OOP paradigm. Viewed like this, it can be useful even when adopted gradually. I hope this point of view will persuade programmers to look into FP and to borrow new techniques from it to improve non-FP code.

FP concepts we already use in OOP

I want to examine concepts that are claimed (by some) to be unique to FP but are actually used in non-FP code too.

Pure functions

For starters, let's examine what is probably considered the core concept of FP — pure functions.

A pure function must satisfy 2 properties:

  • its return value is the same for the same arguments,

  • its evaluation has no side effects.
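To make the two properties concrete, here is a minimal sketch (the class and method names are mine, purely for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

class Purity {
    static final AtomicInteger counter = new AtomicInteger();

    // Pure: the result depends only on the arguments, and nothing else happens.
    static int add(int a, int b) {
        return a + b;
    }

    // Impure: the result depends on hidden mutable state (property 1 broken),
    // and each call changes that state (property 2 broken).
    static int addAndCount(int a, int b) {
        counter.incrementAndGet();     // side effect
        return a + b + counter.get();  // return value varies between calls
    }
}
```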

Let’s consider a simple application.

public class TravellingApp {

  private static void errExit(Throwable t, int code) {
    t.printStackTrace(System.err);
    System.exit(code);
  }

  public static void main(String[] args) {
    Cli cli = null;
    try {
      cli = parseCli(args);
    } catch (CliParsingException e) {
      errExit(e, 1);
    }

    Config config = null;
    try {
      config = readConfig(cli.configPath);
    } catch (IOException ioe) {
      errExit(ioe, 2);
    }

    try {
      // ... the actual travelling logic ...
    } catch (WorldIsFlatException wife) {
      errExit(wife, 3);
    } catch (TimeoutException te) {
      errExit(te, 4);
    }
  }
}
Even if we ignore the verbose error handling, this code is horrible because it is hard to test. One could assign a custom PrintStream to System.err and System.out to verify what the program prints, but verifying the exit code becomes super hard (on top of that, System.exit() will cause the JVM running the tests to exit).

A simple way around that is moving all the logic from main to some run method with a more testable signature, one that returns the exit code — int run(String[] args). To avoid the global mutable state in our testing code (the System.out and System.err values), we may want to pass PrintStream instances into our run(), and we'll end up with this:

public class TravellingApp {

  private static int exit(PrintStream ps, Throwable t, int code) {
    t.printStackTrace(ps);
    return code;
  }

  public static void main(String[] args) {
    System.exit(run(args, System.out, System.err));
  }

  public static int run(String[] args, PrintStream out, PrintStream err) {
    try {
      Cli cli = parseCli(args);
      // ... the rest of the logic, as before ...
    } catch (CliParsingException e) {
      return exit(err, e, 1);
    }
    return 0;
  }
}

Now, looking at this code, an OOP programmer would say: «run() is more testable». An FP programmer would say: «run() has fewer side-effects».

run() is still not pure because it changes the state of its input arguments out and err. One could make it pure by making run() return a tuple (or, for the lack of those in Java, a class) holding an int to serve as the exit code and 2 strings to be printed to the out and err streams. But I believe an average OOP programmer would not go that far.
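For illustration, that fully pure variant could look roughly like this (the RunResult class, its fields, and the toy argument handling are my inventions, not part of the original example):

```java
final class RunResult {
    final int exitCode;
    final String out;  // what would have been printed to stdout
    final String err;  // what would have been printed to stderr

    RunResult(int exitCode, String out, String err) {
        this.exitCode = exitCode;
        this.out = out;
        this.err = err;
    }
}

class PureApp {
    // Pure: nothing is printed and no process exits; the caller decides
    // what to do with the exit code and the two output strings.
    static RunResult run(String[] args) {
        if (args.length == 0) {
            return new RunResult(1, "", "usage: travelling-app <config>");
        }
        return new RunResult(0, "travelling with " + args[0], "");
    }
}
```

In a test the whole observable behavior of the program becomes a plain value that can be compared with a simple equality check, with no stream capturing or JVM tricks required.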

At this point it's easy to observe that testability is achieved through eliminating side-effects. Pureness is a stricter requirement, of course: in OOP code you'd see many more testable functions than pure functions (or, if you wish, computations). But pure computations do happen in OOP code.

It is interesting that testability lies at the heart of an extremely popular technique among OOP programmers — dependency injection (DI). I still occasionally meet people who associate DI with magical frameworks like Spring and Guice. It is, however, a simple technique: inject all dependencies needed for your computation, do not let your computations produce those dependencies.

class NonDiService {
  Dao dao = new Dao(); // constructs its own dependency
}

class DiService {
  Dao dao; // the dependency is injected

  DiService(Dao dao) {
    this.dao = dao;
  }
}
service = new NonDiService() and service = new DiService(dao) are manifestations of the same pattern we saw with main(args) and run(args, out, err), and I hope it is evident that DI converges to pure computations and may occasionally result in pure computations.
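Here is a minimal sketch of why the injected variant is the testable one (the Dao interface, its load() method and the greet() logic are invented for illustration):

```java
interface Dao {
    String load(int id);
}

class DiService {
    private final Dao dao;

    DiService(Dao dao) {  // the dependency is injected, not constructed
        this.dao = dao;
    }

    String greet(int id) {
        return "hello, " + dao.load(id);
    }
}

// In a test, a hand-written fake replaces the real Dao: no framework needed,
// no database needed, the computation becomes a function of its inputs.
class FakeDao implements Dao {
    public String load(int id) {
        return "user-" + id;
    }
}
```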

OOP does not require pure computations; it is usually enough to have a testable computation. The set of testable computations is a much broader category than the set of pure computations. But

testability, like pureness, is achieved through eliminating side-effects, and occasionally pure computations happen in OOP code.

Immutable data structures

Immutable data structures are an inherent feature of FP. However, saying that you don't have them in OOP would be just silly. Here are just a few examples from the Java world:

  • Joshua Bloch in his Effective Java suggests minimizing mutability[1],

  • extremely popular AST sugaring library Lombok offers special annotation for generating immutable value classes @Value [2],

  • in many companies (most I worked for) Java CI pipeline integrates some static analyzers, like FindBugs, that will remind you to make defensive copies of non-primitive types passed as arguments to avoid unexpected mutability,

  • immutable collections are offered by the standard library and many 3rd-party libraries (Guava, Eclipse Collections)…​
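As a sketch, here is a typical hand-rolled immutable value class combining several of the points above (final class, final field, defensive copy); the Route/waypoints naming is mine:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class Route {                        // final: no subclass can reintroduce mutability
    private final List<String> waypoints;  // final: the field is never reassigned

    Route(List<String> waypoints) {
        // defensive copy: later mutation of the caller's list cannot leak in
        this.waypoints = Collections.unmodifiableList(new ArrayList<>(waypoints));
    }

    List<String> waypoints() {
        return waypoints;  // already unmodifiable, safe to share
    }
}
```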

Immutability may not be as ubiquitous in other paradigms as it is in FP, but it is certainly not alien there.

OOP programmers are well familiar with the concept of immutability and understand its benefits and its price very well.

Avoiding inheritance

In one of his blog posts John A De Goes says[3] that

object-oriented programming—by which I mean inheritance hierarchies (typified by the Scala collections inheritance hierarchy) and subtyping (beyond its use for modeling sum types, modules, and type classes)—isn’t useful.

I won't argue with that claim. But I want to highlight that the target of his criticism is inheritance, not, strictly speaking, OOP. Yes, many books say that inheritance is one of the pillars of OOP, but some don't.

Alan Kay, who is considered the inventor of OOP, starts his (quite hard to grok, to be honest) response about inheritance with this[4]:

I initially liked the idea — it could be useful — but soon realized that something that would be “mathematically binding” was really needed because the mechanism itself let too many semantically different things to be “done” (aka “kluged”) by the programmer.

Polemics about definitions are rarely productive, so let's see what happens in practice. Joshua Bloch argued for composition and delegation over inheritance almost 20 years ago too[5]. Maybe 10 years ago I would still occasionally see 7 levels of inheritance in new code, but people don't do this anymore. I've seen Java projects where programmers preferred code duplication over inheritance, and they believed they were doing OOP. Modern non-FP languages (Go, Rust) do not even offer this design facility.
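A sketch of the composition-and-delegation alternative, in the spirit of Bloch's forwarding-wrapper example (the class and method names here are mine):

```java
import java.util.List;

// Instead of extending a list class (and coupling to its self-use patterns),
// wrap one and forward only the calls we care about.
class CountingShipLog {
    private final List<String> entries;  // the wrapped component
    private int addCount = 0;

    CountingShipLog(List<String> entries) {
        this.entries = entries;
    }

    void add(String entry) {  // forwarding instead of overriding
        addCount++;
        entries.add(entry);
    }

    int addCount() { return addCount; }
    int size() { return entries.size(); }
}
```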

Inheritance just proved itself practically inconvenient in many cases, and this is not unique to FP.

How OOP design patterns relate to FP


Let’s consider the infamous Factory pattern (Abstract Factory to be precise)[6]. It is useful when we have to construct an object from some data. However, there are 2 complications:

  • the data is scattered through the different stages of program execution (part of the data may be even available in compile-time only); and

  • the object has to be constructed at the execution stage, where we don’t have access to full data.

interface ShipFactory {
  Ship build(int capacity);
}

class BoatFactory implements ShipFactory {
  BoatFactory(WoodSpecies ws) { ... }
  public Ship build(int capacity) { ... }
}

class YachtFactory implements ShipFactory {
  YachtFactory(DriveType dt, PremiumPackage pp) { ... }
  public Ship build(int capacity) { ... }
}

class TravellingApp {
  public static void main(String[] args) {
    ShipType st = getShipType(args);

    ShipFactory sf;
    switch (st) { (1)
      case Galley: {
        BoatConfig c = readBoatConfig(getConfigPath());
        sf = new BoatFactory(c.woodSpecies); (2)
        break;
      }
      case Yacht: {
        YachtConfig c = readYachtConfig(getConfigPath());
        sf = new YachtFactory(c.driveType, c.premiumPackage); (2)
        break;
      }
    }

    assembleExpeditions(sf);
  }

  static void assembleExpeditions(ShipFactory sf) {
    int newCrewSize;
    while ((newCrewSize = readCrewSizeBlocking()) > 0) {
      Ship s =; (3)
    }
  }
}

At (3) we no longer care about the concrete factory type, and (3) may happen in a different class, be invoked in a different thread and much later than (2), where a factory is constructed according to the desired type we inspect at (1).

Combined together, (2) and (3) form a curried function:

new BoatFactory(c.woodSpecies).build(newCrewSize);
new YachtFactory(c.driveType, c.premiumPackage).build(newCrewSize);

or, written as curried functions in Scala notation:

def boatFactory(ws: WoodSpecies)(crewSize: Int): Ship
def yachtFactory(dt: DriveType, pp: PremiumPackage)(crewSize: Int): Ship

Notice that in the original OOP code (1) and (2) can't be scattered in the code like (2) and (3) can. If we wanted to do that, we'd need another factory (and some refactoring to make the types play well, but that's a different story). This is how you get ShipFactoryFactory, and one can continue this infinitely long. An important outcome here is that

a factory is 2 functions chained together into a polymorphic curried function.

Composition of factories, like composition of functions, is also technically possible; however, it rarely happens in practice (and is frowned upon) due to the convoluted, mind-bending semantics and the amount of boilerplate code such composition generates.
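With java.util.function this correspondence can be written down directly; the toy Ship and WoodSpecies types below are simplified stand-ins for the ones in the example above:

```java
import java.util.function.Function;
import java.util.function.IntFunction;

class CurriedFactories {
    static class Ship {
        final String kind;
        final int capacity;
        Ship(String kind, int capacity) { this.kind = kind; this.capacity = capacity; }
    }

    enum WoodSpecies { OAK, TEAK }

    // Applying the first function corresponds to `new BoatFactory(ws)`,
    // applying the second corresponds to `build(capacity)`.
    static final Function<WoodSpecies, IntFunction<Ship>> boatFactory =
        ws -> capacity -> new Ship("boat of " + ws, capacity);
}
```

Here CurriedFactories.boatFactory.apply(ws) plays the role of the factory object, and the resulting IntFunction plays the role of its build() method.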


The Strategy pattern[7] is a simple example because it is just a function passed into another function. Strategy is the way OOP does function pointers, given the lack of functions as a first-class abstraction.

I picked this example only to illustrate an interesting feature of the OOP patterns.

interface NavigationStrategy {
  Route navigate(Point origin, Point destination);
}

class UseWindForceStrategy implements NavigationStrategy {
  UseWindForceStrategy(Forecast f, float desiredEfficiency) { ... }
  public Route navigate(Point origin, Point destination) { ... }
}

class ShortestRouteStrategy implements NavigationStrategy {
  ShortestRouteStrategy() { ... }
  public Route navigate(Point origin, Point destination) { ... }
}

A strategy may be parameterized on creation. This makes it similar to Factory — it can be represented as 2 functions chained into a curried function.

new UseWindForceStrategy(forecast, efficiency).navigate(from, to);
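Since NavigationStrategy has a single abstract method, the same parameterization can be sketched with lambdas; here I simplify Point and Route to plain strings to keep the example self-contained:

```java
interface NavigationStrategy {
    String navigate(String origin, String destination);
}

class Navigation {
    // A higher-order "strategy factory": the configuration parameter is
    // applied first, and the resulting lambda is the strategy itself.
    static NavigationStrategy useWindForce(double desiredEfficiency) {
        return (origin, destination) ->
            "wind route " + origin + "->" + destination + " @ " + desiredEfficiency;
    }

    static String sail(NavigationStrategy s) {
        return s.navigate("Lisbon", "Azores");
    }
}
```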

However, an OOP programmer wouldn't use a factory where a strategy is expected! OOP patterns differ not only in how the computation is performed, but also in the ways these computations can be combined with the rest of the code and the ways they are mapped to the application's business domain. An Adapter or Decorator would immediately tell you how to use it and that it's likely a technical artifact that cannot be mapped to the business domain, unlike, for example, a Strategy.

OOP patterns convey a wide(r) set of meanings, and this is one of the reasons why there are so many of them.


My favorite example is probably the Command pattern[8].

interface Command { (1)
  void run();
}

class HalfAheadCommand implements Command { (2)
  HalfAheadCommand(Engine engine) { ... }
  public void run() { ... }
}

class StopCommand implements Command {
  StopCommand(Engine engine) { ... }
  public void run() { ... }
}

class DropAnchorCommand implements Command {
  DropAnchorCommand(AnchoringService anchoring) { ... }
  public void run() { ... }
}

class ShipControlFacade {
  List<Command> fullStop() {
    Command stop = new StopCommand(this.engine); (3)
    Command dropAnchor = new DropAnchorCommand(this.anchoringService);
    return Arrays.asList(stop, dropAnchor);
  }
}

interface Dispatcher { (4)
  void execute(Command c);
}

List<Command> commands = shipControlFacade.fullStop();
for (Command cmd : commands) { (5)
  dispatcher.execute(cmd);
}

The Command interface (1) represents a sub-program and lets the calling side invoke an arbitrary sub-program by calling its run() method. The run() method may return something if needed or accept arguments. The interface may have anonymous ad-hoc implementations, or its implementations may form a union (GADT) (2).

Complex actions in the program are combined from small simple commands (3). The pattern does not prescribe whether commands should be combined using collections or custom combinators, whether they should be chained, passing results of execution to each other, or whether they should be isolated. It's important that the sub-programs these commands represent are not executed immediately, i.e. the side-effects are suspended.

Commands are then passed to a Dispatcher (4), which executes the passed commands (5), but may also perform some actions before or after every command, may decide to execute commands concurrently or, on the contrary, serialize concurrent access…​ This is the point in the program when the suspended side-effects are finally executed.

As with the previously examined patterns, commands are just function pointers, but in FP there's a special name for functions that are used to suspend side-effects — IO, or more often the IO Monad, because that construct does have monadic properties. An IO Monad implementation together with all the infrastructure forms an effect system. In some languages, like Haskell, the effect system is baked into the language; in others, like Scala, an effect system is shipped as a library, and in the latter case the resemblance to the Command pattern is just striking, especially if you rename Dispatcher to Runtime and Command to IO.

Of course, Command pattern in its original definition is not as powerful and flexible as, say, modern effect systems for Scala, but I’ve seen very sophisticated implementations of this pattern in object-oriented Java code, with commands being monads and all that.
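The "suspension" aspect can be demonstrated with plain Runnable commands; this is a bare-bones sketch (all names are mine), far from a real effect system:

```java
import java.util.ArrayList;
import java.util.List;

class SuspendedEffects {
    static final List<String> log = new ArrayList<>();

    // Building the list performs no side effects yet: the effects are
    // suspended inside the lambdas.
    static List<Runnable> fullStop() {
        List<Runnable> cmds = new ArrayList<>();
        cmds.add(() -> log.add("engine stop"));
        cmds.add(() -> log.add("drop anchor"));
        return cmds;
    }

    // The "dispatcher": the single place where the effects actually run.
    static void execute(List<Runnable> cmds) {
        for (Runnable c : cmds) {
        }
    }
}
```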


Indeed, the basic unit of abstraction in FP — a pure function — is arguably better than some others. It is easier to build higher-level abstractions with pure functions, keeping the code clean and the semantics easy to understand. But it does not mean that there's nothing in between side-effectful mutable OOP and idealistic FP.

When it comes to solving real-world problems with code, FP or OOP, mutability or immutability, etc. are all false dichotomies[9].

FP, like any other programming paradigm really, can be seen as a collection of techniques. It is good to be influenced by it, to make the best of it, but it's not mandatory to dive into it fully. Java and C++ have borrowed some concepts from FP recently. The already mentioned Rust and Go do not offer inheritance, allow higher-order functions and employ error propagation models that result in total functions; Rust allows parametric polymorphism and offers many monadic structures in its standard library. All those languages are far from idealistic FP, and yet they are used successfully to solve real problems.

Your 10-year-old legacy project at work doesn't need to be rewritten in a new language to benefit from immutability or pure functions. Quite possibly, as the title of this post says, your project already benefits from some techniques that are traditionally attributed to FP.

1. Joshua Bloch, Effective Java, Minimize mutability
2. Lombok @Value annotation —
3. John A. De Goes, Data Modeling in FP vs OOP
5. Joshua Bloch, Effective Java, Favor composition over inheritance
9. Scale By The Bay 2018: Bill Venners, Frank Sommers, Effective Scala, 22:54 —