There Are No Upsides to Object Oriented Programming

A story about blind acceptance of induced complexity

Mattia Maldini
9 min read · Nov 21, 2019

Once upon a time I was a Computer Science student just starting his first classes. Among the mandatory topics were C++ Programming and Fundamental Logic. Those two courses introduced me to a curious dichotomy that would repeat itself time and time again in the academic setting: while one professor taught us the principles of Object Oriented Programming, justified as the industry standard for big, serious projects, a colleague of his would explain to us that those were outdated techniques rendered obsolete by more modern approaches to programming.

This was a recurring theme almost every year: Programming versus Logic, Software Engineering versus Theoretical CS, Compilers versus Emerging Programming Paradigms.

I was mostly confused by all this. Regardless of their take on the issue, professors would give cryptic answers to requests for clarification: there are different opinions on the matter, the general consensus is shifting, some practices are no longer considered good.

Since the courses were mandatory, I studied everything with an open mind: I learned and understood OOP principles and the related software design patterns, did my homework, and passed the exams. Then, a few months after my bachelor's graduation, an occasion came to put them into practice as I started working on a semi-serious Android app.

Revelation and Horror

At the time the only approachable option for Android development was Java, which is of course a heavily Object Oriented programming language. As the code grew I started to organize my work into classes: Managers, Factories, Subtyping and the like. About four thousand lines of code in, I was tangled in a web of dependencies and relations that I just could not free myself from.

After a couple of months a staggering realization hit me: there is no way to make this better; I have to fundamentally change my approach. I need functions, not objects; modules, not classes.

While I was trying to wrap my head around the problem, the project came to a halt (probably for the best). I had, however, learned a great deal from it: having peeked into the abyss of Object Oriented Programming, I understood why programmers everywhere are shifting away from it.

There is No Gray Area

Fast forward a few years, a CS Master's Degree, and some field experience later, and here I am. My beliefs about OOP have only solidified, to the point that I would refuse up front any job offer that includes it. Not only that: it's quite evident I'm not alone.

Fewer and fewer of the newly created programming languages provide an OOP option. Rust, Go, Elixir, Elm: they all openly declare that they have no class support. Those that do include classes succeed because of other additions, like closures or parametric polymorphism.

Whenever the issue is brought up online, most opinions are polarized towards leaving OOP for good, with articles and blogs that argue for more modern alternatives.

Sadly, there are still groups of developers insisting that the Object Oriented Paradigm has its good sides for those who use it correctly. This is a delusion, and one that damages its supporters the most. There is no room for debate: the only good OOP language is a dead OOP language.

“It’s the Industry Standard”

Let's be clear: most of the jobs you can score in the software field probably include some degree of OOP; there is no avoiding it. That, however, does not mean it is a good standard, or one you should follow blindly.

After all, the banking sector is infamous for still relying on COBOL for worldwide digital transactions, but I’m quite sure no one is arguing for its glorious return.

There is a huge amount of legacy code that follows OOP principles, and we have to deal with it. However, not unlike asbestos or other dangerous materials, it should be disposed of in an orderly fashion and as fast as possible. As so many times in history, the number of people using a tool proves nothing about its effectiveness.

It is not possible to avoid every project that involves OOP in your career; you should however be well aware that it is, in fact, a problem. Do not delve into the OOP cancer more than strictly necessary and take every opportunity to distance yourself from it, even by the tiniest bit.

The biggest detrimental effect of this lingering legacy is how new languages feel the urge to follow in its footsteps to appeal to a wider developer base. Take Dart for example: a modern language with every useful feature, but unfortunately also class support. This means that all frameworks built with it lean towards the OOP path, which brings us to regrettable situations like Flutter's widget management.

“It scales better in big projects”

You have probably heard this joke already.

I had a problem so I thought to use Java. Now I have a ProblemFactory

The point is, it’s not a joke. It’s funny only if you are not a Java developer; if you are, it’s a sad truth.

Many times I struggled to understand a design pattern until I saw it applied in a non-OOP language, where it was just the natural way to do things. Design patterns in Object Oriented languages serve as a way to navigate around the language's flaws to reach objectives that would be trivial otherwise.
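A minimal sketch in Python of what I mean: the Strategy pattern, a textbook OOP design pattern, dissolves into simply passing a function once functions are first-class values.

```python
# The Strategy "pattern" in class form: one interface, one class per behavior.
class SortStrategy:
    def compare(self, a, b): ...

class Ascending(SortStrategy):
    def compare(self, a, b):
        return a - b

class Descending(SortStrategy):
    def compare(self, a, b):
        return b - a

# The same idea with first-class functions: no hierarchy needed.
data = [3, 1, 2]
print(sorted(data))                    # ascending: the default
print(sorted(data, key=lambda x: -x))  # descending: just pass a function
```

The class hierarchy exists only because some languages make functions hard to pass around.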

90% of all the design patterns ever created for OOP are like this: unhealthy ways to cope with flaws. It is a triumph of over-engineering that glorifies the worst parts of the Object Oriented approach, putting them on a pedestal and declaring them the best way to get things done.

Except it's not. I graduated in Computer Science; passed an exam in software engineering that covered exactly this topic; wrote thousands of lines of code in a handful of programming languages (C, C#, Rust, Lua, Elm, Python, Dart, etc…) in a professional environment; yet, I have no idea what problem the factory pattern is supposed to solve, or how to implement it.

More than that, I believe I'm not alone. When looking up “factory pattern”, the first three images that come up are completely different diagrams. Can any of you explain how the pattern works by looking at them, or even how to reconcile them?

Few ideas, but firmly confused.
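For what it's worth, the textbook aim of the factory pattern, creating objects without hard-coding which concrete class gets instantiated, collapses in any language with first-class functions to just passing a constructor around. A sketch in Python, with names of my own invention:

```python
# A "factory" boils down to: a function that returns new objects.
# In Python, classes are already callable, so any constructor is a factory.

class Circle:
    def area(self):
        return 3.14159

class Square:
    def area(self):
        return 1.0

def build_shapes(make_shape, count):
    # 'make_shape' is any callable returning a shape: no Factory class,
    # no AbstractShapeFactory interface, just a parameter.
    return [make_shape() for _ in range(count)]

print([s.area() for s in build_shapes(Circle, 3)])
print([s.area() for s in build_shapes(Square, 2)])
```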

“It’s the best approach sometimes”

I have heard this argument countless times from decent programmers who work in specific fields. The idea is that in some scenarios objects are the best data structure to describe the desired behavior: for example, videogames are filled with replicated elements that share similar properties (enemies, bullets, players, and so on); the same applies to UI design (think of buttons, entries, labels, and so on).

That is true. Objects are extremely useful data types — objects as in collections of fields and methods. My point is that Object Oriented Programming is not about objects at all.

The founding principles of OOP are about classes: blueprints for creating objects that can be extended (through inheritance) and made polymorphic (through subtyping). Indeed, everything that is evil about OOP revolves around them:

  • Inheritance is simply useless and cumbersome, and survives only as a means to achieve subtyping (wrongly so).
  • Subtyping is the weakest form of polymorphism.

Want to use objects? Suit yourself, but you certainly don't need classes. Seriously. I can't stress enough that there is no room to argue here. Java's creator, James Gosling, declared this very thing in a famous interview: if he could turn back time, he would leave out classes. Draw your own conclusions.

“If you could do Java over again, what would you change?”

“I’d leave out classes”

It's not just about inheritance and subtyping: classes are inherently bad on their own. Think about it: you are defining a way to produce multiple independent elements, each containing its own information and working on it separately. Classes encourage you to split data. Split data needs to be coordinated and kept up to date; you have to find exotic ways for objects to communicate, even when they are on the same execution thread. This is a terrible practice that can only lead to the worst cases of spaghetti code.
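To illustrate, a contrived Python sketch with invented names: compare state scattered across objects that must notify one another with a single plain value transformed by functions.

```python
# Class-based: state is split across objects that must keep each other in sync.
class Player:
    def __init__(self, scoreboard):
        self.score = 0
        self.scoreboard = scoreboard   # cross-object dependency

    def add_points(self, points):
        self.score += points
        self.scoreboard.update(self)   # manual coordination on every change

class Scoreboard:
    def __init__(self):
        self.totals = {}

    def update(self, player):
        self.totals[id(player)] = player.score

# Function-based: one plain piece of data, transformed in one place.
def add_points(scores, player_name, points):
    updated = dict(scores)
    updated[player_name] = updated.get(player_name, 0) + points
    return updated   # nothing to notify; the data *is* the scoreboard

state = {}
state = add_points(state, "alice", 10)
state = add_points(state, "bob", 5)
print(state)
```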

“There are no alternatives”

As already mentioned, the Object Oriented Paradigm is still rooted in today’s programming practices. When I criticize it in front of a fellow developer, one of the most recurring objections is “how else would I do <trivial task> without OOP?”

Many people have only been exposed to OOP, so it’s natural they would not know other options — but said options are out there, and they are so much better.

For example, say you have to work with a framework that allows you to override and extend certain parts of its logic to accommodate custom solutions: OOP's answer to this problem is inheritance. We now know it is a terrible answer, but what else do we have?

Well, what about simple callbacks? It seems to me that overwriting a pointer to a function already solves all of it. When you remove subtyping, inheritance is really just that, isn't it? This way you can even control which parts of the library are customizable by exposing only certain callbacks (instead of juggling private/protected fields and friend classes).
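A minimal sketch in Python, with invented names, of a framework exposing one overridable hook instead of asking users to subclass it:

```python
# A hypothetical mini-framework: customization through a callback,
# not through subclassing.

def default_renderer(item):
    return f"<li>{item}</li>"

def render_list(items, renderer=default_renderer):
    # The only extension point is the 'renderer' callback; everything
    # else stays internal to the framework.
    return "<ul>" + "".join(renderer(i) for i in items) + "</ul>"

# "Overriding" is just passing a different function.
print(render_list(["a", "b"]))
print(render_list(["a", "b"], renderer=lambda i: f"<li><b>{i}</b></li>"))
```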

Even better, you can do this in pretty much any language. Even C has support for weak (i.e. overridable) functions and function pointers, which is more than enough. Want an example? LittleVGL is a UI library written in C that lets you define custom drivers for different displays through callbacks, and it works perfectly.

Is this too basic for you? Python’s decorators achieve the same objective — customizable functions — with a more modern twist. It is how Flask exposes the user-defined response to HTTP requests.
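As a sketch, here is a hand-rolled decorator registry in the same spirit as Flask's @app.route; this shows the general shape of the idea, not Flask's actual implementation:

```python
# A hand-rolled route registry in the style of Flask's @app.route.
routes = {}

def route(path):
    def register(handler):
        routes[path] = handler   # remember the user-defined function
        return handler
    return register

@route("/hello")
def hello():
    return "Hello, world!"

# The "framework" dispatches to whatever the user registered.
print(routes["/hello"]())
```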

Still, inheritance is just a means to achieve subtyping, and finding alternatives to that is even easier: pretty much any other form of polymorphism is better.

If the goal is to have multiple data types sharing a common behavior, it is insane to collect them under a single family tree. They just need to expose the required API. Call it what you may — interface, generic, trait, mixin — polymorphism does not have to be this cumbersome.
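Python's typing.Protocol expresses exactly this kind of structural polymorphism: types qualify by exposing the required API, with no common ancestor. A small sketch (class names are mine):

```python
from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> str: ...

# Neither class inherits from Drawable, nor from each other.
class Button:
    def draw(self) -> str:
        return "[ OK ]"

class Label:
    def draw(self) -> str:
        return "hello"

def render(widgets: list[Drawable]) -> None:
    for w in widgets:
        print(w.draw())

# Both qualify simply by exposing the required method.
render([Button(), Label()])
```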

“The alternatives are not feasible”

Functional programming does not have to be Haskell

Possibly the most idiotic way to still cling to OOP is suggesting that none of the alternatives are any better.

First of all, the simplest alternative to OOP is not to use OOP at all. Just write procedural code — function oriented, if you will — you need nothing else.

Secondly, a clarification is needed on this “functional” approach. When talking about functional programming, the first things that come to mind are exotic, academic programming languages like Lisp, Haskell, or OCaml.

This is not wrong, but it's not entirely correct either. There are different degrees of the functional programming paradigm. To me, programming in a more functional way means focusing less on how the data is organized and more on what the software does with it.

Objecting that a stable OOP framework cannot be ditched for an obscure Haskell library is sensible, but there are many in-betweens. In general, using more functions and fewer classes will yield immediate improvements without any need to take a course on lambda calculus.

Then there are functional programming languages that make a point of being easy on newcomers, like Elm or Elixir. Hell, even JavaScript has been striding towards the functional approach for years now. I encourage everyone to get closer to them, because the future of programming points straight in that direction.

Bottom Line

As much as I hate it, the Object Oriented Paradigm still exists.

There are reasons for that, but anyone arguing in its favor is hurting the programming community as a whole. Don't kid yourself: stop indulging in harmful programming practices. You will be the first to benefit from it.

Written by Mattia Maldini

Computer Science Master from Alma Mater Studiorum, Bologna; interested in a wide range of topics, from functional programming to embedded systems.
