abeppu 4 days ago

Maybe I'm just missing something, but I'm not on board with the definitions they take in the introduction.

> the essence of object-oriented programming is programming against interfaces, which correspond to the type theoretic concept of codata and copattern matching

They then use the classic codata example of a Stream, with a head and a tail. The Stream they declare looks a lot more like the version of that concept in functional languages than the version of it in an OO one, but clearly it exists in both paradigms.

https://www.cs.cmu.edu/~15150/resources/libraries/stream.pdf https://hackage.haskell.org/package/Stream-0.4.7.2/docs/Data... https://docs.oracle.com/en/java/javase/21/docs/api/java.base...

  • Twisol 4 days ago

    I think the key ingredient you're looking for is mutability. If you take advantage of the ability to implicitly modify state, your declared interfaces shrink because you don't need to model the threading of state through the interface. Since mutability means that your method signatures are less restrictive than they appear to be, it makes sense that this research (focusing on dependent types, and especially on dependently-typed proofs) would prefer a setting without implicit mutability. (Mutation is orthogonal to OOP vs. FP, anyway -- you can have pure OOP interfaces and also mutable FP interfaces.)

    The Java stream interface also has a lot of methods that can technically be implemented in terms of other methods on the interface, but are present so that implementers can provide more efficient implementations depending on the capabilities of their model of streams. It makes sense that the present research would only consider the bare essentials of streams, and not things that could be layered on top at the cost of some optimization opportunities.

    As a Java programmer, I'd say the essence of an OOP stream is captured by the following interface:

        interface Stream<T> {
          T next();  // implicitly advances the stream's mutable state
        }
    
    If we make mutation explicit, this evolves into:

        interface Stream<T> {
          Pair<T, Stream<T>> next();  // returns the element plus the rest of the stream
        }
    
    And then we can split this into two methods, providing each of the two components of the original `Pair`:

        interface Stream<T> {
          T head();
          Stream<T> tail();
        }
    
    This is exactly what the paper shows on page 2, up to syntactic differences (like explicit type parameters to the methods).
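
    The extra methods on the real Java stream interface mentioned above can then be layered on top of these two essentials. A rough sketch of how that might look (my own, not from the paper), using default methods:

        import java.util.function.Function;

        interface Stream<T> {
          T head();
          Stream<T> tail();

          // Conveniences expressible purely in terms of head/tail;
          // implementers could override them for efficiency.
          default T get(int n) {
            Stream<T> s = this;
            for (int i = 0; i < n; i++) s = s.tail();
            return s.head();
          }

          default <R> Stream<R> map(Function<T, R> f) {
            Stream<T> self = this;
            return new Stream<R>() {
              public R head() { return f.apply(self.head()); }
              public Stream<R> tail() { return self.tail().map(f); }
            };
          }
        }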
  • Tainnor 4 days ago

    Their definitions of OOP and FP are rather restrictive - it's basically FP = ADTs (data) and OOP = interfaces (codata). Many modern languages support both (e.g. Haskell has typeclasses, Java has records and sealed interfaces).
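
    For example (my own sketch, nothing from the paper), recent Java can express an ADT with a sealed interface plus records, and consume it with an exhaustive switch:

        sealed interface Shape permits Circle, Square {}
        record Circle(double radius) implements Shape {}
        record Square(double side) implements Shape {}

        static double area(Shape shape) {
          // The compiler checks exhaustiveness because Shape is sealed.
          return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square s -> s.side() * s.side();
          };
        }

    The same Shape could instead be exposed only through an interface with an area() method -- that's the codata reading the paper starts from.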

_glass 4 days ago

For me this is best described in "A Little Java, A Few Patterns" (the visitor pattern connection also comes up in the paper). This book writes really impractical Java, but in such a Scheme-like way that you can really understand the deep connections between functional and object-oriented programming.

  • liquid_bluing 4 days ago

    One of the most mind-bending papers on this topic is William Cook's "On Understanding Data Abstraction, Revisited": https://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf "Object interfaces are essentially higher-order types, in the same sense that passing functions as values is higher-order. Any time an object is passed as a value, or returned as a value, the object-oriented program is passing functions as values and returning functions as values. The fact that the functions are collected into records and called methods is irrelevant. As a result, the typical object-oriented program makes far more use of higher-order values than many functional programs."
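
    A small Java rendition of that point (my sketch, not Cook's code): an "object" here is just a bundle of functions closing over a value, so returning one is returning functions as values.

        interface Counter {
          int get();
          Counter inc();  // returns a new Counter rather than mutating
        }

        // A Counter is built from nothing but closures over n.
        static Counter counter(int n) {
          return new Counter() {
            public int get() { return n; }
            public Counter inc() { return counter(n + 1); }
          };
        }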

    • trealira 3 days ago

      This reminds me of this article: "The visitor pattern is essentially the same thing as Church Encoding" [0], although I think this comment [1] and this one [2] on HN explain it better. The article is written in Haskell, but it's basically saying that, because object-oriented programming languages lacked sum types, the visitor pattern makes the transform from

        (a | b | c) -> T
      
      to

        (a -> T, b -> T, c -> T) -> T
      
      where (a | b | c) is a sum type saying "it can be a value of type a, type b, or type c", and (a -> T, b -> T, c -> T) is a stand-in for a record of functions (i.e. the v-table of an instance of a visitor class). There's a Java sketch of this below the links.

      [0]: https://www.haskellforall.com/2021/01/the-visitor-pattern-is...

      [1]: https://news.ycombinator.com/item?id=26034790

      [2]: https://news.ycombinator.com/item?id=26024600
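
      To make the correspondence concrete, here's a rough Java sketch (hypothetical Expr/Visitor names, not from the article): the visitor interface is the record of functions, and accept() is the encoded value.

        // The "sum type" Expr = Lit | Add has no direct representation,
        // so ExprVisitor<T> stands in for the pair (Lit -> T, Add -> T).
        interface ExprVisitor<T> {
          T visitLit(int value);
          T visitAdd(Expr left, Expr right);
        }

        interface Expr {
          <T> T accept(ExprVisitor<T> visitor);
        }

        static Expr lit(int value) {
          return new Expr() {
            public <T> T accept(ExprVisitor<T> v) { return v.visitLit(value); }
          };
        }

        static Expr add(Expr left, Expr right) {
          return new Expr() {
            public <T> T accept(ExprVisitor<T> v) { return v.visitAdd(left, right); }
          };
        }

        // "Pattern matching" on an expression by supplying one function per case:
        static int eval(Expr e) {
          return e.accept(new ExprVisitor<Integer>() {
            public Integer visitLit(int value) { return value; }
            public Integer visitAdd(Expr l, Expr r) { return eval(l) + eval(r); }
          });
        }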

      • trealira 2 days ago

        Looking back but can't edit. The first line should say

          ((a | b | c) -> T) -> T
        
        This is just a mistake I made paraphrasing link 1.
    • bruce343434 4 days ago

      I think the best example of this is the strategy pattern. Seen that way, a strategy class is really just a wrapper around a function plus its persistent, mutable internal state.
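
      Something like this (my own sketch):

        // The strategy interface is essentially a single function...
        interface RetryStrategy {
          long nextDelayMillis();
        }

        // ...and a concrete strategy is that function plus the mutable
        // state it carries between calls.
        class ExponentialBackoff implements RetryStrategy {
          private long delay = 100;  // persistent internal state
          public long nextDelayMillis() {
            long current = delay;
            delay *= 2;
            return current;
          }
        }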

    • abecedarius 4 days ago

      Whenever there's another interminable thread about, like, what OO really is and how thoroughly it sucks, I wish everyone knew this short paper as background.

      Teaser: the first OO language was Church's lambda calculus.

      • voidhorse 4 days ago

        But OO does suck -- not because it is fundamentally different from the lambda calculus, but because it puts a kludgy set of handles, which make it easy to make bad decisions, in front of a good, sound computational core. Of course, just like any advanced tool, an advanced practitioner can use traditional OO languages well, but the paradigm makes it just as easy for newbies to fall into pitfalls as classical imperative programming does, if not more so.

        At the end of the day, all Turing-complete languages are the same language, and the only differences lie in the kind of front end we provide. Unfortunately, we basically still have a single front end, called C, and every subsequent programming language has essentially just taken the C front end and restricted or expanded it in small ways. We still ultimately work in terms of records, contiguous arrays, and pointers. You can think of basically any more "advanced" construct in terms of pointers and it will make perfect sense nearly every time.

  • agumonkey 4 days ago

    That's what I get from studying FP and Smalltalk: there are strongly similar notions behind the paradigms. The dirt came from industry -- they started applying OO to large engineering projects, and patterns became a fad.

peterbb_net 4 days ago

Nice paper on an interesting topic. I’ve not had a chance to read the paper thoroughly, but the judgmental equality ended up weaker than I anticipated. I’d hope we’d end up with alpha-equivalent codefs being judgmentally equal.

Which makes me wonder what the next steps are for a proof assistant based on this. Will the de-/refunctionalization play an active role in the proof assistant as well, thus solving it as described in section 4.1?

grafs50 4 days ago

Interesting, I wondered what it would even mean to derive a programming paradigm.