I can highly recommend this book! I even own a signed copy :)
There are many other great Lisp books, but this one was written more recently, by an author who had previously written books about other languages, e.g. Java. As a consequence, I think it is much more hands-on and approachable for people who learned a mainstream language first, and the examples are also more of this age.
It is an excellent and thorough introduction to the language and, in contrast to many older books, also has very valuable sections about OOP and condition handling.
I went through Practical Common Lisp; it's great.
I made the mistake of assuming the publisher meant something and bought Practical Ocaml, which was awful. :(
Hey, thanks everyone for your kind comments on the book. Kinda amazing that we're only a few years away from its 20th anniversary. -Peter
This book got me hooked on lisp years ago, I can’t thank you enough for writing it. It made both Common Lisp and emacs lisp accessible to me and I would not have learned them if the book did not exist. I hope you consider writing another edition someday.
i remember last year there was some mention of a new edition coming out. is that in the works or planned?
Earlier this year, I decided to learn Lisp for kicks and Practical Common Lisp was the book that resonated with me the most.
The whole experience didn’t result in me switching or even continuing to use Lisp, but what it did is show me some of the origins of the features in the languages I already use.
Trivia: In my studies, I was surprised to learn that recent versions of Excel support lambda functions in formulae.
Python will probably forever remain stunted by its bad lambda syntax, which allows only a single expression in the body. It is not a deal breaker, but it is annoying to have to make named things out of anonymous things. Then there is the issue of no TCO. At least JS has it in the spec, even though almost no one has implemented it.
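A minimal sketch of the limitation (function names made up for illustration): a lambda body must be a single expression, so anything multi-step forces a named `def`.

```python
# A lambda body is limited to a single expression -- no statements allowed:
square = lambda x: x * x  # fine

# Anything multi-step has to be promoted to a named function instead:
def clamp_and_double(x, lo=0, hi=10):
    x = max(lo, min(x, hi))  # a statement (rebinding), not allowed in a lambda
    return x * 2

print(square(5))             # 25
print(clamp_and_double(99))  # 20
```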
The Python package lambdex provides a way to write multi-line anonymous functions in Python, with a syntax that's close enough to "normal" Python.
Interesting. This looks weird, though, because now there is `def_` (not `def`) as well as `lambda`, instead of only needing `lambda` or only `def`. I am guessing you would also need to import `def_` from somewhere. And you need to change a lot of parens inside to brackets, making it even more unnatural in the Python sense.
I eventually realised that Emacs Lisp ideally assumes you have knowledge of Common Lisp, so I read PCL, and now I feel like I can actually write Elisp and know what's going on.
A couple of old threads with significant commentary:
i learned lisp from this book almost a decade ago.
about two thirds of the way through, it gave me enough ideas and material to write two applications that i am still using today.
This book is so useful and awesome I can't even thank Peter Seibel enough for getting me through my Lisp class.
I learned Common Lisp with this book. I read other books, but this is easy to understand and well presented.
Must OOP be shoved down everyone's throat at every turn and opportunity? One of the great things about Lisp is that it's a champion of functional programming, whereas OOP is extremely complicated and it produces truly horrible machine code which needs a lot of CPU cycles and even more memory, not to mention being unnecessarily difficult to understand and debug!
The nice thing about Common Lisp is that you have the choice of which style of programming to use. There are a lot of good things to be said about functional programming, but for a lot of problems OOP is the natural pattern. And with Common Lisp you can use both in the same program, depending on which pattern fits best for that part of the program.
Mind you, I am not using OOP in the sense it has degenerated to in the Java universe. It is actually less the language itself than the culture of trying to express too much in object hierarchies and protocols.
I try to keep my object models simple, avoiding too much inheritance and complex class hierarchies. But it is a wonderful method of decoupling routines from heterogeneous data, which is well expressed as objects with methods implementing the common behavior.
Not needed. A book about Common Lisp without CLOS would not make sense. BTW, in this book just 2 chapters out of 32 are devoted to OOP.
If you actually work on a project of significant size I don't see how one can reason about code that's not encapsulated via classes.
I used to contribute to RunUO, an emulator for Ultima Online, and it's near the 1000000-SLOC level. There's too much going on, too much state, too many corner cases, to consider functional as an architecture.
Not every OOP project is full of FactoryFactorys.
> If you actually work on a project of significant size I don't see how one can reason about code that's not encapsulated via classes.
I can: Modular code using modules, instead of shoe-horning everything into classes. There are far fewer cases where classes have a genuinely good reason for existing in a program than people think. Classes should not be the first go-to solution for grouping things together. First one needs to think about behavior and state. Do I even have state that needs to live inside an object for as long as the program runs? If I don't have state, no class. Done. Similarly for when I only have state and no behavior that needs to be bundled with that state. Often a module that exports functions dealing with that state is more than sufficient and does not invite inheritance nightmares.
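A sketch of that style in Python (module and function names are hypothetical): state as plain data, behavior as free functions grouped in a module, no class involved.

```python
# counters.py -- a hypothetical module: behavior lives in free functions,
# and state is plain data that callers hold and pass in explicitly.

def new_counter() -> dict:
    """State is just a dict; no class wraps it."""
    return {"count": 0}

def increment(counter: dict, by: int = 1) -> dict:
    """Return a fresh state instead of mutating -- trivially testable."""
    return {"count": counter["count"] + by}

c = new_counter()
c = increment(increment(c), by=2)
print(c["count"])  # 3
```

Because every function takes its state as an argument and returns a new one, each is independently testable with no setup beyond constructing a dict.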
> I used to contribute to RunUO, an emulator for Ultima Online, and it's near the 1000000-SLOC level. There's too much going on, too much state, too many corner cases, to consider functional as an architecture.
Sounds exactly like something that should not be done in the typical OOP everything-calls-everything way, because then you will end up with state mutation happening everywhere. It will not even be clear to the implementers of the system. Therefore you will not know what to test for, and the system might happen to run, but not in a way one can be reasonably sure is correct. Every test of a complex scenario will require loads of setup code just to build some kind of environment resembling what happens in the system. Designing in a functional way would give you a sort of "entry point" at every function to test it, passing in all required state as arguments.
I don't think OOP is a given in any big project, especially when looking at how to write unit tests for almost everything, or at parallelizing work. I see OOP perhaps when it comes to building GUIs, but even in that area attempts are being made to use declarative and functional approaches, so maybe in the future we will see OOP lose ground there as well.
> I can: Modular code using modules, instead of shoe-horning everything into classes
It's your lucky day then, as this is how CLOS programs are written. Methods are associated with generic functions, which are in turn associated with packages.
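For readers more at home in Python, `functools.singledispatch` gives a rough analogue of that arrangement: the generic function is a module-level entity, and "methods" are registered on it rather than living inside a class. (This is single dispatch only; CLOS generic functions additionally dispatch on multiple arguments.)

```python
from functools import singledispatch

# The generic function lives at module level, roughly like a CLOS generic
# function exported from a package.
@singledispatch
def describe(obj) -> str:
    return f"some object: {obj!r}"

# "Methods" are registered on the generic function, not defined inside classes.
@describe.register
def _(obj: int) -> str:
    return f"an integer: {obj}"

@describe.register
def _(obj: list) -> str:
    return f"a list of {len(obj)} items"

print(describe(42))      # an integer: 42
print(describe([1, 2]))  # a list of 2 items
```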
Did you just generalize that every large project must be object-oriented?
I'm positing that any project of sufficient complexity is a good candidate for object orientation, yes.
I don't see how a single developer can reason about an entire codebase of hundreds of thousands of LOC.
You should probably try writing something under a functional paradigm, emphasizing immutability and pure functions. You'll quickly see how that will help you reason about a large functional project.
Functional programmers argue that pure functions are easier to reason about than functions that drag in state (like methods in OOP tend to).
Please list some real-world software of significant complexity that is 100% purely functional.
Where does the "100% purely functional" requirement come from? By some strict definition, this is impossible, because you'd not have any means of communicating the result of your computation.
In pure OOP speak: A "final static" method is easier to reason about than some non-static, non-final method. (But when a method is defined as "final" and "static", the providing class only serves as a namespace - where is the OOP in that?)
Hasura, Pandoc, XMonad?
Having learned a number of Lisp systems in the past, I wouldn't necessarily recommend ANSI Common Lisp as a first Lisp unless your needs are very particular, because it is an enormous design-by-committee language with a draft standard of about 1,360 pages.
This means that in addition to the time spent doing the programming that solves your technical problem, you also have to devote considerable time to language lawyering: investigating whether the interpretation of a Lisp expression your Common Lisp implementation produces is or is not standard-conforming, using only the frequently ambiguous English of that enormous standard as your guide.
The JVM was carefully designed to give identical results on all hosts for the Java platform unless you go out of your way to get nondeterminism or platform-specific behavior (make the value of a number N depend on thread scheduling, or use JNI that assumes a platform byte order, etc.). C and C++ make it undefined behavior to shift a 64-bit int by 64 or more bits (Rust instead treats it as an overflow error), but Java on the JVM masks the shift count of a 64-bit shift to its lower 6 bits ('& 0x3f'), so that you get the same result on all CPUs rather than a CPU-dependent result.
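A small Python sketch of that masking rule, simulating Java's 64-bit `<<` on non-negative values (the function name is made up for illustration; real Java longs are signed two's complement, while this sketch stays unsigned):

```python
MASK64 = (1 << 64) - 1  # wrap results to 64 bits, like a Java long

def jvm_shl64(value: int, shift: int) -> int:
    """Simulate Java's long << on non-negative values: the JVM masks the
    shift count to its low 6 bits (& 0x3f) before shifting."""
    return (value << (shift & 0x3F)) & MASK64

print(jvm_shl64(1, 1))   # 2
print(jvm_shl64(1, 64))  # 1  -- 64 & 0x3f == 0, so no shift happens at all
print(jvm_shl64(1, 65))  # 2  -- 65 & 0x3f == 1
```

So in Java, `1L << 64` evaluates to `1L` on every CPU, where the equivalent C expression is undefined and varies by hardware.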
The nice thing about this is that the JVM makes a "try it and see" approach more viable: if your program has a bug at least it has the same bug everywhere. You won't suddenly get a crash 10 years in the future when your customers upgrade to a new CPU, because your Java bytecodes on their new CPU will faithfully maintain bug-for-bug compatibility.
Clojure is a Lisp that runs on the JVM. I haven't personally used Clojure, but having done quite a bit of Common Lisp, Emacs Lisp, Scheme, etc., it looks to be very well designed and very well loved by its users, and using it could spare you from having to language lawyer the ANSI Common Lisp standard, as it seems to be more "try it and see" friendly.
I've been learning a lot about formal verification for C programs, how to truly make code that is bug free, and to do this you need to first make certain decisions about how you are going to formalize the language standard. E.g. will you assume that CHAR_BIT==8 or will you allow CHAR_BIT>=8 because the official ANSI/ISO C standard allows this even though all modern computers have CHAR_BIT==8? Then for any program you input to your verifier you must judge whether all its behavior is well-defined or if there is undefined behavior (arithmetic overflow, etc.).
There are quite good formal verification tools for Java, also, like JBMC and Krakatoa, and smaller Lisp languages are traditionally among the least tedious to formally verify (very simple semantics, unlike Common Lisp), but the time investment to learn these tools is enormous.
The Java VM Spec is almost 650 pages.
The core Java spec is another 850 pages. The core Java libraries add countless more pages on top.
The "CL is big" myth is a strange one. Big was in the context of an 80286 in 1982. It already wasn't big by the 90s, and a full-blown CL implementation is absolutely TINY compared to pretty much any other managed language you can think of.
I can't speak to the VM spec, but the 90s Java spec and the Common Lisp spec were both largely written by Guy Steele (who said that Java was meant as a stepping stone to shift C programmers a little closer to Common Lisp).
ISLISP and EuLisp both attempted to "correct" the Common Lisp language. Neither has been successful, and the big reasons are that CL's core features aren't actually broken, and the biggest complaint (naming conventions not always aligning well) can be completely fixed with macros if you want (there are dozens of such projects).
If one wants a "fixed" CL they could look at Scheme. Sadly, that language didn't become even as popular as CL.
> If one wants a "fixed" CL they could look at Scheme.
CL is almost a decade newer than Scheme, so if one believes Scheme to be the superior one, it would make more sense to call CL a broken Scheme than Scheme a fixed CL.
Latest Scheme standard is much more recent than the CL one, though. Thus my point still stands.
You can't seriously say that, just because one targets a well-specified machine, the language being used is well-specified. The determinism of a Clojure-on-JVM program also depends on the particular code the Clojure compiler generates: the code has to remain semantically identical between compiler versions, and it must not introduce any new non-determinism. In Common Lisp there is the Armed Bear Common Lisp implementation, which runs on a JVM. Does it benefit from JVM determinism or not? It probably does not, because the JVM is simply not aware of undefined behaviour that ABCL or Clojure are implicitly defining.
When it comes to having different platforms, it would also be necessary for any other compilers to generate semantically identical code. Different Clojure systems do _not_ do that. For example, arithmetic in ClojureScript uses JS floats where Clojure-on-JVM and others use integers of some size.
In my experience, writing a non-conforming CL program is hard, and much harder than writing a program without undefined behaviour in C. I am not sure why, other than the UB being more "localised" in some vague way. There is also a revision of the ANSI standard being worked on which attempts to eliminate undefined behaviour: <https://github.com/s-expressionists/wscl>.
Kind of; there is some fun in tracking down JVM-specific implementation behaviours.
JIT, GC, and extension differences give enough space for head scratching, although not as much as in other ecosystems.
If you have a second, I'm curious to learn more about your head-scratching experiences with the JVM. I want to make a program that I can trust will still run exactly the same (bug-for-bug compatible) many years in the future without maintenance. One approach is to make it completely bug-free using a formal verifier against a strict formalization of C, but that takes extraordinary effort, and there is no guarantee that bugs in the stack of garbage my app sits atop and the libraries I call (SDL2?) won't cause unwanted user-observable behavior. Truly bug-free is actually stricter than what I really need; I just need exact bug-for-bug compatibility so that my bugs are always deterministic. It seems that with the JVM, at least, the bug-for-bug determinism is really good (except where it obviously isn't and can't be, like thread scheduling, network communications, ...). For the client GC there is a low-latency guarantee and people seem happy. Have you found the Java GC is not all it's reputed to be? There are so many huge companies with billions invested in Java and its bug-for-bug compatibility that I think it could easily still be around in 100 years, along with COBOL, and is a safe investment for individuals who value longevity over what is trendiest and shiniest.
Then dive into the public documentation of each of those listed there, especially the ones that aren't plain rebrandings of OpenJDK.
Portability of the JVM is a myth. There are many small differences between implementations, platforms, and toolkits. Moreover, while the Java language continues to improve its syntax, many projects are stuck on old versions of it and seem in no hurry to upgrade, which prevents using the newer features.
Clojure seems like the best choice if you want to use Lisp in a professional context, but it is a bit of a double-edged sword if you are learning Lisp for exploratory purposes. When things go wrong with Clojure, you get back confusing stack traces from the JVM, and you have to work through two separate paradigms to figure them out. Common Lisp, on the other hand, has a baked-in live interactive environment that lets you work through errors as they come up. This is such a unique (and productive) way to work that it's addictive. That said, Clojure's API is just way more intuitive, probably due to when it was created.