Why use Racket

A language is a notation plus rules for reading it: a JPEG file, for instance, is read according to one set of conventions, and plain text files are not, by themselves, languages. For instance, the BNF grammar for the bf language looks roughly like the sketch below. Normally you would write that grammar down on paper and then hand-code a parser for it; why do it again? With brag, however, our wish comes true: the grammar itself becomes the parser. Another example is my DSL Pollen, a language for making online books, including this one.
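
A sketch of what that grammar looks like in brag. This roughly follows the bf grammar from the Beautiful Racket tutorial; the rule names are illustrative and details may differ from the original:

    #lang brag
    bf-program : (bf-op | bf-loop)*
    bf-op      : ">" | "<" | "+" | "-" | "." | ","
    bf-loop    : "[" (bf-op | bf-loop)* "]"

brag turns each rule into a parser that produces a matching syntax node, so the grammar on paper and the parser in code are the same artifact.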

A lot of this book is about the joy of Racket macros, but for now, the two standout features. First, Racket macros work on syntax objects rather than raw text, so a macro can inspect and rearrange real program structure; see syntax objects for the details. Second, Racket macros are hygienic, which means that by default the code produced by a macro retains the lexical context from where the macro was defined, so its internal bindings do not collide with bindings at the use site; see hygiene for the details.

A different, teaching-oriented answer: we try to avoid being constrained by language-specific ideas and constructs as much as possible, in hopes that the programming skills you learn are applicable regardless of whatever future programming language you happen to be using.
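
A minimal sketch of what hygiene buys you (swap! is a hypothetical macro written just for this example):

    #lang racket

    ;; The macro introduces its own temporary binding named tmp.
    (define-syntax-rule (swap! a b)
      (let ([tmp a])
        (set! a b)
        (set! b tmp)))

    ;; Hygiene keeps the macro's tmp separate from this tmp, so the
    ;; expansion does what it looks like it does.
    (define tmp 1)
    (define other 2)
    (swap! tmp other)
    (list tmp other)   ; => '(2 1)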

Of course, to learn program design, we must write actual programs, and thus we must choose concrete tools to program with. Racket's grammar is extremely simple, so we avoid spending unnecessary time learning irrelevant or language-specific forms, and it comes with teaching-centric languages and libraries.

This enables writing realistic, interactive programs from scratch in a short amount of time (see the sketch below), yet still forces students to focus on and understand the design of their program. Racket is a "multi-paradigm" language, so we are not restricted to a specific programming style. By contrast, with other languages like Java, where "everything is an object", all programs must be squeezed into a particular paradigm.
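
A sketch of the kind of small interactive program meant here, using the 2htdp teaching libraries; the world state is just a number, and everything else is ordinary big-bang clauses:

    #lang racket
    (require 2htdp/image 2htdp/universe)

    ;; The world state is the x position of a dot, advanced on each tick.
    (define (draw x)
      (place-image (circle 10 "solid" "red") x 50 (empty-scene 200 100)))

    (big-bang 0
      [on-tick add1]
      [to-draw draw]
      [stop-when (lambda (x) (> x 200))])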

Rather than study something hyperspecialized like functional-programming-in-Java, we can learn much more about program design by studying the underlying concepts directly. A multi-paradigm language also enables smooth transitions between different programming styles. Racket is also untyped. At a high level, the Program Design Recipe is about designing datatypes and following their shape when writing code, as in the sketch below.
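
A small sketch of "following the shape of the data": the data definition for a list of numbers has two cases, so the function has two cond clauses, with a recursive call exactly where the data definition is recursive:

    #lang racket

    ;; A ListOfNumbers is either '() or (cons Number ListOfNumbers).
    ;; The function's structure mirrors the data definition's structure.
    (define (sum lon)
      (cond
        [(empty? lon) 0]                          ; base case of the data
        [else (+ (first lon) (sum (rest lon)))])) ; recursive case

    (sum (list 1 2 3))   ; => 6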

Tree shakers are not written because they 'are cool', but because one wants to deliver small applications with them. That has never been the focus of SBCL development, so people can't tell you much about it; only a few attempts have been made in that direction for SBCL. Some Common Lisp implementations do have tree shakers, especially the commercial ones, but it's a rarely used tool. Best not to generalize from an IRC channel to a very diverse group of people using a dozen different implementations.

Also, that's one of the most curmudgeonly responses you could have given. Sometimes there is funding from commercial projects, but so far maintaining a tree shaker hasn't been high on the agenda, even though the project has been running for some years now. If you shell out serious money for your 'very serious business', the commercial implementations Allegro CL and LispWorks provide maintained tree shakers and all kinds of fancy application-delivery features. You might check out the link I gave you above instead.

I may not have made myself clear: I mostly wanted to know how to push them past that point into being robust. Tree shakers require that you look at every function-bound symbol and determine whether that symbol appears in the call graph of the main function(s) or in an isolated call graph. The functions that follow the call graph exist in most Common Lisps, but they are not standardized; they're implementation-dependent.

That's part of the reason tree shakers aren't widespread. Another reason is that if you're using the compiler in your runtime, even isolated parts of the call graph can still be useful, so tree shaking becomes something of a policy decision rather than a pure algorithm. The functions may be reachable through nothing more than symbols in a list in the code (see the sketch below). That's what I was getting at, but your example is better because the compiler isn't really even being used at runtime there.
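
A hypothetical sketch of that situation, written in Racket rather than Common Lisp: the handlers below are reached only through symbols stored in a list, so no static call site ever names them, and a purely static tree shaker would wrongly consider them dead:

    #lang racket

    (define (handle-ping msg) (printf "pong: ~a\n" msg))
    (define (handle-quit msg) (printf "bye: ~a\n" msg))

    ;; After their definitions, the handlers are only ever mentioned as
    ;; symbols in a list and looked up by name at run time.
    (define-namespace-anchor here)
    (define ns (namespace-anchor->namespace here))

    (for ([name '(handle-ping handle-quit)])
      ((eval name ns) "hello"))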

That's why we tend to sound curmudgeonly about the issue. It's a valuable learning exercise, so feel free to dig in; you'll gain some valuable insights into what makes Lisp special. Hmm, the usual tree shaker might not tell you much about the internals of the compiler itself; a compiler is just one part of a Lisp system, alongside the runtime, the interpreter, various libraries, and so on. It's an interesting project, but not that easy for a complex piece of software like SBCL, depending on what the goals are.

All the CL people I've worked with have been amiable. Though I suppose you could say: what high-powered hacker wouldn't be in a good mood if they were getting paid to hack Lisp? I don't recall any curmudgeonly behavior in person, but being a bit "critical" is often a useful role for an engineer to play, if they can back it up and discuss. A useful mode of engineering discussion involves people making assertions, thinking aloud, and being challenged, and together you improve the ideas and generate new ones.

Sometimes it's appropriate to suddenly look at each other and start jumping up and down and shouting, like you're in a movie, because you've just hit on a solution that has passed your preliminary tests of critical thinking. If you're jumping up and down all the time, I suppose that could turn into incestuous amplification. The Scheme and Racket communities are also good. Once there's money involved, you'll get more of the less-desirable behavior from some individuals and groups: promotion of personal brands, posturing and jockeying for opportunities, SEO games and huge amounts of Web-search hit noise from that, marketing puffery rather than engineering straight talk, non-sharing, sometimes commercial land-grab games with the platform itself, sunny-sociopath workplace cultures, etc.

Not that a community can't be great even when there's money involved, but "really, there's no money in this -- it's only for the merits and the community" seems to scare away a lot of that behavior, and the people who are attracted anyway set the tone. I can't say anything about its quality because I don't do CL.

Yeah, they're kind of assholes. Racket has a much nicer community, though. Yeah, the Racket community is super sweet. What can I say though, I like the Common Lisp experience; I should give Racket a second shot. I love Chez. Kudos to stylewarning. One of the benefits of Lisp-based languages is that they usually come with powerful macro-based meta-programming facilities.

Over the years, I've lost my enthusiasm for powerful meta-programming facilities like Lisp macros. The underlying languages are Turing-complete and don't strictly need meta-programming, and most modern languages aren't lacking in abstraction mechanisms that are available without meta-linguistic alteration.

Like operator overloading, sophisticated macro systems change the semantics of program source code in ways that are not obvious. They allow new variants of the programming language to be created willy-nilly, placing demands on me, as the reader, maintainer, or user of a package, to fully understand the implementation of the meta-linguistic features. Powerful macro systems encourage a thick frosting of magic to be applied on top of the implementation of complex systems.

Some systems, like Lisp or Scheme or TeX, would be difficult to use without macro extensions, but it seems to me that identifying a good set of built-in abstractions for writing programs and building the language around them is a better approach. I am very grateful for the TikZ graphics package for LaTeX, which is built entirely out of TeX's crazily flexible macro system, but I'm even more grateful that I've never had to touch its source.

Take a peek at: [1]. I think it all comes down to what kind of programs you're writing. For example, without a powerful macro system, something like the nanopass framework [1] would not have been possible. Sure, you could write an external code generator, but at that point you've just implemented a bad macro system. I only learn new languages to be able to do something new.

Then Pascal to have complete flexibility of data structures. Then I learned Java because you could do applets with that. Then HTML because you could make the browser display what you want. Then Python because PHP sucked for console programs and JS because you could make things happen without bothering the server.

Then C# because that was the comfiest way to make desktop apps once Delphi kicked the bucket. Then I learned Ruby because I was assigned to a project written in it, but I have promptly forgotten it since, because it could only do the things Python could already do.

Then I got back to Java, since you could make your phone do what you want with it. I still retain it for occasional utility, like extending Solr or playing with Apache NiFi.

The only language that kind of breaks away from this pattern of necessary immediate empowerment was CoffeeScript. It exactly mirrors my way of thinking and was just an inch away from the pseudocode I had used for my notes since primary school.

But then ES6 came along and gave me enough of CoffeeScript to almost be fine without it. The final nail was TypeScript, which gave me the stuff I wanted: smart, fast code completion and type checking in the places where I wanted types. Now if I could just have an editor that could display curly braces as indented blocks, Python and Coffee style, I'd be perfectly happy with the state of browser coding.

I tried Go, Elm, Haskell, and Scala, but nothing stuck or even went beyond simple programs. Nim was interesting because it allowed you to run code at compile time to transform code, like Racket macros. Rust so far has the biggest potential, because it allows you to have code running concurrently without crazy bugs by forcing you to specifically track who owns what and for how long.

That might be useful for making my programs faster at some point, since multicore is now firmly a thing. I love Scheme and Racket. Racket supports multiple paradigms well without being as bloated as Common Lisp, has a really good optional gradual type system, a really easy-to-use reader macro system, and so on. I have always believed that the way to achieve real 'domain-driven design' is to create a layer of real business language that can be interpreted into a software system.

However, every time I want to do this, I find that Clojure is actually a much better choice. I guess being fully practical is not the number-one priority for Racket right now. But I really hope Racket can improve on some of the following: 1. Encourage efficient data structures by default. I know lists are the soul of Lisp, but it's not good to use lists for everything.

Clojure by default lets you use highly optimized persistent data structures, namely vectors and hash maps. These two data structures are highly practical and performant in most cases, whereas lists are more of a write-heavy data structure with really bad read performance (see the sketch below). It's like how a plain file system writes faster than a database, yet most websites use a database, because most businesses have far more reads than writes. 2. Clojure has several really solid ClojureScript workflows, which makes ClojureScript feel like a first-class citizen.
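
A rough Racket sketch of the read-performance point, using toy data made up for this example; Racket offers all three structures, so the choice is up to the programmer:

    #lang racket

    (define n 1000000)
    (define xs (range n))                              ; linked list
    (define v  (list->vector xs))                      ; vector
    (define h  (for/hash ([x xs]) (values x (* x x)))) ; immutable hash

    ;; Reading the last element: the list walk is O(n), while the vector
    ;; index and the hash lookup are effectively constant time.
    (time (list-ref xs (- n 1)))
    (time (vector-ref v (- n 1)))
    (time (hash-ref h (- n 1)))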

3. IDE and debugging. DrRacket is really good, but still not great for editing hundreds of files. 4. Frameworks: I guess that if the other three points were really good, there would be good frameworks coming out every day. Immutable hashes are highly impractical when you need regular mutable hashes, which is most of the time. I have to type this as a quick stream right now.
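
For what it's worth, Racket has both flavors; a minimal sketch of the difference:

    #lang racket

    ;; Immutable: hash-set returns a new table and leaves the old one alone.
    (define counts (hash 'a 1))
    (define counts* (hash-set counts 'b 2))
    (hash-ref counts 'b #f)    ; => #f (unchanged)
    (hash-ref counts* 'b)      ; => 2

    ;; Mutable: make-hash gives a table you update in place.
    (define tally (make-hash))
    (hash-set! tally 'a 1)
    (hash-update! tally 'a add1 0)
    (hash-ref tally 'a)        ; => 2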

We could consider this an educational opportunity: there are still times when knowing how to do old-school list processing is exactly what you need, and lists are a pretty fundamental data structure.
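
A tiny example of the sort of old-school list processing meant here, using the classic higher-order trio:

    #lang racket

    ;; Keep the odd numbers, square them, and sum the result.
    (define xs '(1 2 3 4 5))
    (foldl + 0 (map (lambda (x) (* x x)) (filter odd? xs)))   ; => 35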

This turns out to be useful for optimizations, as well as encouraging a healthy amount of functional programming. Regarding 2, good point: I raised the WASM issue a couple of years ago, and my thinking then and now is to build it for the forthcoming Chez backend, while getting plugged into the WASM standards work in the meantime. Regarding 3, if Geiser works for you, great. If you're up to adding some features you'd like, there's an extension mechanism, and you can also send git pull requests to the DrRacket source itself.

I'm hoping a couple of startups use Racket to get to launch and release the light frameworks they make along the way. I really recommend that book, not just this section. I really enjoyed it, and it was crucial for me in creating a talk I gave, after reading it, about the fun of creating little languages.

As I understand it, the beauty of languages like Lisp and Racket is that you can easily compose functions. Basically, as I read source code, I'm reading a "programming composition sheet", just like sheet music.
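
A small sketch of that reading of code as a composition sheet (slugify is a made-up name for this example):

    #lang racket

    ;; Read right to left: trim, then downcase, then replace spaces.
    (define slugify
      (compose (lambda (s) (string-replace s " " "-"))
               string-downcase
               string-trim))

    (slugify "  Why Use Racket  ")   ; => "why-use-racket"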

The only annoying thing is figuring out how to keep the brackets from distracting from the content and its layout. I am not sure this argument is a given.

Immediately, I see a mismatch between what I want to program -- an objective, an algorithm, a procedure, a series of side effects that the axioms of the programming language cannot directly express -- and the confinement of everything being an expression.

An expression is a value. So to program in a language where everything is an expression is to map our ideas into values. This may be natural for programs that are seeking a value, but often it is not.

Even for programs that are seeking a value, the bulk of the program is about controlling the process of finding that value. There is no easy, or even correct, way to map a process and its side effects into a value. Math is logic, or equivalence, and to establish equivalence is to discard the effect of the path, the side effects. Therefore, to map the desired process and side effects into values, we have to add back the implicit knowledge of how these values are actually transformed.

Instead of directly stating the transformation of values -- an imperative programming style -- we express it through dependencies and rely on an understanding of how those dependencies are resolved.

The latter is very hard. Of course, in practice we give up fine control of our program to some extent and rely on the compiler implementation to give us the desired result -- the path and its side effects -- so that we only need to worry about the value. The usual argument for everything being an expression is composability, although I thought the real reason was the ease and flexibility of writing compilers for such a language.

This is similar to the argument that if every piece is a Lego piece, then building something is easy. Well, it depends. First, we need to accept that Lego pieces are all we have. Second, we have to be satisfied that what Lego pieces can build is good enough for our needs. There are amazing Lego projects, but I would hardly call them easy.

Well, in languages like Scheme and Lisp, expressions return a value, but they can also be control structures. For example, an IF form is an expression, yet IF is also a control structure, branching on something like rocket-engine-running? (see the sketch below). When expressions are not pure values -- when they can also be control structures -- isn't the argument for "everything is an expression" defeated? A control structure is for defining the path, the control flow. The values in a control structure are a side effect, just as control flow is a side effect in an expression language.
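
A sketch of both roles of the same if form; rocket-engine-running? is a made-up flag, following the commenter's truncated example:

    #lang racket

    (define rocket-engine-running? #t)

    ;; As an expression: the if produces a value that gets bound.
    (define status (if rocket-engine-running? 'nominal 'abort))

    ;; As a control structure: the branches are run for their effects.
    (if rocket-engine-running?
        (displayln "keep telemetry flowing")
        (displayln "fire the escape tower"))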

When control flow is the center of the logic, won't a control-flow-oriented language -- imperative programming -- be more straightforward? Bugs sneak in because of the discrepancy that while the syntax is all about values, the semantics is all about flow. It gives us, as developers, the freedom to decide: do I want control flow, return values, or both?

Racket is great and fun. I find it truly amazing that after all these many years people are still trying to justify Lisp. Frankly, if it were going to be adopted en masse, it would have happened by now, and no amount of explaining is going to change that. There are many reasons why languages get adopted, and "logic" is not the primary one. Take JS, for example: it would have been in the dustbin if not for the fact that it is the only language that runs in browsers. I have a hard time believing it doesn't, but I can't find it.

I'm thinking of something like API-Wrap. Riposte doesn't look all that useful outside of a test framework. Nice write-up. I'm waiting for the Racket-on-Chez effort to bring more optimizations and multi-core access. Just to be clear: multi-core already works with Racket places, each of which basically runs a separate Racket VM on its own thread (see the sketch below).
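
A minimal sketch of a place, close to the example in the Racket Guide; save it as a module and run it with racket -tm on the file, or call main from the REPL:

    #lang racket
    (require racket/place)

    (provide main)

    ;; A place runs in a separate instance of the Racket VM on its own
    ;; OS thread, so it can use another core; communication happens
    ;; over place channels.
    (define (main)
      (define p
        (place ch
          (define msg (place-channel-get ch))
          (place-channel-put ch (string-append msg ", world"))))
      (place-channel-put p "hello")
      (place-channel-get p))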


