Starting with Newlisp
I decided that in the next few months, and possibly longer, I'll use the Newlisp programming language. The main reason is that I recognized in it some important features consistent with the main design goals of Lisp, which are, I believe, to provide powerful and abstract features that make hard algorithmic problems, especially AI problems, easier to solve. In the last few decades, it appears that Lisp dialects have neglected development of the core language; instead, the efforts seem to have been invested mostly in producing implementations more suitable for practical use.
Unrestricted Eval

The "eval" function is the cornerstone of the "code=data" paradigm Lisp is known for; if code is treated as data, it can be processed and evaluated during runtime. Evaluation of code is provided through application of the function "eval" to the piece of code. Somewhat surprisingly, both Common Lisp and Scheme implement a restricted version of eval, i.e. one that does not see "local variables". Newlisp's eval is not restricted in that way.
(let ((a 1)
      (b 2))
  (eval '(+ a b)))

This code evaluates to 3 in Newlisp, while in other Lisp dialects it produces the error "undefined/unbound variable." Some Lisp users claim that unrestricted eval prevents some optimizations during compilation of the code. Maybe so, but the claim that an abstract feature should be dismissed because it is slow is strangely radical and unjustified: one can simply use it when time is not critical, and avoid it when it is. I do not think it should be accepted as normal that languages like Python have a more powerful "eval" than Common Lisp or Scheme.
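The comparison with Python can be made concrete. When called without explicit namespace arguments, Python's built-in eval sees the local variables of the caller, much like Newlisp's eval (a minimal sketch):

```python
# Python's eval, like Newlisp's, sees local variables by default
def demo():
    a = 1
    b = 2
    return eval("a + b")  # eval reads a and b from the caller's locals

print(demo())  # prints 3
```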
First Class Macros

Newlisp also has simple yet powerful macros. In Newlisp, macros are just like functions, except that they accept unevaluated arguments. They are "first class citizens": values that can be used in expressions and be the result of evaluating expressions; they can be passed as arguments and returned as results of functions - or of other macros. Newlisp macros are easier to write because they can directly perform a given task, instead of generating code that will perform that task. As you can see in the following example, there is no need for backquotes and commas.
(set 'my-print (lambda-macro (x)
                 (print "argument was: " x "\n")))

(my-print (+ 1 2)) ; prints "argument was: (+ 1 2)" - x arrives unevaluated

Old Lisps, in the 1970s, had that feature. It was named fexpr, but it was dropped in favor of macros as we know them these days.
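Python has no fexprs, but the flavor of the example above can be imitated by passing the expression as source text instead of its value (a hypothetical sketch of my own, not an equivalent mechanism):

```python
# a macro-like function: it receives the expression unevaluated (as text),
# so it can both display the expression itself and evaluate it on demand
def my_print(expr_source):
    value = eval(expr_source)
    result = f"{expr_source} => {value}"
    print(result)
    return result

my_print("1 + 2")  # prints: 1 + 2 => 3
```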
Pitman's Criticism of Fexprs

According to Wikipedia:

"At the 1980 Conference on Lisp and Functional Programming, Kent Pitman presented a paper "Special Forms in Lisp" in which he discussed the advantages and disadvantages of macros and fexprs, and ultimately condemned fexprs. His central objection was that, in a Lisp dialect that allows fexprs, static analysis cannot determine generally whether an operator represents an ordinary function or a fexpr - therefore, static analysis cannot determine whether or not the operands will be evaluated. In particular, the compiler cannot tell whether a subexpression can be safely optimized, since the subexpression might be treated as unevaluated data at run-time."

This is a similar argument to the one for restricting "eval." Not only is dismissing a powerful feature because it cannot be optimized an unjustified decision, but in this particular case it is not even true that fexprs cannot be optimized. If Lisp fexprs (Newlisp macros) are written in the same style as the equivalent Lisp macros and fexpr calls are inlined, then they provide exactly the same opportunity for optimization as Lisp macro calls do. Common Lisp macro call evaluation is separated into two phases: macroexpansion, and evaluation of the macroexpanded code. Usually, although not always, macroexpansion does not depend on any value known only at runtime, hence macroexpansion can be performed at compile time. In such cases, parts of the equivalent and inlined fexpr call can be evaluated and optimized during the compilation phase in the same way. The Newlisp implementation doesn't do this - it is an interpreter, which is no problem for me, as I tend not to care about anything compile-time - but such optimizations are nevertheless possible.
Another reason Pitman prefers Lisp macros over Lisp fexprs is that he wants some kind of automatic analysis of the code. For example, say one wants to define a Lisp macro (USED-ON-PARTICULAR-WAY [variable] [expression]) which returns 1 if [variable] is used in some particular way in [expression], and 0 if it is not. If [expression] contains macros, these macros can be expanded inside USED-ON-PARTICULAR-WAY (using Common Lisp's MACROEXPAND) and analyzed, so USED-ON-PARTICULAR-WAY can find the answer it needs. This cannot be done if [expression] contains fexprs instead of macros; fexprs are like black boxes, and USED-ON-PARTICULAR-WAY cannot see what happens inside them. Pitman is partially right. Partially, because it is possible that [variable] is used in the particular way during the macroexpansion phase, and that use is not visible in the macroexpanded code - macroexpanded code does not necessarily contain all relevant information. But if we suppose that automatic analysis of the macroexpansion phase is not needed, for whatever reason, then yes, he is right. However, Newlisp macros solve even that problem, and turn out to be even more "transparent" and suitable for the kind of analysis he advocates. Here is why:
Functions and Macros Are Their Definitions

In Common Lisp and Scheme, functions are a special type of value resulting from the evaluation of their definitions. They can be used as values and applied, but their definitions are not accessible, and they cannot be analyzed or mutated during runtime. In Newlisp, functions are not results of the evaluation of their definitions; they are exactly the same thing as their definitions. Hence, they can be analyzed (and mutated) during runtime. Exactly the same is true for Newlisp macros. The following example demonstrates that feature.
(set 'my-print (lambda-macro (x) (print x "\n")))

(print my-print)  ; prints the definition: (lambda-macro (x) (print x "\n"))
(first my-print)  ; => (x), the parameter list, accessible as data
(last my-print)   ; => (print x "\n"), the body, accessible as data

So, yes, Newlisp macros and even Newlisp functions can be analyzed by other functions and macros, and that is a substantial improvement in the spirit of the "code=data" paradigm.
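A Python function object does not carry its definition as an inspectable structure, but the kind of analysis Pitman wanted can be sketched by keeping the definition as data next to the callable built from it, and walking it with the ast module (a rough analogy of my own, not Newlisp semantics; all names here are hypothetical):

```python
import ast

# keep the definition as data, alongside the callable produced from it
definition = "lambda x: x * 2"
fn = eval(definition)

def uses_variable(name, source):
    """Return 1 if `name` occurs as a variable in `source`, else 0 -
    a toy version of USED-ON-PARTICULAR-WAY."""
    tree = ast.parse(source, mode="eval")
    return int(any(isinstance(node, ast.Name) and node.id == name
                   for node in ast.walk(tree)))

print(fn(3))                           # 6 - the callable works as usual
print(uses_variable("x", definition))  # 1 - and the definition is analyzable
print(uses_variable("y", definition))  # 0
```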
Real World Problems

Newlisp is a new language, developed by one man, the American psychologist and computer scientist Lutz Mueller, and it has a small community of users and contributors, so it has fewer features than some mature Common Lisp and Scheme implementations. However, none of the missing features seems essential for my programs, and hopefully, if the language itself is successful, Newlisp libraries and native features will grow. Although we have seen that Newlisp surpasses other Lisp dialects in the expressive power of the core language, its author says that the language is developed primarily "to be useful for solving real world problems" and that "things like regular expressions, network functions and support for popular protocols like XML or HTTP are all built in to Newlisp without the need of external libraries." Some users of Newlisp say that they switched to Newlisp exactly because of that, not because of the abstract features I listed.
Memory Management

My only concern is that Newlisp does not allow data structures to share common parts in memory. Other Lisp users frequently mention that as a weakness, and it seems so at first sight. However, at this moment I'm not able to estimate all the consequences of that policy, so I rely on the expectation that Mueller, who was able to single-handedly improve the basics of Lisp, didn't make a significant mistake in this part of the language. I hope that I'll learn how to compensate for that limitation. One possibility might be exploiting the fact that data structures can still contain the same symbols, and symbols can be used as references to data structures.
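To make concrete what "sharing common parts" means, here is the Python behavior that Newlisp's memory model rules out (an illustration only, not Newlisp code):

```python
# in Python, assignment shares structure: x and y name the same list object
x = [1]
y = x
y.append(2)
print(x)  # prints [1, 2] - the mutation through y is visible through x

# Newlisp instead copies values on assignment; the closest Python
# analogue is an explicit copy, which leaves the original untouched
z = list(x)
z.append(3)
print(x)  # prints [1, 2] - x is unaffected by changes to z
```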
(set 'x (list 1))
(set 'y x)    ; y receives an independent copy of the list, not a reference
(push 2 y)    ; y is now (2 1)
(print x)     ; prints (1) - x is unaffected

I'll certainly come back to that topic in the following months.