|Saturday, May 14th, 2011|
|Writer's Block: Tobacco road
Would you want your city to outlaw smoking on public streets? Why or why not?
Like so many other things that we do to excess, tobacco can ruin lives. But so can excessive alcohol, excessive high-calorie foods, excessive weight loss (anorexia), excessive sodium (but apparently for only 10% of the population), excessive online gaming, etc., etc., etc.
I've been known to smoke a cigarette at a party here and there. I've never gotten addicted, so unlike the addicts, I still get a nice buzz from it. I don't do it very often because nicotine is a stimulant that interferes with my concentration and sleep. The effects on me are going to be less than what I get from being around diesel exhaust. It's your 3-pack-a-day smokers who are the ones who are getting lung cancer. Tobacco is highly addictive (for most people, I assume), so people have trouble quitting. Quitting results in stress, headaches, and weight gain. For them, it would have been better to never start in the first place.
But having personal freedom, like we believe we do in the USA, involves having the freedom to make choices that may go wrong. And we just have to learn to control the risks. It's one thing to restrict smoking in enclosed places. Smoking CAN be unhealthy, and so it makes sense to avoid concentrating it. But in the open air? That's absurd. We're harmed more by vehicle exhaust and other pollution than we ever would be from tobacco smoke. Let's outlaw the soot billowing out of our cars before we outlaw smoking.
Outlawing smoking would just treat a symptom. If people couldn't smoke, they'd abuse Nicorette or some other substance. The real problem is that too many Americans lack self-control. Discipline. It's discipline (and ambition) that drives you to accomplish goals. It's also discipline that keeps you from overdoing it with a recreational drug. Or watching too much TV. Or playing too much World of Warcraft. It takes discipline to eat a varied diet with lots of vegetables, but the rewards in terms of well-being are great; too bad so many people can't bring themselves to do it.
My personal struggle with addiction is with caffeine. When you step back and look at it, $2.60 x 2 per day for a cup of flavored water from Starbucks is a bit ridiculous. That's nothing like the expense or plight of a chain smoker, but it's analogous on a small scale. I was driven to have my coffee. The problem was that coffee would give me a major spike of nervous energy (that made it hard to concentrate) followed by a crash (that also made it hard to concentrate). It's also possible that I have a mild intolerance (like a food allergy, but not deadly) to coffee beans. This interfered with my work. It's been hard, but I've switched to black tea. I don't get the sudden spikes, but I do get some benefits from caffeine, and I don't get the withdrawal symptoms of quitting entirely. Some of you may not think that caffeine addiction is in the same class as nicotine addiction, but ask anyone who's tried to quit caffeine how hard it is. The headaches are murder.
|Friday, March 30th, 2007|
|What Apple gets right and why Linux keeps slipping behind
I'm a huge Free Software enthusiast. I founded the Open Graphics Project because I want to promote Free Software. I want people, users of computers, to have freedom and control over their computers. I don't want greedy corporations to limit our freedoms and hold us hostage to their DRM, unreliable licensing mechanisms, and a tendency to leave us holding the bag when they go out of business or decide that we haven't paid them enough money. I believe that if you have a piece of technology in your possession, then you have every right to take it apart, understand it, reverse engineer it, and use it however you like, until you decide you no longer want to. Of course, there are limits. Other people have rights too, and rampant piracy is not a good thing, but that is beyond the scope of this blog entry.
So, I think Free Software should rule the world, and I believe that it is an ethical issue, not merely a practical one. I feel that I am a minor activist in this regard. The more Free Software that people use, the more freedom we'll all have, and I am trying to do my part in my own community project.
At the same time, I'm also an engineer and a scientist. I think about the systems I use, critique them, and consider how they can be improved. As part of my Ph.D. studies at OSU, I'm minoring in Cognitive Engineering. Putting Freedom into people's hands is useless when those people can't figure out how to use what they have. Good design and usability are very important. I haven't paid enough attention to the recent discussions between Linus Torvalds and GNOME developers, so I can't address them directly. But what I can say is that a learning curve is not a bad thing. While it's good to think about the total novice, it's even more important to have consistent and logical mechanisms. This way, if someone has to learn something new to use the computer, they have to learn it only once. This is why I think it's good that Apple and Microsoft have UI development guides that encourage developers to make their apps act consistently with other apps in areas where their functionalities conceptually overlap. And this is where I start to get disappointed with GNU/X11/Linux systems.
This is a blog, not a book, and I have work to do, so you'll forgive me if my points of discussion aren't very well organized. If this were a professional article, I would have researched things more carefully, providing detailed citations. I provided a long intro to set context and explain why I think it's important that Linux systems make some very radical catch-up changes to the organization of their systems. What follows are some ideas regarding areas that, once fixed, would bring Linux systems into the 21st century and let them compete as well-engineered systems against the likes of Apple. Microsoft has copied a great deal from Apple, but they clearly don't grok the underlying concepts that Apple tries to employ. Alas, I fear that Linux developers don't grok these ideas either, and like Microsoft, they're too afraid to change, sticking to outdated methods in the name of backward compatibility or inertia or both.

Application executables and resources
Apple didn't invent the idea of storing all application-related files in one place. I think I read somewhere that an early version of RISC OS did it this way, but Apple has adopted this idea very nicely, and Windows isn't too far behind. The basic idea is that most files for an application are stored in one single place. The application NeoOffice, for instance, has a directory (anywhere you want to put it, really) called NeoOffice.app. Under that directory are all of the binaries and data files associated with the base install of that app. Some structure is imposed so that MacOS knows where to look to find things, but the main idea is that if you want to run or manage this app, there is one place to look. Installing or uninstalling is a matter of drag-and-drop. Running the application is simple too--double-click on the directory, and the right thing happens, with the folder acting as a representative, to the user, of whatever is in the folder. For the end user, this keeps things painfully simple.
Windows isn't too bad on this front. Under the Program Files directory, you find most of your major apps. The only thing they're missing is the directory-as-proxy mechanism. You have to know a little more in order to find the executable and run it, if you haven't already got a link to it somewhere.
Linux, unfortunately, sticks to its UNIX heritage of spreading things out all over the file system. Executables go in /bin, /sbin, /usr/bin, or /usr/local/bin. Shared objects go into /usr/lib or somewhere under /usr/local. Configuration files (all plaintext; see below) go somewhere under /etc or somewhere under /usr/local. There is a standard that describes in detail how things should be structured, but it's much too brittle. In theory, conventions, even complex ones, can be strictly followed, and everything works out well. But this is an example of a convention that adds just enough complexity and confusion that it doesn't get followed consistently, and the one left holding the bag is the end user, who can't figure out where their files are when something goes wrong. I've been using Linux a lot longer than MacOS, but while I can easily find whatever I want pertaining to an app in MacOS, I am clueless about Linux. And I've even installed a good number of Gentoo systems. The distinction is that the UNIX way is logical but arbitrary. The Apple way, on the other hand, is simply intuitive. That's why it's easier to remember and use.
Here, like for most of this article, I'm going to be pragmatic and suggest that Linux people adopt a Microsoft-like strategy: If you see someone doing something in a way that works better, adopt it. So, the solution for Linux systems is to gradually deprecate all of this /bin, /usr, /etc confusion (except perhaps for the most basic of system tools like find) and adopt a system that collects all files of each app into its own directory. And this should be done even if there is some redundancy! I think one of the ideas behind the UNIX way is that many apps will share resources. This was a good thing in the 1970's when resources were scarce. Today, however, this sharing often results in version conflicts that break apps and make life hard for users. Think of the users and make things intuitive, even if it results in some minor increase in complexity or redundancy for software developers.

Configuration data
Microsoft almost got this right. With the introduction of Windows 95 came the introduction of a centralized registry for all system and application configuration data. It's a hierarchy of key/value pairs accessible from any application. In principle, this is a rather nice way of doing things. The OS provides a simple mechanism for apps to keep settings in a database.
Users of Windows and other systems panned this approach for one good, practical reason: It's unreliable. Although things have improved, registry corruption is still the bane of the Windows user. Install a new app, and it or the OS does something weird, and bang!, the whole registry is hosed, requiring a complete reinstall of the OS. Single points of failure are bad.
Because of a strangely black-and-white sort of thinking, GNU/Linux systems have avoided anything remotely like a registry and stayed stuck in the dark ages with respect to config files. Every application keeps its own config files, stored in seemingly random places, in its own special format, in plain-text ASCII. Despite being littered with comments, most of them, when you can find them, are basically impossible to figure out. Some apps provide GUI tools to edit them, which is great, but they're not always available.
But an even bigger problem, the one thing that made me ditch Gentoo completely and get frequently very annoyed at Ubuntu, is the nightmare of application upgrades. If you have made any changes at all to an application's config data, and then you want to upgrade that app, you have three choices: (1) Keep the old config file, not benefitting from any new options or defaults provided by the upgrade; (2) use the new config file, tossing out all of your changes, requiring you to redo them all; or (3) go through a painful largely-manual merge process, often guessing about the meanings of any new options you find and whether or not your old change is appropriate for the new version. From first-hand experience, I can tell you that this is a very tedious and error-prone process that can leave applications completely broken. For instance, every minor KDE upgrade under Gentoo would leave me finding that some standard KDE app was completely missing, and no one in the Gentoo forums could help me figure out what happened. (And that was after manually sifting through a few hundred config files that were updated.)
This total lack of a standard is a huge embarrassment for Linux.
Apple, as always, struck an elegant balance. In two intuitively predictable locations (one system-wide, the other per-user), you can find a big directory full of property list files. Each .plist file is associated with an application. This approach solves a number of problems. First, an application editing its own private registry won't corrupt anyone else's, because they're all in separate files. Second, if you want to find the config settings for an app and edit them directly (it does happen now and then), you can isolate them, back them up, delete them, or whatever, without affecting any other apps. Bad things happen in MacOS just like in Windows. But while removing the registry entries for a Windows app can be difficult and dangerous, dragging a .plist file to the trash under MacOS is a simple and safe way of fixing many application problems. On top of all of that, upgrades are always easy, because structured config files are easy to merge. The format of the .plist files is either ASCII XML or a binary encoding of the same data (faster and smaller but not editable in vi).
My suggestion for Linux: Switch to using ASCII XML config files for all apps, highly restricted to store only key/value pairs, and organize them all in a single central location. Provide a shared library that all apps can link to that manages these mini-registries. Make virtual merging of system-wide and per-user config files seamless and automatic in the shared library, often making upgrade merges unnecessary. With this approach, management becomes as simple as it is for MacOS. Upgrade merges become easy, because a generalized system can be used on any config files, regardless of which app they're for. We could also take it one step further and provide meta-data to the generalized merger that tells it how to resolve conflicts between upgrade values and user-supplied changes.

Clipboard
X11 basically lacks a clipboard. Instead, the basic X11 cut/paste mechanism is based on a convention of storing information in atoms on the root window. This is another example of a lack of standards. Due to the severe limitations of the basic convention, the desktop managers have devised their own extensions to it in a valiant attempt to add some usability. Unfortunately, they all do it in different, incompatible ways. As a result, GNOME, KDE, and Motif apps don't interoperate well. For instance, one of my favorite editors, Nedit, won't work correctly with the clipboard under KDE or GNOME. The problem here isn't a technical one, however. Like so many other problems with Linux, it's all about ego. Everyone thinks their way is the best and has no interest in adapting to other conventions. The developer of Klipper (KDE's clipboard manager) has even gone so far as to publicly refuse to work with other developers to resolve these problems. He wrote a long article describing the challenges and his solution (a very sensible one), but then goes on to basically say that it's everyone else's responsibility to adapt to his way. I and others have tried to talk to him about the problems of Motif/Lesstif, but he just blows them off. The fundamental problem is not that he won't work on the other apps (that would be unreasonable, because he's a busy person), but that he won't even talk to other developers and try to come to a working compromise. GNOME has similar problems. Of course, they're not totally to blame either; I've also filed bug reports on Lesstif, just to be completely ignored. No one wants to work together.
Some links on this topic: Article by the Klipper developer, Ubuntu bug report, Nedit list discussion, Lesstif bug report, KDE bug report, Another KDE bug report.

Ego is the problem
Here we are exposing one of the fundamental problems of Linux systems. It's all about ego. Windows users are elitist snobs because they use the dominant platform. Apple users are elitist snobs because they use the "cool" computers. And Linux users are elitist snobs because they use Free Software. What sets Windows and MacOS apart from Linux is that Linux doesn't separate the developer community from the user community. This has obvious advantages. But it also has some major drawbacks. To a noticeable degree, Linux snobbery permeates the development process, while Apple and Microsoft keep a much cleaner separation. Most Linux developers do what they do because they have a passion for it, which is great, but that also means they are sometimes stubborn and unwilling to cooperate with others who think differently. At Microsoft or Apple, a committee decides on APIs and system conventions, and then some coder is paid to implement it that way; as a result, applications see a consistent organization and set of system services. Linux follows a completely distributed development model, where often, it is individual developers who make unilateral decisions about system organization and architecture, quite frequently eschewing established standards (when there are any). I don't know how to fix this problem, but I believe it is a major barrier to Linux's success on the desktop. If it can't be fixed, then Linux and Free Software won't dominate.
This reminds me of a discussion I observed when I was on the Linux Kernel Mailing List. Kernel messages pass through a user-space daemon to be logged. In general, because message strings take up space, we would prefer not to keep them in the kernel. Instead, if we could number them, we could then have the userspace daemon translate them on their way out to the log file. Many people agreed that this would be a good idea. But it was ultimately rejected. Why? Because there could not possibly be a central authority to manage the numbering of these messages. Kernel development is just way too distributed. What a pity. As it turned out, however, the messages only took about 6% of the kernel space in the first place! So, really, there was no big loss. But this is a good, clean, sad example of the sorts of challenges that Free Software developers face that are not problems for commercial developers.
Perhaps someone will respond by saying that too many standards restrict creativity. And to a point, I agree. If the system architects try to dictate everything down to the finest detail, then you'll end up with a monoculture, and everything will be totally boring. But notice that MacOS isn't boring, despite having standards for things that Linux doesn't even address.
My suggestion to the Linux community: Put aside your egos and work together. Stop all of your egocentric bickering and let reason work out who is right and who is wrong. Develop good standards and stick to them. Of course, no one can develop the perfect system, so you eventually have to declare a version "good enough" and move on. But be willing to make drastic changes when a better idea comes along! Empirically, we can see that Linux does many things in a way that is inferior to other systems; adopt those ideas quickly. And most importantly, put some thought into how the end user (novice and power user) will interact with the system, at every level. Taking a lesson from Donald Norman, we should strive to put more knowledge into the world and reduce how much knowledge must be kept in our heads. If you have to make a choice, favor intuitive options over ones that require learning, and when something does have to be learned, be sure that your new convention applies as universally as possible, minimizing what has to be learned in the future.

Conclusion
One of the things I'm implying here is that the Linux community should form standards committees. I've been indirectly involved in standards committees, such as the VESA DPVL committee, and it can be a painful, glacial process. But this pain is necessary if we're to come together and develop elegant systems that work well in the enterprise and on the desktop. Indeed, there are already many Free Software community organizations that develop standards. DRI is one such example, providing a rendering infrastructure that fits into Linux, *BSD, and Solaris. Similarly, my friends with the Open Hardware Foundation
are working hard to bring together hardware vendors and projects to develop hardware standards for the good of Free Software. It's been pointed out that Linux isn't any more of a bazaar than Microsoft is a cathedral, so let's not be afraid to develop hierarchies, particularly ones that make it easier for us to nail down what would otherwise be arbitrary choices anyhow. Linux distros and community projects can add representatives to committees that decide things that would constitute sweeping changes to the way we do things. Without a democratic body to do this, it may be impossible for us to identify, decide on, and make the changes that we really need to make to keep up.

Update
[10:07pm] A very kind commenter was nice enough to point out GoboLinux. This takes the NeXT/MacOS approach even farther and packages even basic system tools, like the BASH shell, in their own application directories. Also check out the Wikipedia article on GoboLinux for an overview. I'm downloading it right now, and perhaps in a later blog post, I'll mention some of my impressions. Just because the files are organized better doesn't mean it's got the same surface polish as, say, Ubuntu, which is great to use unless you have to tweak the underlying system directly. So we'll see. Thanks!
[10:48pm] Well, I downloaded the GoboLinux ISO and tried to install it under Parallels. Parallels instantly pops up a bug report dialog, telling me that there's been a fatal error in the VM. So I submit the bug report. Speaking of Parallels, I've tried emailing them on a number of occasions regarding technical problems, only to be completely ignored. One time, I emailed them just to ask if they actually ever respond to customer emails. They ignored that too. The email is getting through, because I do get the automated response. It seems that Parallels is not too interested in customer service. Especially considering that they charge you the full price every time you upgrade, I have every incentive to consider something like VMware instead next time. It's probably rude, unprofessional, and inappropriate for me to make a comment like this in this context, but it really makes me angry when I pay $80 for a piece of software, and the vendor treats me like I don't exist. It's one thing if you don't get support from projects done by unpaid volunteers, but it's entirely another when you pony up money for a broken piece of closed-source software.
[11:06pm, March 31] Still haven't tried GoboLinux. I'll have to try a live CD on real hardware some time soon. Some people also mentioned "klik", which I think is also a brilliant solution. It looks to me like I am not suggesting anything that hasn't already been tried. Well, it was already tried on MacOS and whatever it inherited from, but what I mean is that there are Linux people out there who are actively working on these ideas too. Some combination of GoboLinux and klik, I think, is exactly what we need to get out from under some of the architectural cruft that holds Linux back and start becoming competitive. Ubuntu does a great job of putting a nice face over a disorganized mess, but that's all it is -- a bandaid. We need to design systems to be intuitive from the ground up, not paste makeup on top of blemishes. Once all of that is fixed, we can start looking at what another commenter mentioned: how the Mac does code reuse brilliantly.
|Sunday, February 5th, 2006|
|Why I hate OpenOffice.org 2.0
Let me begin by saying that I don't prefer Microsoft Office. I'm complaining because I REALLY REALLY want OOo to be the absolute ass-kickingest suite you can get, but as far as I'm concerned, it's only 50% of the way there. Oh, it's got all the FEATURES; what it lacks is polish. There are aspects of the UI that are just brain-dead, because no one bothered to think about them. Some of my complaints have to do with usability and how inconvenient it is to do certain common things.
Even before starting this rant, I'd considered looking at some alternatives like KOffice, but I have my suspicions that the problems I have with OpenOffice are largely the result of how _I_ need to use it. I'm a Ph.D. student (doing Masters courses right now) in CS, and taking advanced CS and math courses, I spend a lot of time entering formulas, doing superscripts and subscripts, and inserting special symbols. Most users, I suspect, write short documents with everything in the same font, perhaps some bullets now and then, and maybe bold on a rainy Tuesday. Letters to grandma (who doesn't have email, which is why you're using the word processor) aren't going to be particularly taxing on any word processor.
My first complaint is that OOo is a horrible memory hog. What's worse is that the developers don't seem to do basic things that would alleviate the problem. My typical problem scenario stems from using both OOo and Firefox at the same time. They are the worst offenders. What happens after about 3 days of using them both is that my 1 gig of RAM will over-fill. Swapping kicks in and works okay for a little while, giving me little warning until SUDDENLY the system becomes unusable. I think that point comes when there's about 2 gig of active memory. The thrashing gets to the point that switching applications takes minutes.
My instinct is to start saving documents and closing documents that I don't really need to have open. But guess what. That doesn't help one bit. I start out with 10 documents open, and OOo is literally using 1 gig of memory. I close all but one. OOo is STILL using 1 gig. I wait. It's still using 1 gig. How can you close all but one document and still have it hog that much memory?
I know why, actually. Linux and most UNIX variants allocate memory from the kernel using the brk system call. Most programmers are familiar with malloc(), but that's just a libc library function that does memory management in your process space. When you allocate memory, and there isn't enough allocated to your process, malloc calls brk, which gets the kernel to add more pages to the code/data segment. When a process calls free(), the memory doesn't necessarily get returned to the kernel. The kernel is only managing one logically contiguous chunk of memory. If what you free is the LAST thing in memory, it can be returned to the kernel. If not, there's nothing you can do about it. Only that process can reuse that memory (when it does another malloc).
The problem begins with the fact that OOo keeps EVERYTHING within one single process. Actually, it is multiple threads, but all the threads share the same address space. I'm working along on the last document that I created, so it's at the end of the address space. When the thrashing begins, I start closing documents that are at lower addresses. OOo can't return those pages to the kernel, because they're not at the end of the address space.
Now, you'd think that the thrashing would stop. After those pages are freed, they're not needed any more. They should get swapped out and stay in the swap partition until the process allocates memory again. Instead, the thrashing just keeps going. Why? My guess is that not all of it is really free; both OOo and malloc have data structures hanging around in those pages, so they still get touched a lot.
How can OOo developers solve this? Simple: Put each document into its own independent process.
I understand the advantages of using a single multi-threaded process; among other things, it helps unify the user interface, and it helps deal with objects that are embedded or linked across documents. But in my opinion, the costs are not worth the benefits. OOo reminds me of using Windows 98. You have to "reboot" it every couple of days. And restarting OOo takes almost as long because of all the thrashing.
Of course, this is really only a symptom of the fact that OOo is just a terrible memory pig. A study was done that showed 2.0 taking up more memory per document than either earlier versions of OOo or Microsoft Office. The MSOffice numbers may have been misleading (all the DLLs), but I've NEVER had MSOffice bring a system to its knees like OOo 2.0 does. This is TERRIBLE. And as far as I can find, there is no high-priority effort going on to fix this.
Next, OOo is terribly crash-prone. The first day I started using 2.0, I had it crash three times within an hour. I don't know why it got better after that. Perhaps when it imports a .sxw file, something funky happens, but if you're using native open-doc files, it works better. OOo, unlike many crash-prone programs like Konqueror and Firefox, actually has a crash-recovery facility. Too bad it doesn't work very well. Invariably, what it "recovers" is the document as it was the last time I manually saved it. Is there an auto-save feature I have to enable? Why isn't it on by default? And of course, it tells you that you can send a crash report but doesn't actually give you the button it tells you to click. *sigh*
Editing is not well thought-out. Check out this bug report I filed: http://qa.openoffice.org/issues/show_bug.cgi?id=60781
This, in my opinion, is a defect. No, it doesn't crash. No, it's not impossible to work around. But it is a flaw in the way the program works that, if fixed, would improve usability significantly. At least for me and every other scientist and engineer out there.
Oh, and isn't that an irony? One of the biggest complaints about Open Source software is that it's developed by geeks who only make it friendly for THEM to use. So you'd think that those very same people would have noticed this problem in this open source project and fixed it. Yeah, right.
Here's another bug. I've been complaining about this for YEARS, and they just brush it off: http://qa.openoffice.org/issues/show_bug.cgi?id=50019
Ever since I started using spreadsheets professionally, in like 1992, I have tended to do a lot of decorative things to them. Often, it's not just a matter of doing some simple math and presenting it. I'm often wanting to make a PRESENTATION, that just happens to be grid-like and often has numbers. I've even had a tendency to use a spreadsheet as a layout tool. Since makers of those sticky labels for your printer don't support OOo (and often provide the templates as a self-extracting Windows binary), I have to do it myself. Carefully tweaking margins and cell sizes, a spreadsheet is wonderful for label sheets.
Anyhow, a VERY frequent operation for me is to make a multiple discontinuous selection and then apply the same formatting to all cells. Really, you wouldn't believe how often I want to do this. But you'll notice that the developers didn't really listen to me. I apply borders just as often as anything else. Just because you can apply colors to a multiple selection doesn't mean you're done and can close the bug.
Usually, the problems are just minor annoyances that become big problems only because I do them so often. For instance, when you use the formula creator, it insists on putting 0.2 inches on the left and right as padding. You can change that, but there doesn't seem to be a way to set the default, and even when you set the padding to zero, it's still not really zero.
This one's just inconsistent and unattractive: I think when you have a 12-point font, that means that the height of the character is 12 points. Two different fonts at 12 points should be the same height, right? Well, OpenSymbol, at 12 points, is noticeably taller than other fonts. I'm not 100% sure that OpenSymbol comes with OOo, however.
Another complaint that I have about OOo is actually their QA web site. Although I have done my fair share, I don't like to make duplicate bug reports. But their query tool is a pain in the ass. I like how bugs.gentoo.org lets you "search" for keywords, and it finds what you're after. OOo's site makes you specify whether your search string is in the body or the description or a number of other things. Why not "all of the above"?
Another complaint is about documentation. Maybe I'm just an idiot, but I have absolutely no luck searching openoffice.org, googling, or using the built-in help when trying to find solutions to OOo problems. In the bug report I filed below, they were nice enough to tell me how to deal with the problem, but what bugs me is that I would never have known if they hadn't told me.
http://qa.openoffice.org/issues/show_bug.cgi?id=60663
Ok, I need to close the rant for today. I've actually filed a considerable number of bug reports, and sometimes, they even listen to me. That's encouraging. But sometimes, I'm just too busy. I've got to get work done. My list of complaints is short. Maybe I'll add comments as I encounter more problems in the future. I've forgotten most of the ones I've faced.
|Thursday, February 2nd, 2006|
|Why I hate LISP.
Let me begin by talking about something seemingly unrelated so I can make an analogy.
Consider VMS. In VMS, there are files, and there are devices. They are treated as two different things, and, unless I'm mistaken, you have to use different system calls to access each type of resource. Even if I'm mistaken about this, people who know about older operating systems can think of at least one for which this is a true statement. The main point is that files and devices are in different namespaces.
Then along came UNIX. In UNIX, I/O devices were put into the filesystem as device nodes. Then, you could use the same system calls to access both files and devices, allowing you to use the same tools seamlessly on both. The result of unifying those namespaces was a more powerful system. Unfortunately, there are still some things that you have to access using different system calls. For instance, getting metadata about a file, such as its length, requires a separate call like stat() rather than the ordinary open/read path.
Then there are systems, like Plan 9 and Reiser4, that take this a step further and make all sorts of things, like file metadata, available via the basic open/read/write system calls. So, if you have the build flags right, and you have a file named MYFILE, then Reiser4 will allow you to get the file's length by accessing a virtual file called MYFILE/..size (or something like that). By moving more and more things into the same namespace, the system's power is increased, because, again, you can use the same file-oriented tools to do more things.
Now, let's consider programming languages. The C programming language expresses functions and data structures using forms that, while similar, are incompatible. You can't take a function definition and just treat it as data. One of the original ideas behind LISP was to express mathematical expressions, functions, and data all in the same format. This unification of language spaces made LISP all that much more powerful. First, the language and the interpreter are simplified, because you need only one simple parser to handle program structures, data structures, and data. Second, even when executing code, a function body is processed using the same mechanisms that are used to parse lists of data that you want to process. And finally, because a function is just data, you can programmatically construct a function and then execute it.
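As a sketch of what that unification buys you (written with Common Lisp names; the exact forms vary by dialect), a piece of code can be built up as an ordinary list and then executed:

```lisp
;; A fragment of code is just an ordinary list...
(setq body '(* x x))

;; ...so we can splice it into a LAMBDA form, turn that into a
;; real function with EVAL, and call the result.
(setq square (eval (list 'lambda '(x) body)))
(funcall square 5)   ; => 25
```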
Now we get to the first part about LISP that I hate.
Let's begin by considering LISP's handling of variables. Say you define a global constant like this:
> (define F '((x y) z))
And then, in the interpreter, if you type F, it'll print out what F is defined to be, which is:
> ((x y) z)
Makes sense, right?
Then let's say you create a function in LISP like this:
> (defun myfunc (param1 param2) function-body)
You'd think that, since a function is just data (that's the hype that surrounds this language), you'd be able to just type myfunc into the interpreter and get something like this to print out:
> ((param1 param2) function-body)
But you can't.
And that pisses me off. It pisses me off because that's an inconsistency. All of a sudden, a function isn't data anymore, and there's no good reason for it. When being introduced to LISP, students have this grand unifying principle (everything is in the same format) explained to them. And then that principle is violated by functions not being treated as data.
Someone might come along and point out that this is the case for other languages. In fact, many other languages are strongly typed. Integers, reals, data structures, functions, etc. are all different, incompatible types that are checked at run-time. Maybe, you'd say, this is an example of typing in LISP. The problem is that LISP is not a strongly typed language. The parameters to a function are not given types, so regardless of what a function expects, whatever you pass in as arguments (lists, integers, strings, symbols) gets bound to the parameters, and that's what the function gets. So if a string or an integer can be bound to a variable, and you can print that out, etc., why can't you do the same with a function?
Don't tell me about LAMBDA. LAMBDA is a way to construct a nameless function, and you can assign that to a variable. In addition, there's another primitive called APPLY that applies a function to a list of arguments. So, in LISP, you CAN treat a function as data. It's just unnecessarily clunky.
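For instance (a Common Lisp sketch; the name MYFUNC is made up here), the function object behind a name is reachable, and APPLY works on it, but getting a printable definition back is implementation-dependent:

```lisp
(defun myfunc (a b) (+ a b))

;; The function object lives behind the symbol:
(symbol-function 'myfunc)              ; => a function object, e.g. #<FUNCTION MYFUNC>

;; Some implementations can recover the source; many just return NIL:
(function-lambda-expression #'myfunc)  ; maybe (LAMBDA (A B) (+ A B)), maybe NIL

;; APPLY calls a function on a list of arguments:
(apply #'myfunc '(2 3))                ; => 5
```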
For a language to have gone so far to unify the language space of functions and data, why fall so short of the goal and make something obvious like this so inconvenient? There's no good reason for that.
Now, there is Scheme. While the syntax for function definitions does still bug me a bit, it DOES treat functions as first-class data types, fixing this major beef of mine with LISP.
Here's another thing that pisses me off about LISP. Check out this page:
Scroll down until you see a looping construct like this:
> (loop for idx from 0 upto length by 5 ...
Now, I have absolutely no problem with a functional language having looping constructs. Recursion is not the best solution for everything. But this is another thing that violates everything I learned as an undergrad about basic LISP syntax. The problem is that "for", "from", "upto", and "by" SHOULD be treated by the parser as symbols; before evaluating that expression, the evaluator should look for variables by those names, find that there are no such bindings, and bail out with an error.
Now, LISP cannot go entirely without "special forms". For instance, (QUOTE (x y z)) is a special form that does NOT evaluate its argument and simply yields the list (x y z). You see, without it, the form (x y z) would be interpreted as a call to function x with arguments y and z. In order to simply express the list (x y z), you have to quote it.
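A quick illustration of the difference (the first line is the error case):

```lisp
(x y z)         ; error: tries to call a function named X on Y and Z
(quote (x y z)) ; => (X Y Z), the three-element list itself
'(x y z)        ; the usual shorthand for the same QUOTE form
```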
But this LOOP construct is just taking it too far. I know about macros too. I know that you can create forms like this and make them work. It just isn't in the spirit of the language. For one thing, typically when you have a symbol that you don't want to be looked up as a variable, you quote it, using things like 'for, 'from, etc. Secondly, these extra keywords are unnecessary redundancies. LISP doesn't use any special extra syntax for function definitions, so why should it for loops?
How about a form like this:
> (for-loop ('idx 0 length 5) ...
Since this would probably be implemented by a macro, quoting idx is probably unnecessary. But what we have here is something that is more in the spirit of LISP syntax. No extra keywords. The for-loop construct takes, as its first argument, a list containing the loop iteration variable, the start value, the ending value, and an optional step. Its remaining arguments are the loop body. Hey, doesn't this remind you of DEFUN?
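Such a for-loop really can be written as a macro in a few lines. This is a hypothetical implementation (expanding to Common Lisp's DO), not anything from a real library; the iteration variable is left unquoted, since the macro never evaluates it:

```lisp
;; Hypothetical FOR-LOOP macro: (for-loop (var start end [step]) body...)
(defmacro for-loop ((var start end &optional (step 1)) &body body)
  `(do ((,var ,start (+ ,var ,step)))
       ((> ,var ,end))
     ,@body))

;; Prints 0, 5, and 10:
(for-loop (i 0 10 5)
  (print i))
```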
The philosophy behind LISP, according to the hype, isn't built around syntactic sugar. Function definitions don't have elaborate statements of parameter types, variable declarations, begins, ends, etc. Just lots of parentheses. No, LISP does things like this, which are simple and economical:
> (let ((a expr1) (b expr2)) code)
There's no silliness with assignment statements or operators or anything like that. a gets assigned the value of expr1, b gets assigned the value of expr2, and then the code can use those variables. Simple. So why all the syntactic sugar with for, from, upto, and by? Don't need them!
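Concretely (written with standard Common Lisp LET syntax, where the bindings are wrapped in one list):

```lisp
(let ((a 2)
      (b 3))
  (* a b))   ; => 6
```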
I'm not absolutely sure, but I think the original LISP did not have that looping construct. So, it must have been added later by someone who didn't quite get it. And that's one of the problems with LISP. What we call LISP today is really a family of languages, all of which differ from each other in some way, and all of which have added syntactic cruft that was created by people who were used to using procedural languages. They didn't grok LISP, so they added junk that made LISP inconsistent.
Another thing that bothers me about LISP is the documentation. I have read books on LISP, and I have read web pages on LISP, and I hate them all. Every damn one of them seems to be written by someone who doesn't know how to write complete, sensible documentation. Invariably, they'll make forward references to things they haven't covered yet, or they'll introduce syntactic forms that they NEVER explain, or any number of other things. One page I was reading defined SETF, which was nice. But it also used SETQ in a good number of examples without ever explaining what it does. Isn't that nice?
Or how about this form:
I THINK that the # means it's a char type, and I THINK that the ' MAY mean the same thing as when it's used to quote a list. But I've never been able to find an explanation of it. They just use it and hope you figure it out. And if it's a quoted char constant, why isn't it '#c?
Anyhow, I invite people who understand this to set me straight, point me to documentation, etc. I'm complaining out of ignorance. Back in the mid-'90s when I was an undergrad, I took a course on LISP, and that's when I learned it. Since then, I have fiddled with it a few times. Now that I'm working towards my Ph.D., I'm taking an advanced programming languages course where we're asked to write a LISP interpreter as a project. We are only being asked to interpret a small subset of the language. But now that I'm faced with it, I want to make SENSE out of it. I want to understand the philosophy behind it, and I can't, because the language seems to be inconsistent with its own philosophy.