February 29, 2012

Hiring the Wrong People

Several years ago, this famous rant by Bill Gates (here's a more readable version) made its way around the internet. What was remarkable about it was that even Bill Gates himself flaming his underlings about the very things everyone else was complaining about did almost nothing to improve Windows. The improvements that were eventually made are usually attributed to outside influences, especially Apple and the rise of tablet PCs.

This really isn't that remarkable to anyone inside the software industry. I've been an intern at Microsoft and I've seen all sorts of hilariously misconceived practices and rules and regulations meant to bend employees into some magical definition of an awesome developer that is usually, itself, hopelessly misguided. Almost the entire modern business world runs on the idea that you hire someone and then tell them how to behave. It's an authoritarian mindset derived from the idea that employees should be obedient little robots. You hire managers to manage the employees, and when you have too many managers you hire department heads to manage the managers, and so on.

This is not how you manage creative people. Creative people should not be told what to do, only given a general direction to walk in. You should hire a creative person because you want them to do what they were going to do anyway. If your operating system is bloated, then hire someone who thinks that it shouldn't be bloated. You don't have to TELL that employee to make the operating system less bloated, because they already want it to be less bloated, so they'll just make it less bloated. If you don't trust them to do a good job making the operating system less bloated, you shouldn't hire them.

In this context, the entire modern concept of a CEO driving the direction of a corporation breaks down. You do not drive the direction of a company by telling employees what to think. You drive the direction of a company by hiring people who think in the direction you want to go. Creative people work best when you simply let them do their jobs.

If you hire a web designer, you don't tell them what to do. You tell them what the end goal is, what the restrictions are, and let them figure out the best way to do it. The entire concept of a CEO going through potential website designs until he finds one he "likes" is utterly ridiculous, since you hired the designer precisely because they know what a good design is better than you do! A situation where a manager can do an employee's job better than they can is extremely rare, because the people who tend to be the best at doing something are the people who are actually doing it. Just let them do the job you hired them to do.

If you hire a programmer to write a graphics engine, you don't tell them what order to implement features in. You don't even tell them what features you want them to implement, because someone else on your artistic team is supposed to know which features are needed, and the programmer is expected to prioritize those features properly because that's what you freaking hired them for.

If I hire an artist/designer to make a game, I expect them to go and make the game. I give them certain limitations, like budget concerns or maximum development time, but I expect them to be able to work within these restrictions or I wouldn't have hired them in the first place. If they don't, I don't hire a manager to make sure they do, I fire them (or move them somewhere else) because they aren't doing their job. The only time you need a manager is when there is a conflict of interest in the workplace or someone is actually doing something wrong. If most modern software companies have managers managing people day in and day out, then apparently those companies think that an employee is doing something wrong every single day. If an employee is doing something wrong every single day, why did you hire them?

Even worse is that software companies do not consider themselves bastions of creativity. This is a problem, because software companies would benefit greatly from treating themselves that way, since programmers tend to be naturally self-driven. A team working on a project simply needs a designated team leader to keep things focused, not a full-time manager, and the team leader definitely shouldn't be spending his entire day attending meetings (*cough*Microsoft*cough*). It's the team's project. They decide where it should go, and if this is a problem then you shouldn't have hired them. Google is an example of a software company leaning in this direction.

Creative companies need to trust their employees. Managers exist because employers don't trust employees. Software companies don't realize they're actually creative companies, and their employees hate it, because the managers usually have no bloody idea what they're talking about. We see examples of this over and over and over in the news and laugh at them, but we never actually look at it as a problem that needs to be fixed, only as an inevitability.

We can fix this. In fact, it was fixed almost a decade ago. Valve has no managers. There's just Gabe Newell and then everyone else. It works. If you happen to like Portal, I think you'd agree that it works really damn well. There is a significant chunk of the workforce whose potential is simply not being utilized. The companies that do manage to utilize it are the ones that write the history books.

February 27, 2012

Chill Out

I do not usually involve myself with politics and bureaucracies, but they have a way of creeping into every facet of human experience in an almost parasitic manner. Even in my own profession of software engineering I have to deal with pointless language wars, followed by people being pissed off that I called them pointless. Well then, let me be perfectly clear here: Language wars are pointless. You know what else is pointless? Racism is pointless, sexism is pointless, rape culture is pointless, homophobia is pointless, religious dogma is pointless, and pretty much every single topic in modern politics is pointless, either by virtue of simply being ridiculous or by completely missing the point.

What bothers me about homophobia is twofold. One, obviously, is that it involves a religious person telling two people they can't fall in love with each other because whatever god they happen to worship says so, as if they actually have the authority to tell two adults what to do. If those two adults choose to go to hell, your own goddamn bible says you're supposed to be nice to them anyway. But what's really disturbing is the fact that anyone even gives a shit about any of this. A dude wants to marry a dude, and this supposedly destroys the sanctity of marriage. I think you people are a tad bit sheltered, because there are teenage girls being sold into slavery, in a sex-trafficking ring, in America. I think, just maybe, you need to chill out, because being sold into slavery seems a lot worse than two guys kissing.

Then we have people who don't believe in women's rights, or are still opposed to interracial marriage, as recounted by Kevin Roose in his book "The Unlikely Disciple". These people actually care about trying to restrict women from working or stopping interracial marriage. We have people campaigning against contraception and abortions at the same time without thinking about the difference between being right and being pragmatic. What about kids dying of thirst in Africa? What about the tyrannical despots still in control of various middle-eastern nations? What about China's one-child policy? What about malaria epidemics in South America?

What happened to all the hard problems?

Why would anyone even care about burning a book? And why would some people proceed to laugh at the Taliban for doing that and then get pissed off about gay marriage? Do they simply not understand what being a hypocrite means? Why is it that the word "sheeple", once used only by conspiracy theorist lunatics, now seems to describe Americans with disturbing accuracy? Has Facebook so changed our lives that we are now so incensed about a bunch of completely irrelevant topics that we forget about doing anything that actually has meaning? Have we ever actually cared about things that really matter?

In the late 1960s, America had a purpose (so did Russia). It is sad that we apparently require a war that threatens our livelihood to get us to think logically for 5 seconds, and even then this was in the middle of the civil rights movement - so even in our one moment of glory, we were still being assholes. Now America is so powerful it faces no existential threat, and has predictably devolved into a bunch of bickering toddlers (who are still assholes).

The future has not turned into 1984, nor does it involve flying cars. Instead, it is a strange amalgamation of realities, fused into a giant mess we call modern life. Facebook is still dependent on a 40-year-old internet backbone. 95% of all modern software is crap and inherently dependent on code written several decades ago. We haven't solved world hunger, or moved to electronic credits, or colonized the moon, or hell, even gotten back to the moon at all. We're too busy arguing about celebrities and gay marriage and religious books and a whole bunch of meaningless crap. We have failed to give our kids any sort of perspective, or priority. People do not understand the importance of compromise and instead criticize our president for trying to get things done. We seriously complain about our president doing exactly what politicians are supposed to do - compromise! We argue about teaching evolution in school, we argue about being forced to give girls Plan B!

CHILL OUT!

If a little girl was dying of thirst, would you ask her if she was a lesbian before giving her a drink?

One would hope that, in a distant future, while humanity will inevitably still bicker about silly things, we would at least know that ensuring reasonable quality of life for fellow sentient beings and trying to make our world (or solar system, at that point) a better place trumps silly arguments like why people are still using C++, or whose god has a bigger dick.

I'm not very optimistic.

February 24, 2012

Implicit UI Design

For a long time, I have been frustrated with the poor UI design that is rampant in the software industry. Meanwhile, many Linux enthusiasts like to point out how productive you can be with Emacs, VIM, and other keyboard-shortcut/terminal-oriented software. UI design has gotten so bad that I have to agree: in comparison to recent user interfaces, keyboard shortcuts are looking rather appealing. This, however, doesn't mean that one approach is inherently better than the other, simply that modern user interfaces suck.

In this blog, I'm going to outline several improvements to user interfaces and the generalized alternative design philosophy that is behind many of them. To start, let's look at this recent article about Visual Studio 11, which details Microsoft's latest strategy to ruin their products by sucking all the color out of them:

Visual Studio 2010

Visual Studio 11

See, color is kind of important. Notably, color is how I find icons in your toolbar. To me, color is a kind of filter. If I know what color(s) are most prominent in a given icon, I can mentally filter out everything else and only have to browse through a much smaller subset of your icons. Combined with having a vague notion of where a given icon is, this usually reduces the number of icons I have to mentally sift through to only one or two.
Mental color and spatial filtering

If you remove color, I am left with only spatial elimination, which can make things extremely annoying when there are a bunch of icons right next to each other. Microsoft claims to have done a study showing that this new style does not hurt the speed at which users can identify a given icon, but that fails to take into account the mental tax that goes into finding an icon without color.

"While we understand that opinions on this new style of iconography may vary, an icon recognition study conducted with 76 participants, 40 existing and 36 new VS users, showed no negative effect in icon recognition rates for either group due to the glyph style transition. In the case of the new VS users they were able to successfully identify the new VS 11 icons faster than the VS 2010 icons (i.e., given a command, users were faster at pointing to the icon representing that command)."

You may still be able to find the icon, especially after you've been forced to memorize its position just to find it again, and do it reasonably fast, but whether you like it or not, the absence of color will force your brain to process more possible icons before settling on the one you actually want. When I'm compiling something, I only have a vague notion of where the start button actually is. I don't need to know exactly where it is or even what it looks like; it's extremely easy to find since it's practically the only green button on the entire toolbar. The same goes for save and open. The uniform color makes those icons very easy to spot, and all the other icons are immediately discarded.

That said, there are many poorly designed icons in Visual Studio. Take this group of icons: Bad Icons!
I don't really know what any of those buttons do. One of them is some sort of toolbox and one is probably the properties window, but they have too many colors. It becomes, again, mentally taxing even to look at those icons, because there is too much irrelevant information being processed. In this scenario, the colors are detrimental to identifying the buttons, because there is no single dominant color like there is on the play button or the save button. Hence, the best solution is to design icons with principal colors, such that some or all of the icon is one color and the rest is greyscale. The brain edits out the greyscale and allows identification by color, followed by location, followed by actual glyph shape. To avoid overloading the user with a rainbow of colors, consider scaling the amount of color to how likely a given button is to be used. Desaturate or shrink the colored area of buttons that are much less important. Color is a tool that can be used correctly or incorrectly - that doesn't mean you should just throw it away the first time you screw up.

We can make better decisions by taking into account how the user is going to use the GUI. By developing an awareness of how a user interface is normally used, we can develop vastly superior interactions that accelerate, rather than impede, workflow. To return to Visual Studio, let's take a look at a feature in VS11: pinning a variable preview.
Variable Preview / Pinned Variable Preview
This is a terrible implementation for a number of reasons. First, since the pin button is all the way on the other end, it is a moving target and you'll never really be sure where it is until you need to pin something. Furthermore, you can drag the pinned variable around, and you'll want to after Visual Studio moves it to a seemingly random location that is almost always annoying (but only after the entire IDE locks up for 3 seconds because you haven't done it recently). When would a user be dragging a variable preview around? Only when it's pinned. A better implementation is to put a handle on the left side of any variable preview. If you click the handle (and optionally drag the variable around), it is implicitly converted to a pinned variable without changing anything else, and a close button appears to the left of the handle.
Better Variable Preview / Better Pinned Preview
This is much easier to use, because it eliminates a mouse click and prevents the variable from moving to some random location you must then locate afterwards to move it to your actual desired location. By shifting the close button to the left, it is no longer a moving target. To make this even better, you should make sure previews snap to each other so you can quickly build a list of them, and probably include a menu dropdown by the close button too.
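To make the idea concrete, here is a rough sketch of the interaction in C++-style pseudocode. None of these types or methods exist in Visual Studio; they are made-up names purely to illustrate the drag-implies-pin behavior.

    struct VariablePreview {
      bool pinned = false;
      bool close_button_visible = false;

      // Called when the user presses the handle on the left side of the preview.
      void OnHandleDragStart() {
        if (!pinned) {
          // Moving a preview only makes sense if the user wants to keep it around,
          // so the drag itself pins it - no separate pin button required.
          pinned = true;
          close_button_visible = true;  // close button appears next to the handle
        }
      }

      // Dragging just moves the (now pinned) preview; it never jumps anywhere on its own.
      void OnHandleDrag(int dx, int dy) { MoveBy(dx, dy); }

      void MoveBy(int dx, int dy) { /* reposition the preview window */ }
    };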

We have just used Implicit UI Design, where instead of forcing the user to explicitly specify what they want to have happen, we can use contextual clues to imply a given action. We knew that the user could not possibly move a variable preview without wanting to pin it, so we simply made the act of moving the preview pin it. Another example is docking windows. Both Adobe and Visual Studio are guilty of trying to make everything dockable everywhere without realizing that this is usually just extremely annoying, not helpful. I mean really, why would I want to dock the find window?
Bad Find Window!
Goddamn it, not again

Even if I were doing a lot of find and replace, it just isn't useful. You can usually cover up half your code while finding and replacing without too much hassle. The only thing this does is make it really hard to move the damn window anywhere, because if you aren't careful you'll accidentally dock it to the toolbar, and then you have to pull the damn thing out and hope nothing else blew up, and if you're really unlucky you'll have to reconstruct the whole freaking solution window.

That isn't helpful. The fact that the act of docking and undocking can be excruciatingly slow makes things even worse and is inexcusable. Only a UI that is bloated beyond my ability to comprehend could possibly have such a difficult time docking and undocking things, no doubt made worse by their fetishization of docking windows. Docking windows correctly requires that you account for the extremely common mistake of accidentally undocking or docking a window where you didn't want it. If a mistake is so common and so annoying, you should make it either much harder to do, or make it very easy to undo. In this case, you should remember where the window was docked last (or not docked), and make a space for it to be dropped into, instead of forcing the user to figure out which magic location on the screen actually docks the window to the right position (which sometimes involves differences of 2 or 3 pixels, which is incredibly absurd).
Spot Holding

Panels are related to docking (usually you can dock something into a collapsible panel), but even these aren't done very efficiently. If you somehow manage to get the window you want docked into a panel, it defaults to pinning the entire group of panels, regardless of whether they were pinned before or not. I just wanted to pin one!

You have to drag the thing out and redock it to pin just that one panel.

There is a better way to do this. If we click on a collapsible panel, we know the user wants to show it. However, the only reason they even need to click on the panel is because we don't show it immediately after the mouse hovers over the button. This time should be less than a tenth of a second, and it should immediately close if the mouse gets too far away. It should stay open if the mouse is close enough that the user might have temporarily left the window but may want to come back in. Hovering over another panel button immediately replaces the current panel (in this case, at least), and dragging the panel title bar or the panel button lets you dock or undock.

Now the user will never need to click the panel to make it show up, so we can make that operation do something else. Why not make clicking a panel pin it open? And don't do any of that "pin the entire bunch of panels" crap either, just pin that one panel and have it so the other panels can still pop up over it. Then, if you click the panel button again, it's unpinned. This is so much better than the clunky UI interfaces we have right now, and we did it by thinking about Implicit UI Design. By making the mouse click redundant, we could take that to imply that the user wants the panel to be pinned. Moving the mouse far away from the panel implies that the panel is no longer useful. To make sure a mistake is easy to correct, pinning a panel should be identical to simply having it be hovered over indefinitely, and should not change the surrounding UI in any way. Then a mistake can simply be undone by clicking the panel button again, which is a much larger target than a tiny little pin icon. Combine this with our improved docking above, so that a mistakenly undocked panel, when clicked and dragged again, has its old spot ready and waiting in case you want to undo your mistake.
Panel holding
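Here is the same kind of sketch for the panel behavior, again with made-up names rather than any real toolkit's API - the point is how little state you need once the mouse click becomes redundant:

    struct Panel {
      bool pinned = false;
      bool visible = false;

      // Hovering over the panel button shows the panel almost immediately (under 100ms).
      void OnButtonHover() { visible = true; }

      // Moving the mouse far away collapses the panel again, unless it is pinned.
      void OnMouseFarAway() {
        if (!pinned) visible = false;
      }

      // The click is now redundant for showing the panel, so reuse it for pinning.
      // Pinning looks exactly like an indefinite hover; a second click unpins.
      void OnButtonClick() {
        pinned = !pinned;
        if (pinned) visible = true;
      }
    };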
It's 2012. I think it's high time our user interfaces reflected that.

February 19, 2012

Linux Mint 12 KDE

Over the course of 3 hours spent trying to figure out why my Linux Mint 12 KDE installation would go to a permanent black screen on boot, I managed to overheat part of my computer (at least, that is the only thing that could explain this) to the point where it'd lock up on the POST, and had to give up until this morning, when I figured out that I could delete the xorg.conf file in Mint to force it to go to default settings. This finally got Mint 12 to show up, and I later confirmed that the nvidia drivers were broken. I then discovered that the nvidia drivers in the default apt-get sources are almost 120 versions behind the current release (295), but the installer for the current release kept failing despite my attempts to fix it, which involved adding source repositories to apt-get because apparently these are disabled by default - something that confused me greatly for a short while when trying to install the Linux source. This ultimately proved futile, since Nvidia can't be bothered to make anything remotely easy for Linux, so I'm stuck with a half-broken default driver that can't use my second monitor, with a mouse cursor that repeatedly blinks out of existence in a very aggravating manner.

Of course, the only reason I even have Linux installed is to compile things on Linux and make sure they work, so as long as my development IDE works, I should be fine! Naturally, it doesn't. KDevelop4 is the least insultingly bad coding IDE available for Linux that isn't VIM or Emacs, which both follow in the tradition of being so amazingly configurable that you'll spend half your development cycle trying to get them to work and then learning all the arcane commands they use in order to have some semblance of productivity. Like most bizarre, functionality-oriented Linux tools, after about 6 months of torturing yourself with them, you'll probably be significantly more productive than someone on Visual Studio. Sadly, the majority of my time is not spent typing code; it's spent sitting in front of a computer screen for hours trying to figure out how to get 50 lines of code to work properly. Saying that your text editor lets you be super productive assumes you are typing something all the time, and if you are doing that you are a codemonkey, not a programmer. Hence, I really don't give a fuck about how efficient a given text editor is so long as it has a couple of commands I find useful in it. What's more important is that it works. KDevelop4 looked like it might actually do this, but sadly it can't find any include files. It also can't compile anything that isn't C++, because it builds everything with CMake and refuses to properly compile a C file. It has a bunch of hilariously bad user interface design choices, and basically just sucks.

So now I'm back to the command line, editing my code in Kate, the default text editor for Mint 12 KDE, forced to compile my code with GCC from the terminal. This, of course, only works when I have a single source file. Now I need to learn how to write makefiles, which are notoriously confusing and ugly, require me to do all sorts of weird things, and leave me hoping that GCC's -M option actually generates the right dependency rules, because make can't figure out dependencies on its own and I no longer have an IDE to do it for me. Then I have to link everything, and then I have to debug my program from the terminal using command-line gdb, which is one of the most incredibly painful experiences I have ever had. Meanwhile, every single user experience in Linux is still terribly designed, optimized for everything except what I want to do, and difficult to configure, because these tools let you configure too much stuff and are eager to tell you, in 15000 lines of manual pages, about every single esoteric command no one will ever use, which makes it almost impossible to find anything until you hit upon the exact sequence of letters that will actually bring up the command you're looking for and not another one that looks almost like it but does something completely different. That, of course, is if you have a web browser. I don't know what the fuck you'd even do with man alone. I assume you'd have to just pipe the thing into grep just to find anything.
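(Going back to the makefile problem for a second: this is roughly the minimal sort of makefile I'm talking about, with made-up file names, using GCC's -MMD flag to generate the dependency rules for me. A sketch, not a recommendation; it is indented here for readability, and recipe lines must start with a tab.)

    CC     := gcc
    CFLAGS := -Wall -g -MMD
    SRCS   := main.c util.c
    OBJS   := $(SRCS:.c=.o)

    program: $(OBJS)
    	$(CC) $(OBJS) -o $@

    %.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@

    # -MMD writes a .d file next to each .o; pull those rules in if they exist.
    -include $(OBJS:.o=.d)

    clean:
    	rm -f $(OBJS) $(OBJS:.o=.d) program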

This is not elegant. It's a bit sad, but mostly it's just painful. I hate Linux. I don't hate the kernel. I don't hate all the functionality. It's just that the people who use Linux do not think about user experience. They think in terms of translating command line functions into GUIs, trying desperately to hang on to whatever pretty graphics are cool, and when something doesn't work they tell you to stop using proprietary drivers from Nvidia - except the non-proprietary drivers can't actually do 3D yet, but that's ok, no one needs that stuff. Never mind that Linux Mint 12 doesn't actually come with any diagnostic or repair tools whatsoever. Never mind that every single distro I've tried so far has been absolutely terrible in one way or another. The more I am forced to use Linux, the more I crave Windows and the way things tend to just work in Windows. Things don't work very well in Windows, but at this point that seems better than Linux's apparent preference for "either it works really well or it doesn't work at all".

We could try to point fingers, but that usually doesn't solve anything. It's partly nvidia's fault, it's partly the software vendors' fault, it's partly the fault of a 40-year-old window rendering engine that's so out of touch with reality it is truly painful, and it's partly the users, who are either too dumb to care about the broken things or too smart to use the broken things. It's a lot of people's fault. It's a lot of crap. I don't know how to fix it. I do know that it is crap, and it is broken, and it makes me want to punch the next person who says Linux is better than everything in the face, repeatedly, until he is a bloody mess on the ground begging for death to relieve his pain, because there are no words for expressing how much I hate this, and if you think I'm not being fair, good for you, I don't give a fuck. But then, I can never reason with people who are incapable of listening to alternative ideas, so it's usually useless to bring the topic up anyway. I suggest hundreds of tweaks to things that need to do [x] or do [x] and people are like NO THAT ISN'T NECESSARY GO AWAY YOU KNOW NOTHING. Fine. Fuck you. I'm done with this shit.

Oh wait, I still have to do my Linux homework.

EDIT: Upon further inspection, Linux Mint 12 is melting my graphics card. The graphics card fan is on all the time, and if I run Mint for too long - installed, LiveCD, or really anything - the fan stays on the whole time and, upon restart, the POST check locks up. However, after turning off the computer for 10 minutes and going into Windows, the temperature is at 46°C, which is far below dangerous levels, so either it has a very good heatsink or the card isn't actually melting, it's just being run improperly, which doesn't really make me feel any better. Either way, I am now in an even more serious situation, because I have homework to do but Linux Mint is literally breaking my computer. I'd try to fix it by switching graphics drivers, but at this point every single driver available is broken. ALL OF THEM. I don't even know what to do anymore.

February 5, 2012

"Programmer" is an Overgeneralization

"Beware of bugs in the above code; I have only proved it correct, not tried it." - Donald Knuth

Earlier today, I came across a post during a google-fu session that claimed no one should use the C++ standard library function make_heap, because almost nobody uses it correctly. I immediately started mentally ranting about how utterly ridiculous this claim is, because anyone who's gone to a basic algorithms class would know how to properly use make_heap. Then I started thinking about all the programmers who don't know what a heap is, and who furthermore probably don't even need to know.
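For reference, correct usage isn't exactly rocket science. The main catch is that push_heap expects the new element to already be at the back of the container, and pop_heap only moves the largest element to the back rather than erasing it. A minimal example (mine, not from the post in question):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
      std::vector<int> v{3, 1, 4, 1, 5, 9, 2, 6};

      std::make_heap(v.begin(), v.end());  // arrange v into a max-heap

      v.push_back(8);                      // new element goes at the back...
      std::push_heap(v.begin(), v.end());  // ...then gets sifted into place

      std::pop_heap(v.begin(), v.end());   // moves the max to v.back()
      int largest = v.back();              // it is NOT removed automatically
      v.pop_back();                        // you erase it yourself

      std::cout << largest << '\n';        // prints 9
    }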

Then I realized that both of these groups are still called programmers.

When I was a wee little lad, I was given a lot of very bad advice on proper programming techniques. Over the years, I have observed that most of the advice wasn't actually bad advice in and of itself, but rather it was being given without context. The current startup wave has had the interesting effect of causing a lot of hackers to realize that "performance doesn't matter" is a piece of advice riddled with caveats and subtle context, especially when dealing with complex architectures that can interact in unexpected ways. While this broken-telephone effect arising from the lack of context is a widespread problem on its own, in reality it is simply a symptom of an even deeper problem.

The word programmer covers a stupendously large spectrum of abilities and skill levels. On a vertical axis, a programmer could barely know how to use VBScript, or they could be writing compilers for Intel and developing scientific computation software for aviation companies. On a horizontal axis, they could be experts on databases, or wringing performance out of a GPU, or building concurrent processing libraries, or making physics engines, or doing image processing, or generating 3D models, or writing printer drivers, or using CoffeeScript, HTML5 and AJAX to build web apps, or using nginx and PHP to build the stack those web apps sit on, or maybe they write networking libraries or do artificial intelligence research. They are all programmers.

This is insane.

Our world is being eaten by software. In the future, programming will be a basic course alongside reading and math. You'll have four R's - Reading, 'Riting, 'Rithmetic, and Recursion. Saying that one is a programmer will become meaningless because 10% or more of the population will be a programmer on some level. Already the word "programmer" has so many possible meanings it's like calling yourself a "scientist" instead of a physicist. Yet, what other choices do we have? The only current attempt at fixing this gave a paltry 3 options that are just incapable of conveying the differences between me and someone who graduated from college with a PhD in artificial intelligence. They do multidimensional mathematical analysis and evaluation using functional languages I will never understand without years of research. I'm supposed to write incredibly fast, clever C++ and HLSL shader code while juggling complex transformation matrices to draw pretty pictures on the screen. These jobs are both extremely difficult for completely different reasons, and neither person can do the other person's job. What is good practice for one is an abomination for the other. We are both programmers. Even within our own field, we are simply graphics programmers or AI programmers or [x] programmers.

Do you know why we have pointless language wars, and meaningless arguments about what is good in practice? Do you know why nobody ever comes to a consensus on these views except in certain circles where "practice" means the same thing to everyone? Because we are overgeneralizing ourselves. We view ourselves as a bunch of programmers who happen to specialize in certain things, and we are making the mistake of thinking that our viewpoint applies outside of our area of expertise. We are industrial engineers trying to tell chemists how to run their experiments. We are architects trying to tell English majors how to design an essay because we both use lots of paper.

This attitude is deeply ingrained in the core of computer science. The entire point of computer science is that a bunch of basic data structures can do everything you will ever need to do. It is a fallacy to try to extend this to programming in general, because it simply is not true. We forget that these data structures only do everything we need in the magical, perfect land of mathematics, and we ignore all the different implementations that are built for different areas of programming, for completely different uses. Donald Knuth understood the difference between theory and implementation - we should strive to recognize the difference between theoretical and implementation-specific advice.

It is no longer enough to simply ask someone if they are a programmer. Saying a programmer writes programs is like saying a scientist does science. The difference is that botanists don't design nuclear reactors.