Friday, October 17, 2014

What OS powers my developer machine?

Clients and friends know that for a long, long time I've used Linux as my primary development environment. There were a few years when I did .NET work, which required a Windows machine, but once I transitioned away from Microsoft's ecosystem it made no sense to work in an environment where the cost of your developer tools and services starts nearing the 4 digit range. (This was before BizSpark and DreamSpark, or whatever Microsoft is calling its programs nowadays.)

I used to run Ubuntu back when it was on Gnome 2. When Unity came out, it was better than Gnome 2 by a long shot (anything was better than Gnome 2, honestly), but it was glitchy and annoying. Around that time I started dabbling with Gnome 3, and after a few weeks started running that hacked-up version of Gnome 3's "Gnome Shell" someone published to a PPA.

It was a blast. While it had its fair share of bugs, it was mostly stable and the design kept things out of my way, unlike Unity's schizophrenic global menu and obtrusive dock. Of course, as Ubuntu progressed, Canonical decided to do things their own way, and support for Gnome 3 became poorer and poorer -- first it was default Gnome 3 apps that were missing, then Gnome Online Accounts was MIA, and then Gnome 3 was stuck at some ancient version full of paper-cut style annoyances.

I think it was maybe a year ago or more when I decided to start clean with Fedora.

And let me tell you, a clean Gnome 3 install is night and day compared to what is (probably) still shipping with Ubuntu. I had no idea how much I was missing until I started using Fedora. It's been a good year -- but it's been a really annoying few months recently.

I can't remember where it started, but I had a need to download a program that offered a Linux version. So, fine, I'll just -- oh, it only offers a .deb. For Ubuntu.

And that's how it started. A few days later (maybe a week), I was out fishing for another pretty polished application that offered Linux support... except not really: it only offered me a .deb.

Now, I hate both RPMs and debs. I'll take a binary .tar.gz any day of the week: I don't like giving random packages on the Internet sudo privileges. I've also seen what it takes to make an RPM, and I'm not surprised that a developer's first choice is going to be a .deb file if they do any kind of package at all.

Anyway, my point is that once I started leaving the "safety" of Fedora's repositories I discovered that in the "real world" Linux support is actually Ubuntu support and damn everyone else. You can probably download the sources and compile everything yourself -- I did that with Atom for a long time -- but it's still a pain in the ass. Atom, by the way, recently offered support for Linux -- sorry, Ubuntu.

I don't want to run Ubuntu. I don't like Unity's interface. I don't want to deal with whatever frankenstein Gnome 3 they have going on. I am picky, and my days of fiddling with distributions are well behind me.

Gnome 3(.12), by the way, still needs a lot of love. Over the year on Fedora, I've had the following random problems:

* Computer seems to have frozen, but the monitors are off so I can't actually tell what happened. Can't wake the monitors up -- is the computer not outputting a video signal? Who knows.
* The login widget on the lock-screen just... disappears after I click it. Nothing but a slate-gray oasis awaits.
* Sometimes it freezes up. Sometimes. I'm not sure why. Or how.

I don't know how to even begin to reproduce these things, or what logs I should look at, or if it even matters to anyone but me.

Tonight, for the first time since I've had it (2+ years), my MacBook Pro froze. I held down the power button and restarted it. It told me that the computer had been restarted because there was a problem. It asked me if I wanted to start up the programs I was running before the crash. I said OK. Everything was fine.

It works. Homebrew exists. Applications that are multi-platform run on it without an asterisk. The performance is stable.

I've been using it for my full-time dev work for about a month now. It's an experiment. So far I'm enjoying it.

It's not really one thing that's driving me from Linux distros, but a multitude of things. Openshot crashes, a lot. Pitivi... I'm not even sure the people who develop Pitivi use it. Why is Audacity showing me all these audio inputs when none of them have anything plugged in? Why is all this cross-platform software flowing in from Windows and OS X not really cross-platform? Why is pretty much every open source driver blacklisted in Chrome's WebGL implementation?

There are things I find annoying about OS X -- its shitty file manager is suspect #1 -- but everything else just works, and I'm OK with that. Hopefully in 3-5 years using RPM / deb packages for applications will fall out of style, and a focus on usability will be present in the next generation of applications.

I only got the MacBook Pro because a client had an environment with an L2TP / IPsec VPN that could only be logged into successfully from Windows / OS X. The bug was documented on some unattended Bugzilla installation somewhere years ago. Of course, the earnings from the contract were vastly larger than the infuriating sum it cost me to buy the MacBook, so it was a no-brainer, but the entire time I kept thinking "I could have bought 2 really good laptops for this price."

I don't feel that way any longer.

But no quad-core Mac Minis? Fuck off with that bullshit, Apple.

Monday, April 14, 2014

e2e testing with AngularDart.

So this weekend I spent a grueling 12 hours trying to get an e2e test harness going for a personal project of mine I built in AngularDart.

I will spare you the obscenity-laden recap and share with you the minimal amount of code needed to get an e2e testing solution going using NodeJS.

Before you ask "why not Protractor?" let me explain: it doesn't work with AngularDart. I looked into the codebase and it relies on AngularJS's internals -- I had always assumed that AngularJS just emitted DOM events as the integration point for Protractor, but it turns out there are some private services in there, like $browser, that are accessed directly. Since I'm not using AngularJS, that immediately removes Protractor from the running.

If you don't know what e2e is, it stands for end-to-end testing. Basically, you can unit test the hell out of your codebase, but at some point you have to make sure that, driven by actual user interaction, everything works in harmony. It catches problems in your markup, like not sending the right message when a user clicks an element, and other integration issues.

I'm going to be sampling files straight from my repository, so the usual caveats apply: I am a foul-mouthed asshole and have no regrets about that, I might eat your baby, etc.

1.) First off, let's get some necessary packages installed:


jasmine-node: our test framework. This allows you to use Jasmine in Node, which is great, because Jasmine is great.

coffee-script: CoffeeScript makes for very fluent, very readable test code. Your mileage may vary, but I use CoffeeScript wherever I'd normally use JavaScript.

webdriverjs: this is our API for interacting with Selenium. Despite the name, it is not the official Selenium WebDriverJS package; it is a more fluent, node-like API. It's got some minor issues: failures from Selenium don't bubble up, so you have to check the Selenium output to see what went wrong, and why.

selenium-standalone: this package downloads the Selenium Standalone Server (which is required) and provides a helper executable that starts it. It also comes with ChromeDriver.

gulp: task runner. It's simple but undocumented; I will probably be migrating to Grunt in the near future. Dart has 'Hop' but that seems just as bare documentation-wise, and it's also more complicated to get going with.

Running `npm install` (and optionally `npm install -g` so the executables packaged with those packages are available globally) gets us ready.
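For reference, the devDependencies end up looking roughly like this (the version numbers here are illustrative, not exact pins):

```json
{
  "devDependencies": {
    "jasmine-node": "~1.14.0",
    "coffee-script": "~1.7.0",
    "webdriverjs": "~1.7.0",
    "selenium-standalone": "~2.40.0",
    "gulp": "~3.8.0"
  }
}
```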

2.) Now let's set up our Gulpfile. Typing 'jasmine-node spec/' gets pretty tiresome after a while.
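A minimal Gulpfile in that spirit looks something like this (a sketch only: the task names and the shell-out helper are illustrative, and it assumes gulp ~3.x plus the packages above):

```javascript
// gulpfile.js -- build config sketch. Shells out to the CLI tools
// rather than using gulp plugins, to keep things simple.
var gulp = require('gulp');
var childProcess = require('child_process');

// Run a shell command, piping its output through so failures are visible.
function run(cmd, cb) {
  var proc = childProcess.exec(cmd, function (err) { cb(err); });
  proc.stdout.pipe(process.stdout);
  proc.stderr.pipe(process.stderr);
}

// `gulp` alone runs the specs.
gulp.task('default', ['test']);

// Run every spec under spec/ through jasmine-node's CoffeeScript support.
gulp.task('test', function (cb) {
  run('jasmine-node --coffee spec/', cb);
});

// Start the Selenium Standalone Server in its own terminal: `gulp selenium`.
gulp.task('selenium', function (cb) {
  run('start-selenium', cb);
});
```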




Nothing too complicated, just some tasks to run tests. I prefer CoffeeScript, so Gulpfile.js just executes the CoffeeScript version of a Gulpfile.

Just typing `gulp` in the terminal will start executing all of our specs. I keep my specs in the subdirectory 'spec/' instead of 'test/' because why not.

Now, since this ended up being a total nightmare to get going -- I went through three or four more high-level APIs before I discovered what exactly was going wrong -- I ended up writing some "sanity checks." Here they are:
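The sanity checks boil down to two specs: one proving jasmine-node itself runs, and one proving Selenium is reachable. A sketch of the idea (JavaScript standing in for the CoffeeScript, with illustrative names and URLs -- this needs a running Selenium server):

```javascript
// spec/sanity_spec.js -- sketch, not the original file.
var webdriverjs = require('webdriverjs');

describe('sanity', function () {

  // Check #1: jasmine-node is wired up at all.
  it('runs specs', function () {
    expect(true).toBe(true);
  });

  // Check #2: we can reach the Selenium server and drive a browser.
  it('talks to Selenium', function (done) {
    var client = webdriverjs.remote({
      desiredCapabilities: { browserName: 'firefox' }
    });
    client
      .init()
      .url('http://localhost:8080/')
      .end(done);
  });
});
```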




Typing 'gulp selenium' in one terminal, and then 'gulp' in another should result in 2 passing tests.

Great! Now let's set up a basic config.coffee file to store our settings for an actual, but extremely simple, e2e test for our app. I'm not promising anything about the accuracy of the comments in this file.
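The shape of that settings file, rendered here as plain JavaScript rather than CoffeeScript (every key name and value is a guess at a plausible layout, not the real file):

```javascript
// config.js -- settings sketch for the e2e specs.
module.exports = {
  // Where `pub serve` hosts the AngularDart app.
  baseUrl: 'http://localhost:8080/',

  // Selenium / webdriverjs connection settings.
  desiredCapabilities: { browserName: 'firefox' },

  // Very generous: dart2js compilation via `pub serve` can take a
  // minute or more on first load.
  timeout: 10 * 60 * 1000
};
```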



The timeout is super large because it takes `pub serve` forever and a day to compile an AngularDart application, and I'm still not sure if that has any influence on how long webdriverjs / Selenium waits until it tries to interact with a page.

Now I wrote a small 'sanity check' test for the application itself, which basically ensured that the page loads properly:



This test just ensures that the page actually loads in the browser.

You'll need to run `pub serve` first. The first time I ran this test it took about 60 seconds for dart2js to finish compiling on a MacBook Pro.



Here is the test that gave me the most trouble: it's where I exercised the Selenium drivers beyond a simple sanity check and encountered the dozens of problems that had me pulling my hair out.

Chrome wouldn't find any elements on the page, Safari worked fine (I discovered by accident) except that the webdriver won't interact with file inputs properly, Firefox crapped out with a bug in the shadow-dom polyfill, and PhantomJS was just... I don't know, there were too many errors with PhantomJS for me to even bother with.

In the end, to get this test passing, I chose Firefox as the browser and commented out the shadow DOM polyfill ("shadow_dom.min.js" in your AngularDart index.html file, probably), which caused some rendering errors but otherwise allowed the functional parts of the test to pass... but only in Firefox. I also removed some expected user behaviors and jumped straight to populating the file input.

You'll notice I have some helper jQuery statements: normally in your stylish app, the file input is hidden. You trigger it when the user clicks on another element, like a button labeled "Upload File."

However, you can't do that in Selenium. Since it's running a real browser, clicking on "Upload File" blocks the process since the OS then presents its file picker dialog. So what you have to do is use Selenium's API (webdriverjs calls this 'chooseFile'), which then simulates the process of attaching a file via a file input.

Of course, since your stylish application has hidden the default ugly file input, there's nothing for Selenium to "click" on, so the `showInputs` makes the element visible for testing purposes.

Is the `selectFile` script necessary? I don't know. It was there when I finally got everything working, and after 12 hours of hacking away trying to get a basic e2e test going it was 3AM and I wasn't about to try messing with it right then.
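To give the flavor, the upload interaction boils down to something like this (a sketch, not the actual spec: the selectors, fixture path, and the `showInputs` snippet are stand-ins, and it assumes a webdriverjs-style `chooseFile` API plus a running Selenium server):

```javascript
// spec/upload_spec.js -- sketch of the upload test.
var webdriverjs = require('webdriverjs');
var config = require('./config');

// jQuery snippet injected into the page: un-hide the styled-away
// file input so Selenium has something to target.
var showInputs = "jQuery('input[type=file]').show();";

describe('uploading a file', function () {
  var client;

  beforeEach(function () {
    client = webdriverjs.remote({
      desiredCapabilities: config.desiredCapabilities
    });
    client.init();
  });

  afterEach(function (done) {
    client.end(done);
  });

  it('attaches a file without opening the OS file picker', function (done) {
    client
      .url(config.baseUrl)
      .execute(showInputs)
      // chooseFile sidesteps the native dialog a real click would open.
      .chooseFile('input[type=file]', '/tmp/fixture.png')
      .call(done);
  });
});
```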

Anyway, that's what I learned about e2e testing AngularDart applications.

Sunday, March 16, 2014

And the winner is...

Recently, I evaluated some modern tools to build an SPA (single page application).

The front-runners were Dart + AngularDart, or Dart + Polymer.dart.

In the end, though, I actually went with AngularJS, which wasn't even on the original list.

The major knock against Dart + {AngularDart,Polymer.dart} was simply that they both enforced the use of ShadowDOM. Polymer had recently taken out "applyAuthorStyles" as well as the "lightdom" attribute, and AngularDart's templates are based on ShadowDOM.

While that's fine for some hobbyist stuff I might do with Dart, I am definitely not interested in playing with ShadowDOM when I'm trying to quickly iterate an application. I don't have the resources to constantly reinvent components  -- Bootstrap, for instance -- at every step of the process.

I understand the uses of ShadowDOM, but they just don't apply to most of the work I do. I'm not building generic widgets for everyone to use across the web, I'm building them out specifically for a particular application. Maybe they're generic enough they can get slurped out -- fine. But at the time of me writing them, I'm interested in getting them working as quickly as possible.

If I had a dedicated designer who would do all the work of crafting everything from scratch, I may have been more inclined to pick AngularDart... but then I would just be moving the burden to them instead. Either way it's still an unnecessary use of resources, especially in 2014, when you can cobble 50% of your webapp together from off-the-shelf components.

I suppose I could have just used "regular" Dart and some additives, but I'd be pretty unproductive, and in the end I would have something that looks like a bastard child between Angular + whatever, with nothing to show for it other than a lot of wasted time[1].

My days of reinventing the wheel were over years ago.

All that said, AngularJS has turned out to be very nice! I originally started with plain old JavaScript, but recently transitioned to CoffeeScript; productivity has improved significantly and the code is much more readable.

I've never had a problem with debugging the generated JavaScript in DevTools or Firebug, which is a complaint I hear a lot about CoffeeScript. I suspect a really old version of CoffeeScript produced some hard-to-read JavaScript, and that the meme still lives because, well, this is the Web and nothing really dies here.

[1]: A project I once worked on was based off Sinatra instead of Rails because the team lead said it was "lighter." I looked at the codebase and it was basically a bastardized version of Rails: they pulled in most of the active* gems, and had half-assed "url_for" constructs that looked similar to their Rails counterparts but functioned completely differently in certain ways. Developing against that codebase took forever. In some downtime, I prototyped porting it to Rails proper, and it was about as fast as the Sinatra version...

Friday, February 07, 2014

Dart: I'm probably doing it wrong...

...but it feels pretty good.

This file exists purely so that I can write code like this:

    AnimationBuilder.on(entity)
      ..animate('foo1', 1000)
      ..animate('foo2', 1000);

Instead of:

    new AnimationBuilder(entity)
      ..animate('foo1', 1000)
      ..animate('foo2', 1000);

Or, without the 'DSL wrapper' entirely:

    var ani = new Animation();
    ani.animationSteps.add(new AnimationStep('foo1', 1000));
    ani.animationSteps.add(new AnimationStep('foo2', 1000));


If I could actually have a function called 'AnimationBuilder' that would call "new AnimationBuilder" under the hood, that would have been even better. The Nokogiri gem for Ruby does this: there's a class called Nokogiri and a function called Nokogiri. The function calls the class, so you can write code like "Nokogiri(xml_content).foo.bar" and just Get It Done.
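For what it's worth, JavaScript can express exactly this wish, since a constructor function can detect being called without `new` and construct itself. A sketch (all names here are illustrative, not a real library):

```javascript
// The Nokogiri trick in JavaScript: one name that works both as a
// class and as a plain function call.
function AnimationBuilder(entity) {
  if (!(this instanceof AnimationBuilder)) {
    // Called without `new`: construct ourselves, hiding `new` from callers.
    return new AnimationBuilder(entity);
  }
  this.entity = entity;
  this.steps = [];
}

// Return `this` so calls chain, standing in for Dart's cascades.
AnimationBuilder.prototype.animate = function (name, duration) {
  this.steps.push({ name: name, duration: duration });
  return this;
};

// Reads like the Dart DSL: no `new` in sight.
var ani = AnimationBuilder('entity')
  .animate('foo1', 1000)
  .animate('foo2', 1000);

console.log(ani.steps.length); // 2
```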

I am kind of a stickler for these aesthetic issues. I want my code to look good, because it's 2014 and programming is still text-based and I have to be the one staring at the result for 8 hours a day. It doesn't matter that there is literally no savings in terms of characters: the code reads more naturally with the 'DSL', so that's what I went with. That third block of code is simply not an option, but I included it anyway because I actually wrote it once, before I recoiled, said "nope," and introduced AnimationBuilder.

I'm probably dragging some of my Rubyisms into Dart, but I figure I'm an odd duck out anyway (everyone seems to be coming to Dart from C# or Java), so whatever.

Gotta have fun, or why code at all?

Monday, January 27, 2014

Half a year with Dart.

So I've been fiddle-fucking around with Dart for about half a year now. I've used it for some internal stuff, and a small open source project.

Pros:

Flexible problem solving: Aside from executing arbitrary code at the top level and serious metaprogramming (both of which I sorely miss, coming from Ruby), Dart is a very flexible language. It embraces type inference, closures, and top-level methods instead of one-liner abstract classes. You're not going to find any amazing stuff like RSpec, Rake, or the Rails router coming to Dart, though: the language is simply not that flexible. But if metaprogramming / DSLs / etc. aren't your thing, Dart kinda lets you barrel through.

Structured ecosystem: Dart has its own package manager, coding conventions, baked in "dartdoc" comments-as-documentation -- all the things you'd expect of a 2014 language. JS is still waffling: RequireJS or CommonJS or AMD? Bower for the client, npm for the server, and you sure as fuck aren't going to be using a single package from either source for both frontend and backend dev.

It's fast: this and the clean syntax is ultimately why I chose it for a project over JavaScript. If the mythical ES6 or Harmony had shown up that day, I would have used that instead: decent performance, more flexibility, and a clean syntax would have won out over Dart's great performance and syntax.

ES6 is JavaScript's Project Lambda: it'll arrive several years late, and JavaScript devs are going to have an aneurysm when they see what the module syntax looks like.

Batteries included: they packaged pretty much everything you need to get running quickly.

Cons:

The library / parts syntax: the way Dart handles including files in a library is very bizarre. The library declaration is straightforward: `library foo;` in your library file. Simple, right? You also have to specify that a file is part of a library, too; that's fine: `part of foo;`.

Off to the races, right? Wrong. Because you have to go back to the library file again, and then add `part 'filename.dart';` as well. For every file in your project.



This is obviously for the compiler's benefit, because any human being wanting to know what file belongs to what project would just look at the source tree on the file system, like every other sane language on the planet.

This is flexibility without convenience.  You have flexibility in that you can, if having downed a six pack and are now in a drunken stupor, mix several different libraries and their implementation files in one directory and make out OK.

But you don't have the convenience of declaring all files in a directory to be part of your library, even though that's how 99% of everyone everywhere lays out their source tree.

It's good to be flexible, but it's bad to be inconvenient.

Mirrors API: Mirrors are analogous to C#'s reflection API, and Java probably has something similar. There's not a lot of affordance in the API: using `dart:mirrors` means you are going to be writing a lot of code unless you are using it at its most basic, simple level.

No generators: Dart doesn't have generators, so `list.where(...).map(...)` is two iterations over a list; if your "where" returns every item in the list, you've done O(N) twice: once for the where, then once again for the map.

For a language trying to be fast, this is a weird oversight since List is heavily used (and expected to be heavily used) in public APIs everywhere.

UPDATE: Dart actually gets around this by using lazy iterables: where() returns a lazy iterable, and map() returns a lazy iterable based on the previous iterable. Thanks to +Lasse in the comments for pointing this out.
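The same idea in JavaScript terms, using generators to show what lazy chaining buys (a sketch of the concept, not Dart's actual implementation):

```javascript
// Lazy where/map: calling a generator function does no work; items
// flow through both stages one at a time when the result is consumed,
// so the source list is walked exactly once.
function* where(iterable, pred) {
  for (const item of iterable) {
    if (pred(item)) yield item;
  }
}

function* map(iterable, fn) {
  for (const item of iterable) {
    yield fn(item);
  }
}

let touched = 0;
const nums = [1, 2, 3, 4];

// Nothing runs yet: both stages are lazy.
const pipeline = map(
  where(nums, function (n) { touched++; return n % 2 === 0; }),
  function (n) { return n * 10; }
);

console.log(touched); // 0 -- no work until the pipeline is consumed
const result = [...pipeline];
console.log(result);  // [20, 40]
console.log(touched); // 4 -- the source list was walked exactly once
```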

"The Editor Will Do It": Dart relies heavily on using an editor of some kind. The Dart Editor is great for starting, but its weaknesses show when you move beyond 1-2 files for a project: there's no built-in keyboard-based navigation, no source control integration, no syntax highlighting support / integration for LESS or SASS or HAML or Slim or any of the popular templating languages -- despite being built on the Eclipse platform.

A plugin exists for Eclipse, but it feels like a red-headed stepchild. I finally figured out how to import the files of an existing Dart project into my workspace by creating a "new" Dart project; when it asked me for the "project name" I just put the directory the existing files were located in, and voilà, fooled the IDE I guess.

I have a licensed copy of RubyMine, and thus can use the Dart plugin, but it needs serious love. It's obvious whoever is developing that plugin has never used it for anything other than ensuring the plugin seems to work.

Maybe it's just me, but having to stop coding and mouse around clicking the '+'s and '-'s on a file picker to find the file I want consumes an inordinate amount of time and makes it hard to keep track of what I was doing. CTRL+SHIFT+R and done, son.

In the end, Dart is still a respectable development choice. With ES6 being god knows where, doing god knows what, with god only knows how long until you can actually "use" it, Dart is a strong contender as long as you don't need that JS magic.

But goddamn do I want to use a Light Table supported language.

Wednesday, January 15, 2014

oDesk and eLance

I followed news of the oDesk and eLance merger with trepidation. So far, lots of "for nows" when it comes to whether oDesk will change.

Personally, as long as they remain two separate business entities, I'm not concerned. The whole "client quality" kerfuffle everyone wants to keep starting is meaningless: there is nothing inherent to eLance OR oDesk that keeps crappy clients out. For all the flamey trollbaiting, I just checked my eLance account (I have an account on all the major freelancing sites), and I see your typical lunatics pitching insane projects for a pittance, same as oDesk:

    "This post is for software engineer that already completed successfully an exchange for bitcoin and other cryptocurrencies."

Anyone thinking that one freelancing platform is an asylum and the other isn't is drinking some serious kool-aid. The entire "client quality" issue stems from the self-serve nature of freelancing platforms in general: without another human to say, "Hey, this is crazy and way out of your budget," what you sometimes get are clients who are projecting their hopes and dreams instead of realistic job proposals. That will never change.

All things being equal, ultimately platform preference comes down to usability and features.

Personally, I use oDesk for the simple fee structure: you get paid, oDesk gets paid. No worry about membership levels and the restrictions based on membership level. That makes oDesk's focus laser-tight on connecting freelancers and clients, and for me that's a perfect combination.

Sunday, December 29, 2013

Bitcoins and MMORPGs.

I've been mulling over this question awhile now:

How do you make a free-to-play MMORPG that is not driven by a cash shop but is still sustainable as a business? In other words, how do you make a "clean" game in which no gameplay decisions are driven by the concept of microtransactions?

When gameplay decisions have to factor in the business side, the game inevitably suffers: see a million generic Korean grindfests where the only way to escape the monotony and get to the action is to pay for items. Very few games seem to escape this fate. The fewer players there are, the more expensive and disruptive items have to be in order to be profitable. The more players there are, the less items have to be disruptive -- the free to play community at large won't want to deal with players who "boosted" their way to the top but have no idea how to play the game.

But what if you designed a system where every player paid to play your game, but invisibly, in the background, with no effort or cash required?

Bitcoin!

It came to me a few hours ago as I was mulling over some game concepts. A minimalistic survival game, with no real UI to speak of, no macros or quest hubs. The game is its own user interface, similar to how Dead Space handled inventory management: every interaction you had with your inventory was part of the game. Nothing stopped, the world kept running as you fumbled through your equipment while the lights in the hallway ticked off one by one and the howl of monsters got closer and closer.

In a game like that, you can't have cash shops. Can you imagine walking up to an NPC vendor in the game, talking, then clicking through some immersion breaking dialogs to click through an immersion breaking gateway in order to buy some credits for $5 that will help you cut down trees faster, all because the tree-felling portion of the game was made more grindy to encourage players to buy something to cut down trees faster?

To me, it seemed like the only real option for sustaining a game like this was a subscription, which may as well make the game stillborn. The time of subscription-based games is over. WoW has sucked the air out of the room in that regard.

But I was browsing /r/bitcoin earlier, not even sure how I got there, but just passing the time when it hit me:

What if your users mined bitcoins for you while they played the game?

Their computers provide labor in exchange for being able to access and play your MMO*. In that sense, the concept of a subscription-based MMO* can survive through cryptocurrencies. You receive a currency (Bitcoin, Litecoin, even Dogecoin) while the users themselves put no tangible effort into giving that currency to you.

It seems too perfect. For end-users, the entire thing is seamless: they turn on the game, they play, they turn it off. For you, you get paid when your players are playing. If you have a love of games, this is Heaven: the more fun you make your game, the more users you get, the more money you get. You never have to compromise your vision of fun in order to make money, because a fun game that people like to play is metaphorically a "gold" (Bitcoin!) mine.

It's a crazy thought, but it does make me wonder...