Monday, April 14, 2014

e2e testing with AngularDart.

So this weekend I spent a grueling 12 hours trying to get an e2e test harness going for a personal project of mine built in AngularDart.

I will spare you the obscenity-laden recap and share with you the minimal amount of code needed to get an e2e testing solution going using NodeJS.

Before you ask "why not Protractor?" let me explain: it doesn't work with AngularDart. I looked into the codebase, and it relies on AngularJS's internals -- I had always assumed that AngularJS just emitted DOM events as the integration point for Protractor, but it turns out there are private services in there, like $browser, that are accessed directly. Since I'm not using AngularJS, that immediately removes Protractor from the running.

If you don't know what e2e is, it stands for end-to-end testing. Basically, you can unit test the hell out of your codebase, but at some point you have to make sure everything works in harmony when a real user interacts with the app. It catches problems in your markup -- like not sending the right message when a user clicks an element -- and other integration issues.

I'm going to be sampling files straight from my repository, so the usual caveats apply: I am a foul-mouthed asshole and have no regrets about that, I might eat your baby, etc.

1.) First off, let's get some necessary packages installed:

jasmine-node: our test framework. This allows you to use Jasmine in Node, which is great, because Jasmine is great.

coffee-script: CoffeeScript makes for very fluent, very readable test code. Your mileage may vary, but I use CoffeeScript wherever I'd normally use JavaScript.

webdriverjs: this is our API for interacting with Selenium. Despite the name, it is not the official Selenium WebDriverJS package; it's a more fluent, Node-like API. It's got one notable wart: failures from Selenium don't bubble up, so you have to check the Selenium output to see what went wrong, and why.

selenium-standalone: this package downloads the Selenium server and provides a helper executable that starts the Selenium Standalone Server, which is required. It also comes with ChromeDriver.

gulp: our task runner. It's simple but sparsely documented; I will probably be migrating to Grunt in the near future. Dart has 'Hop', but that seems just as bare documentation-wise, and it's also more complicated to get going with.
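All of that can live in a package.json so one `npm install` pulls everything down. This is a sketch from memory -- treat the names as a checklist and pin whatever versions work for you:

```json
{
  "name": "my-angulardart-app-e2e",
  "private": true,
  "devDependencies": {
    "jasmine-node": "*",
    "coffee-script": "*",
    "webdriverjs": "*",
    "selenium-standalone": "*",
    "gulp": "*"
  }
}
```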

Running `npm install` (and optionally `npm install -g` so the executables packaged with those packages are available globally) gets us ready.

2.) Now let's set up our Gulpfile. Typing `jasmine-node spec/` gets pretty tiresome after a while.

Nothing too complicated, just some tasks to run tests. I prefer CoffeeScript, so Gulpfile.js just hands off to the CoffeeScript version of the Gulpfile.
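Here's the rough shape mine takes, translated to plain JavaScript. Binary names and flags are from memory (selenium-standalone's executable in particular has changed names over versions), so double-check them against what npm actually installed for you:

```javascript
// gulpfile.js -- a sketch, not my literal file (mine is CoffeeScript).
var gulp  = require('gulp');
var spawn = require('child_process').spawn;

// Run a command, piping its output to our terminal, and hand control
// back to gulp when it exits.
function run(cmd, args, done) {
  spawn(cmd, args, { stdio: 'inherit' }).on('close', function () {
    done();
  });
}

// `gulp` -- run every spec under spec/ (--coffee lets jasmine-node
// pick up CoffeeScript specs directly).
gulp.task('default', function (done) {
  run('jasmine-node', ['--coffee', 'spec/'], done);
});

// `gulp selenium` -- start the Selenium Standalone Server.
// 'start-selenium' is the helper executable name as I remember it.
gulp.task('selenium', function (done) {
  run('start-selenium', [], done);
});
```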

Just typing `gulp` in the terminal will start executing all of our specs. I keep my specs in the subdirectory 'spec/' instead of 'test/' because why not.

Now, since this ended up being a total nightmare to get going -- I went through three or four higher-level APIs before I discovered what exactly was going wrong -- I ended up writing some "sanity checks." Here they are:

Typing `gulp selenium` in one terminal, and then `gulp` in another, should result in 2 passing tests.

Great! Now let's set up a basic file to store our settings for an actual, but extremely simple, e2e test for our app. I'm not promising anything about the accuracy of the comments in this file.

The timeout is super large because it takes `pub serve` forever and a day to compile an AngularDart application, and I'm still not sure if that has any influence on how long webdriverjs / Selenium waits until it tries to interact with a page.

Now I wrote a small 'sanity check' test for the application itself, which basically ensured that the page loads properly:

This test just ensures that the page actually loads in the browser.

You'll need to run `pub serve` first. The first time I ran this test, it took about 60 seconds for dart2js to finish compiling on a MacBook Pro.

Here is the test that gave me the most trouble: it's where I exercised the Selenium drivers beyond a simple sanity check and encountered the dozens of problems that had me pulling my hair out.

Chrome wouldn't find any elements on the page, Safari worked fine (I discovered by accident) except that the webdriver won't interact with file inputs properly, Firefox crapped out with a bug in the shadow-dom polyfill, and PhantomJS was just... I don't know, there were too many errors with PhantomJS for me to even bother with.

In the end, to get this test passing, I chose Firefox as the browser and commented out the Shadow DOM polyfill ("shadow_dom.min.js" in your AngularDart index.html file, probably), which caused some rendering errors but otherwise allowed the functional parts of the test to pass... but only in Firefox. I also removed some expected user behaviors and jumped straight to populating the file input.

You'll notice I have some helper jQuery statements: normally in your stylish app, the file input is hidden. You trigger it when the user clicks on another element, like a button labeled "Upload File."

However, you can't do that in Selenium. Since it's running a real browser, clicking on "Upload File" blocks the process since the OS then presents its file picker dialog. So what you have to do is use Selenium's API (webdriverjs calls this 'chooseFile'), which then simulates the process of attaching a file via a file input.

Of course, since your stylish application has hidden the default ugly file input, there's nothing for Selenium to "click" on, so the `showInputs` helper makes the element visible for testing purposes.
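For the curious, that helper boils down to something like this. It's a string because it gets injected into the page under test (webdriverjs has an `execute` call for running script in the browser, though double-check the exact method name against your version):

```javascript
// The jQuery one-liner injected into the page under test. With the default
// file input hidden by the app's CSS, Selenium has nothing to target; this
// un-hides every file input so chooseFile has something to work with.
var showInputs = "$('input[type=file]').show();";

// Usage sketch -- assumes a webdriverjs client that's already been init()'d,
// and a fixture path of your own choosing:
//
//   client
//     .execute(showInputs)
//     .chooseFile('input[type=file]', __dirname + '/fixtures/photo.png');
```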

Is the `selectFile` script necessary? I don't know. It was there when I finally got everything working, and after 12 hours of hacking away trying to get a basic e2e test going it was 3AM and I wasn't about to try messing with it right then.

Anyway, that's what I learned about e2e testing AngularDart applications.

Sunday, March 16, 2014

And the winner is...

Recently, I evaluated some modern tools to build an SPA (single page application).

The forerunners were Dart + AngularDart, or Dart + Polymer.dart.

In the end, though, I actually went with AngularJS, which wasn't even on the original list.

The major knocks against Dart + {AngularDart,Polymer.dart} were simply that they both enforce the use of Shadow DOM. Polymer had recently taken out "applyAuthorStyles" as well as the "lightdom" attribute, and AngularDart's templates are based on Shadow DOM.

While that's fine for some hobbyist stuff I might do with Dart, I am definitely not interested in playing with Shadow DOM when I'm trying to quickly iterate on an application. I don't have the resources to constantly reinvent components -- Bootstrap, for instance -- at every step of the process.

I understand the uses of Shadow DOM, but they just don't apply to most of the work I do. I'm not building generic widgets for everyone to use across the web; I'm building them out specifically for a particular application. Maybe they're generic enough that they can get slurped out later -- fine. But at the time I'm writing them, I'm interested in getting them working as quickly as possible.

If I had a dedicated designer who would do all the work of crafting everything from scratch, I might have been more inclined to pick AngularDart... but then I would just be moving the burden to them instead. Either way, it's still an unnecessary use of resources, especially in 2014, when you can cobble 50% of your webapp together from off-the-shelf components.

I suppose I could have just used "regular" Dart and some additives, but I'd be pretty unproductive, and in the end I would have something that looks like the bastard child of Angular + whatever, with nothing to show for it other than a lot of wasted time[1].

My days of reinventing the wheel were over years ago.

All that said, AngularJS has turned out to be very nice! I originally started with plain old JavaScript, but recently transitioned to CoffeeScript; productivity has improved significantly and the code is much more readable.

I've never had a problem with debugging the generated JavaScript in DevTools or Firebug, which is a complaint I hear a lot about CoffeeScript. I suspect some really old version of CoffeeScript produced hard-to-read JavaScript, and that the meme still lives because, well, this is the Web and nothing really dies here.

[1]: A project I once worked on was based on Sinatra instead of Rails because the team lead said it was "lighter." I looked at the codebase and it was basically a bastardized version of Rails: they pulled in most of the active* gems, and had half-assed "url_for" constructs that looked similar to their Rails counterparts but functioned completely differently in certain ways. Developing against that codebase took forever. In some down time, I prototyped porting it to Rails proper, and it was about as fast as the Sinatra version...

Friday, February 07, 2014

Dart: I'm probably doing it wrong...

...but it feels pretty good.

This file exists purely so that I can write code like this:

      ..animate('foo1', 1000)
      ..animate('foo2', 1000);

Instead of:

    new AnimationBuilder(entity)
      ..animate('foo1', 1000)
      ..animate('foo2', 1000);

Or, without the 'DSL wrapper' entirely:

    var ani = new Animation();
    ani.animationSteps.add(new AnimationStep('foo1', 1000));
    ani.animationSteps.add(new AnimationStep('foo2', 1000));

If I could actually have a function called 'AnimationBuilder' that would call "new AnimationBuilder" under the hood, that would have been even better. The Nokogiri gem for Ruby does this: there's a class called Nokogiri and a function called Nokogiri. The function calls the class, so you can write code like "Nokogiri(xml_content)" and just Get It Done.
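Dart won't let you reuse a name like that, but for comparison's sake, here's the same trick sketched in JavaScript, where a constructor function can detect a missing `new` and supply it itself (all names here are hypothetical):

```javascript
// The classic JavaScript take on the Nokogiri trick: one name that works
// both as `new AnimationBuilder(e)` and as plain `AnimationBuilder(e)`.
function AnimationBuilder(entity) {
  // Self-correct when called without `new` -- this is the whole trick.
  if (!(this instanceof AnimationBuilder)) return new AnimationBuilder(entity);
  this.entity = entity;
  this.steps = [];
}

// Chainable, which is about as close as JavaScript gets to Dart's cascades.
AnimationBuilder.prototype.animate = function (name, ms) {
  this.steps.push({ name: name, ms: ms });
  return this;
};

var ani = AnimationBuilder('player').animate('foo1', 1000).animate('foo2', 1000);
```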

I am kind of a stickler for these aesthetic issues. I want my code to look good, because it's 2014, programming is still text-based, and I have to be the one staring at the result for 8 hours a day. It doesn't matter that there are literally no savings in terms of characters: the code reads more naturally with the 'DSL', so that's what I went with. That third block of code is simply not an option, but I included it anyway because I actually wrote it once before I recoiled, said "nope", and introduced AnimationBuilder.

I'm probably dragging some of my Rubyisms into Dart, but I figure I'm an odd duck out anyway (everyone seems to be coming to Dart from C# or Java), so whatever.

Gotta have fun, or why code at all?

Monday, January 27, 2014

Half a year with Dart.

So I've been fiddle-fucking around with Dart for about half a year now. I've used it for some internal stuff, and a small open source project.


Flexible problem solving: Aside from executing arbitrary code at top-level and serious metaprogramming (both of which I sorely miss, coming from Ruby), Dart is a fairly flexible language. It embraces type inference, closures, and top-level methods instead of one-liner abstract classes. You're not going to find any amazing stuff like RSpec, Rake, or the Rails router coming to Dart, though: the language is simply not that flexible. But if metaprogramming / DSLs / etc. aren't your thing, Dart kinda lets you barrel through.

Structured ecosystem: Dart has its own package manager, coding conventions, baked in "dartdoc" comments-as-documentation -- all the things you'd expect of a 2014 language. JS is still waffling: RequireJS or CommonJS or AMD? Bower for the client, npm for the server, and you sure as fuck aren't going to be using a single package from either source for both frontend and backend dev.

It's fast: this and the clean syntax are ultimately why I chose it for a project over JavaScript. If the mythical ES6 or Harmony had shown up that day, I would have used that instead: decent performance, more flexibility, and a clean syntax would have won out over Dart's great performance and syntax.

ES6 is Java's Project Lambda: it'll arrive several years late, and JavaScript devs are going to have an aneurysm when they see what the module syntax looks like.

Batteries included: they packaged pretty much everything you need to get running quickly.


The library / parts syntax: the way Dart handles including files in a library is very bizarre. The library declaration is straightforward: `library foo;` in your library file. Simple, right? You also have to specify that a file is part of a library; that's fine: `part of foo;`.

Off to the races, right? Wrong. You have to go back to the library file and add `part 'filename.dart';` as well. For every file in your project.
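Concretely, the three-way dance looks like this (hypothetical file names, Dart 1.x syntax):

```dart
// lib/foo.dart -- the library file
library foo;

part 'src/bar.dart';   // ...and one of these per file. Every file.
part 'src/baz.dart';

// lib/src/bar.dart
part of foo;

// lib/src/baz.dart
part of foo;
```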

This is obviously for the compiler's benefit, because any human being wanting to know which file belongs to which project would just look at the source tree on the file system, like in every other sane language on the planet.

This is flexibility without convenience. You have flexibility in that you can -- if you've downed a six-pack and are now in a drunken stupor -- mix several different libraries and their implementation files in one directory and make out OK.

But you don't have the convenience of declaring all files in a directory to be part of your library, even though that's how 99% of everyone everywhere lays out their source tree.

It's good to be flexible, but it's bad to be inconvenient.

Mirrors API: Mirrors are analogous to C#'s reflection API, and Java probably has something similar. There's not a lot of affordance in the API: using `dart:mirrors` means you are going to be writing a lot of code unless you stick to its most basic, simple level.

No generators: Dart doesn't have generators, so `List(...).where(...).map(...)` is two iterations over a list; if your "where" returns every item in the list, you've done O(N) work twice: once for the where, then once again for the map.

For a language trying to be fast, this is a weird oversight since List is heavily used (and expected to be heavily used) in public APIs everywhere.

UPDATE: Dart actually gets around this by using lazy iterables: where() returns a lazy iterable, and map() returns a lazy iterable based on the previous iterable. Thanks to +Lasse in the comments for pointing this out.
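To make that update concrete, here's the same laziness sketched with JavaScript generators (Dart's lazy Iterables behave analogously; all names here are mine):

```javascript
// Eager filter().map() on an Array walks the data twice. Generator-backed
// pipelines, like Dart's lazy where()/map(), walk it once -- and do no
// work at all until something actually consumes them.
function* where(iter, pred) {
  for (const x of iter) if (pred(x)) yield x;
}

function* map(iter, fn) {
  for (const x of iter) yield fn(x);
}

let predCalls = 0;
const evens  = where([1, 2, 3, 4], function (x) { predCalls++; return x % 2 === 0; });
const scaled = map(evens, function (x) { return x * 10; });

// Nothing has run yet -- predCalls is still 0 at this point.
const result = [...scaled]; // one pass over the source: [20, 40]
```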

"The Editor Will Do It": Dart relies heavily on using an editor of some kind. The Dart Editor is great for starting, but its weaknesses show when you move beyond 1-2 files for a project: there's no built-in keyboard-based navigation, no source control integration, no syntax highlighting support / integration for LESS or SASS or HAML or Slim or any of the popular templating languages -- despite being built on the Eclipse platform.

A plugin exists for Eclipse, but it feels like a red-headed stepchild. I finally figured out how to import the files of an existing Dart project into my workspace by creating a "new" Dart project; when it asked me for the "project name" I just put in the directory the existing files were located in, and voilà, fooled the IDE, I guess.

I have a licensed copy of RubyMine, and thus can use its Dart plugin, but it needs serious love. It's obvious whoever is developing that plugin has never used it for anything other than ensuring the plugin seems to work.

Maybe it's just me, but having to stop coding and mouse around clicking the '+'s and '-'s in a file picker to find the file I want consumes an inordinate amount of time and makes it hard to keep track of what I was doing. CTRL+SHIFT+R and done, son.

In the end, Dart is still a respectable development choice. With ES6 being god knows where, doing god knows what, and god only knows how long until you can actually "use" it, Dart is a strong contender as long as you don't need that JS magic.

But goddamn do I want to use a Light Table supported language.

Wednesday, January 15, 2014

oDesk and eLance

I followed news of the oDesk and eLance merger with trepidation. So far, there are lots of "for now"s when it comes to whether oDesk will change.

Personally, as long as they remain two separate business entities, I'm not concerned. The whole "client quality" kerfuffle everyone wants to keep starting is meaningless: there is nothing inherent to eLance OR oDesk that keeps crappy clients out. For all the flamey trollbaiting, I just checked my eLance account (I have an account on all the major freelancing sites), and I see your typical lunatics pitching insane projects for a pittance, same as oDesk:

    This post is for software engineer that already completed successfully an exchange for bitcoin and other cryptocurrencies.

Anyone thinking that one freelancing platform is an asylum and the other isn't is drinking some serious Kool-Aid. The entire "client quality" issue stems from the self-serve nature of freelancing platforms in general: without another human to say, "Hey, this is crazy and way out of your budget," what you sometimes get are clients who are projecting their hopes and dreams instead of posting realistic job proposals. That will never change.

All things being equal, ultimately platform preference comes down to usability and features.

Personally, I use oDesk for the simple fee structure: you get paid, oDesk gets paid. No worrying about membership levels and the restrictions that come with them. That keeps oDesk's focus laser-tight on connecting freelancers and clients, and for me that's a perfect combination.

Sunday, December 29, 2013

Bitcoins and MMORPGs.

I've been mulling over this question awhile now:

How do you make a free-to-play MMORPG that is not driven by a cash shop but is still sustainable as a business? In other words, how do you make a "clean" game in which no gameplay decisions are driven by the concept of microtransactions?

When gameplay decisions have to factor in the business side, the game inevitably suffers: see a million generic Korean grindfests where the only way to escape the monotony and get to the action is to pay for items. Very few games seem to escape this fate. The fewer players there are, the more expensive and disruptive items have to be in order to be profitable. The more players there are, the less disruptive items have to be -- the free-to-play community at large won't want to deal with players who "boosted" their way to the top but have no idea how to play the game.

But what if you designed a system where every player paid to play your game, but invisibly, in the background, with no effort or cash required?


It came to me a few hours ago as I was mulling over some game concepts. A minimalistic survival game, with no real UI to speak of, no macros or quest hubs. The game is its own user interface, similar to how Dead Space handled inventory management: every interaction you had with your inventory was part of the game. Nothing stopped, the world kept running as you fumbled through your equipment while the lights in the hallway ticked off one by one and the howl of monsters got closer and closer.

In a game like that, you can't have cash shops. Can you imagine walking up to an NPC vendor in the game, talking, then clicking through some immersion-breaking dialogs and an immersion-breaking payment gateway to buy $5 of credits that help you cut down trees faster -- all because the tree-felling portion of the game was made extra grindy to encourage exactly that purchase?

To me, it seemed like the only real option for sustaining a game like this was a subscription, which may as well make the game stillborn. The time of subscription-based games is over. WoW has sucked the air out of the room in that regard.

But I was browsing /r/bitcoin earlier, not even sure how I got there, but just passing the time when it hit me:

What if your users mined bitcoins for you while they played the game?

Their computers provide labor in exchange for being able to access and play your MMO*. In that sense, the concept of a subscription-based MMO* can survive through cryptocurrencies. You receive a currency (Bitcoin, Litecoin, even Dogecoin) while the users themselves put no tangible effort into giving that currency to you.

It seems too perfect. For end-users, the entire thing is seamless: they turn on the game, they play, they turn it off. For you, you get paid while your players are playing. If you have a love of games, this is Heaven: the more fun you make your game, the more users you get, and the more money you make. You never have to compromise your vision of fun in order to make money, because a fun game that people like to play is metaphorically a "gold" (Bitcoin!) mine.

It's a crazy thought, but it does make me wonder...

Wednesday, April 03, 2013

JRuby on Heroku? Bah, humbug.

My experiments with JRuby on Heroku have mostly been a bust.

I migrated a fairly simple Ruby 2.0.0 application over to JRuby, which didn't involve much other than switching from RedCarpet 2 to Kramdown, and spending some time fucking around with my `database.yml` so the `activerecord-jdbc-postgresql` gem would actually connect.

But goddamn, the problems.

I let the JRuby 1.7.3 version of the application run for a little less than 24 hours, and in that time I got:

* about 6 hours of downtime spread sporadically throughout the period
* four or five "memory quota exceeded" errors at peak time, which didn't seem to do anything
* several random dyno crashes I wasn't seeing under Ruby 2.0.0, which were followed by...
* ...many, many "boot timeout" errors, which I never saw under 2.0.0 unless I had a bad configuration / initializer that was causing the app to crash

And those are just the critical errors. Performance across the board was crap: the New Relic comparisons of last week versus the switch to JRuby looked pretty bad. Lots of request timeouts, and request throughput was apparently crap.

That last thing -- throughput -- bothers me almost as much as the random crits above. The setup is running Puma, a threadsafe web server for Rack, so I was expecting to see much better request throughput than what I actually got. For those of you not in the know, JRuby has "real" threading, whereas Ruby MRI has a GIL (and, in 1.8, green threads), which means threading performance is not very good.

Switching to JRuby should have given me a real boost in throughput, but, unexpectedly, performance was piss-poor and there was a significant increase of request timeouts. The site's not getting _that_ much traffic, and the assets are stored on Cloudfront, so...

I don't have any idea what to make of this. Apparently the setup is working well for others (I don't see any complaints on Google, at least). But with such a simple setup on Heroku going sideways, I'm not sure what to make of it. Is it a bug in JRuby 1.7.3? A Heroku-specific problem?

Meh. I've redeployed on MRI, and the numbers are turning back to normal.

That concludes that experiment.

UPDATE 4/4/2013:

For curiosity's sake, I ended up chasing the source of the downtime to a bug in Kramdown: it can't deal with some erratically formatted content. The code doesn't throw an exception, it just... hangs, I guess. New Relic reports a ridiculously long runtime of 1.5 million milliseconds, but I'm not sure if it actually takes that long to render, or if something (probably Heroku) killed the process and that's just how long it took. RedCarpet has no problems with the markdown in question, which explains why I wasn't seeing any problems before the switch to JRuby.