MediaElement.js WordPress plugin and bwp-minify (Better WordPress Minify) issue — what’s going on and how to fix it

While working on a largish WordPress-powered site recently (yeah, the advertising is abysmal, but I can’t really do anything about it), I stumbled upon an issue with the bwp-minify and mediaelement.js plugins. When both are active, video playback on certain platforms/browsers doesn’t work. Read on to find out what’s going on and how to apply a simple fix.


The setup

  • A WordPress-powered site with the bwp-minify and mediaelement.js plugins installed and activated
  • A [ video src="whatever.mp4" ] (video shortcode) inserted in a post/page which ends up falling back to the flashmediaelement.swf or silverlightmediaelement.xap component for whatever reason (no native <video> browser support, the video format requires it, it’s explicitly specified…) — can happen often, actually
  • bwp-minify (possibly other minifier plugins too) configured by default so that it ends up rewriting the mediaelement plugin’s <script src="..."> attribute (in order to correctly point the .js file at the minifier/gzip component)


The symptom

No video playback :/ The reason is that mediaelement.js ends up trying to load the .swf/.xap file through the bwp-minify plugin.


The cause

MediaElement.js’s default handling of the pluginPath option scans the <script> elements loaded on the page for known filenames (in mejs.Utility.getScriptPath()) and appends the required playback plugin component filename to the script path it finds.

So, due to bwp-minify doing what it does, we end up with a flash object/embed which fails to load the required flashmediaelement.swf (same thing happens with the silverlight counterpart).
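To make the failure mode concrete, here’s a simplified sketch of that discovery logic (function and variable names are my own assumptions, not mediaelement.js’s exact source): scan the src attributes of the page’s script elements for a known filename and return everything before it.

```javascript
// Simplified sketch of getScriptPath()-style discovery (names are assumptions,
// not mediaelement.js's exact source): look through script src attributes for
// a known filename and return the portion of the URL that precedes it.
function findScriptPath(scriptSrcs, knownFilenames) {
  for (const src of scriptSrcs) {
    for (const name of knownFilenames) {
      const idx = src.indexOf(name);
      if (idx > -1) {
        return src.substring(0, idx); // everything up to the filename
      }
    }
  }
  return '';
}

// With bwp-minify rewriting the src, the "path" becomes the minifier endpoint,
// and appending flashmediaelement.swf to it yields a URL that can't resolve:
console.log(findScriptPath(
  ['http://example.com/wp-content/plugins/bwp-minify/min/?f=mediaelement.min.js'],
  ['mediaelement.min.js']
));
// → http://example.com/wp-content/plugins/bwp-minify/min/?f=
```

The returned “path” is a query-string endpoint, not a folder, so the .swf/.xap request that gets built on top of it goes nowhere.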

Possible workaround

If you’re not the type of person to dig into the code and apply the simple fix described below, you can work around the issue by excluding the mediaelement plugin’s .js files in the bwp-minify options (and lose the benefits of automatic minification/gzipping of mediaelement’s JavaScript and CSS files).

Luckily, mediaelement.js has an option that can fix this properly — it’s just not being set in the current version (2.5.0, also the latest at the time of writing) of the WordPress plugin.

The fix

Modify mediaelement-js-wp.php plugin file to set the pluginPath option. This avoids running mediaelement.js’s getScriptPath() and explicitly sets the proper URL to the folder containing required flash and silverlight files.

Find this in 2.5.0 version of mediaelement-js-wp.php:


And replace with this:


Important: if you’re going to copy-paste just the extra pluginPath line, don’t forget to also add the comma before the ‘m’ key. If you have to ask why, perhaps you’re better off just using the workaround or waiting for a plugin update.

What we’re doing here is super-simple: the plugin already has a variable called $dir which contains the URL of the plugin files’ location (and already uses it for the same purpose elsewhere in the code) — we’re just reusing it to add an extra key to the generated JavaScript configuration object.
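Not the plugin’s verbatim source (the original snippets aren’t reproduced here) — just a sketch of the idea, with names following the post’s description and everything else an assumption:

```php
// Sketch only: reuse the plugin's existing $dir variable (the URL of the
// plugin folder, which already holds flashmediaelement.swf and
// silverlightmediaelement.xap) as an extra key in the mejs options:
'pluginPath' => $dir, // skips the getScriptPath() guessing entirely
```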

I’d fork and submit a pull request, but I don’t think the WordPress plugin repo exists on GitHub (can’t find it).

Hopefully the plugin author will include the fix in a future version, along with an updated mediaelement.js itself (the WordPress plugin is currently at 2.5.0, while the official mediaelement.js is at 2.6.5 — the Flash fullscreen feature, plus a few importantish bugs squashed, would really be nice). Until then, manual merge ftw!

The Facebook ads of the (imminent) future

The present

Internet advertising works. It generates profit (everyone knows the famous and often-quoted moment when Internet advertising spending surpassed TV in the UK back in 2009). A shitload of business models revolve around one form of advertising or another.

Facebook’s revenues come from (surprise!) advertising. They only serve stuff from Microsoft’s advertising inventory. The CTR is awful compared to most major websites. So, basically, advertising on Facebook generally sucks. It generates revenue through sheer volume, but from an advertiser’s perspective it sucks. From the user’s perspective too. They don’t care about ads on Facebook. Source.

The future

Simple. Start serving that same inventory everywhere, as in, all over the web. Maybe even remove standard banner advertising from Facebook itself altogether.
Yes, they can even make the advertisers bid for placements in pretty much the same way Google is auctioning their ad slots. Or not, it doesn’t really matter.

Wait, what?

Facebook’s technical infrastructure for dominating the advertising market is already built: one has to try really, really hard to find a page on the Web today that hasn’t got a Like, a Share or some other type of Facebook JavaScript widget. Through that widget they can place whatever they want. Not just the boring (con)textual ads — think standard banner formats, floaters, takeovers (interstitials), expanders, wallpapers or any other new form of promotion that’ll eventually be developed.

The user/visitor of an external site doesn’t even have to know that the ad came via Facebook. He doesn’t really care — it’s highly targeted, caters to his every need and desire, and was just what he was thinking of, or searched for, or browsed for, or [insert your advertising wet dream here] recently anyway.

And then there was data…

Facebook’s edge over every other ad platform today is: data. Tons of organic attention data and social graph data, with detailed demographics as a cherry on top. Think about that for a second. OK, now consider that even AdWords’ demographic targeting is limited to users from the United States. Now consider this: about 70% of Facebook users are outside the United States, and more than 150 million people engage with Facebook on external websites every month… You get the picture.

I don’t have to remind you that Facebook knows everything about their users, even when they’re not on Facebook directly, but just browsing a page that has the Like button on it. Combined with the fact that every page/website is automatically an ad publisher (without anyone doing any extra work on the publishing end), we have ourselves a recipe for advertising domination satisfying every advertiser’s wet dream.

What’s missing?

Basically nothing. Perhaps just a few tiny changes in Facebook Platform Policies. You’ve read those before using the platform, right? :)
They currently state: “We can change these Platform Policies at any time without prior notice as we deem necessary. Your continued use of Platform constitutes acceptance of those changes.”


Yes, other major players have their own widgets, but nowhere near the numbers of Facebook’s “installed userbase”, and nowhere near the amount of data about their users. Quick recap as best as I can recall right now:

  • Google Analytics — doable, but I’m not sure they have the demographic data. No social graph either. Not at the Facebook scale anyway. And I don’t think there are quite as many GA script tags out there as there are Like buttons. 2 million websites use it, according to this. The new Facebook Like debuted in April this year (at f8 conference), and according to this it is also in the 2 million range already.
  • YouTube (which is Google’s) has the potential due to its volume of users and data, but it’s trapped within Flash and can basically serve ads only inside the Flash container. They do that, partnering with big copyright owners. They’re “hovering near profitability” — which is a polite way of saying they’re still losing money with YouTube.
  • Foursquare is giving the idea an interesting spin, although it remains to be seen how many web pages will “install” the widget — Facebook can probably act right now.
  • Myspace could maybe use their audio player, but it would still constrain the ad placement to the profile pages. So not that interesting to advertisers (compared to pushing your ad automatically to the whole www)
  • Yahoo bet outside the browser with Konfabulator/Widgets back in 2005. Cool platform, but ultimately not very useful (from a revenue-generating online advertising perspective), the original authors left Yahoo, and the future is in the browser anyway. Right now, I can’t think of any massively deployed JavaScript widgets running in the browser that belong to Yahoo (in one way or another). There’s YUI, but that’s an entirely different game. Maps maybe, but again, not so widely spread.

CMYK you too. A rant. And a nostalgic one.

Felt like it today: CMYK you too.

Really not digging the blog form lately. Especially not for rants and such — they really deserve their own little slice of heaven, custom styling, custom typography, always different, always new. And kind of “back-to-the-future”.

With all the tweets, facebook updates, posterous, tumblrs and whatnot gaining traction and becoming increasingly popular, I find myself missing “ye old days”. You know, the times when it all wasn’t mainstream. Back then it felt good being a part of something no one (quite yet) fully understood, but you had that feeling it’s important and BIG for some reason, and every day was something interesting and new.

Or I could just be getting “too old for this shit”. Love that phrase!

The importance of peers at the workplace

tl;dr: What?

I cannot stress enough how important it is to have someone to share your accomplishments/suffering with at work. Often overlooked, this is one of the most important things to look for in a future employer’s offering. Make no mistake about it.

Case in point

I was having a really hard time today with some of Croportal’s partner feeds. Two unrelated issues came up, both having something to do with parsing Atom feeds using Zend_Feed:

  • A certain site published its feed in Atom format using non-absolute URIs within the <content> element. That’s not a problem per se, since the spec allows it (and defines how to deal with it). However, Zend_Feed has no knowledge of what’s going on (and that’s probably OK, since it’s a general-purpose lib). Meanwhile, publishers place whatever HTML content they want within their <content> element and expect the same behaviour from aggregators as they get from browsers when opening the feed URL directly (e.g. in Firefox). This means you have to do a shitload of “magic” on your end. And that shit ain’t trivial.
  • Another site published their feed using Atom as well, except they decided to use the (rarely used) <content type="xhtml"> feature of the Atom protocol. Which is all fine and dandy until you actually have to pull the content and display it on your end (being an aggregator). The spec is somewhat vague and yet pretty specific at the same time. Except it doesn’t cover what to do in cases such as this:
    <content type="xhtml">
       <div xmlns="">
          <p>Paragraph with an <img src="whatever.jpg" /><![CDATA[This is <strong>XHTML</strong> content.]]></p>
       </div>
    </content>

    Which kind of sucks, since you’re on your own now. I have an actual publisher with such a feed. And the feed validates. What does one do? Well, you start introducing crap code into your codebase. Shit such as this, to handle the case described above:

    if ($item->content && $item->content['type'] && $item->content['type'] == 'xhtml') {
        $item_simple = simplexml_import_dom($item->content->getDOM());
        $item_summary = $item_simple->asXML();
        // end-tag is fixed in form so it's easy to replace
        $item_summary = str_replace('</content>', '', $item_summary);
        // remove start-tag, possibly including attributes and white space
        $item_summary = preg_replace('/<content[^>]*>/i', '', $item_summary);
    }

    This sucks on so many levels it’s not even funny. Zend_Feed’s $item->content() method returns only the raw text, but my use case requires the surrounding elements as well (images etc.). So I hack my way around it all using SimpleXML, which lets me (somewhat) easily dump the structure into an acceptable form of HTML.

    Of course, later on you have to call strip_tags() on content such as this (to display it safely on your end), but — surprise, surprise — you run into another issue: if you have a <![CDATA[ string anywhere within a larger string that you call strip_tags() on, you’re gonna get an empty string back as a result. How’s that for fun, eh? This is where you start pulling your hair out and thinking male prostitution ain’t such a bad line of work after all.
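The CDATA gotcha above can be reproduced in a few lines; the input string here is my own illustration, not the actual publisher’s feed, and the unwrapping step is a sketch of one possible workaround, not necessarily what shipped:

```php
<?php
// strip_tags() treats "<![CDATA[" as the start of a tag, so calling it
// directly on a string containing a CDATA section can swallow everything
// from that marker on. One workaround: unwrap CDATA sections first.
$html = '<p>Hello <![CDATA[This is <strong>XHTML</strong> content.]]> there</p>';

// Unwrap <![CDATA[...]]> into its inner text, then strip tags safely.
$unwrapped = preg_replace('/<!\[CDATA\[(.*?)\]\]>/s', '$1', $html);
echo strip_tags($unwrapped);
// → Hello This is XHTML content. there
```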

Folks, I’m not making this shit up. This is real world PHP on a (relatively) large scale project.
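The relative-URL “magic” from the first issue boils down to resolving a URI against the feed’s base URL. A hedged sketch (not the library mentioned above; simplified to ignore ports, query strings in the base, and '..' segments):

```php
<?php
// Sketch of resolving a relative URI against a feed's base URL -- the kind
// of work non-absolute links inside <content> force an aggregator to do.
// Simplified: no port handling, no '..' segment collapsing.
function resolve_url($base, $rel) {
    if (parse_url($rel, PHP_URL_SCHEME) !== null) {
        return $rel; // already absolute, leave as-is
    }
    $p = parse_url($base);
    $root = $p['scheme'] . '://' . $p['host'];
    if ($rel !== '' && $rel[0] === '/') {
        return $root . $rel; // root-relative
    }
    // path-relative: drop the last segment of the base path
    $path = isset($p['path']) ? preg_replace('#/[^/]*$#', '/', $p['path']) : '/';
    return $root . $path . $rel;
}

echo resolve_url('http://example.com/feed/atom.xml', 'images/pic.jpg');
// → http://example.com/feed/images/pic.jpg
```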

End result

Several hours and hundreds of lines of code later — both issues are fixed. And everything works flawlessly. Except no one but me is aware of the fact.

No one in the entire company has a fucking clue about what transpired today. Or why shit like that matters. Or how much future money it saved. Or that I’ve written a URL parser library that can be reused in any scenario that deals with relative/absolute URL conversion, URL joining, URL parsing etc.

Worst of all, no one could hear (and understand) my cries about how horribly broken Zend_Feed, parse_url(), strip_tags(), DOM etc. are.

WordPress core update not working? Suhosin might be the reason.

Ran into some trouble trying to update to the latest version (2.8.6). The automatic core updater kept dying on me with nothing but an ‘Unpacking update’ message. Nothing in the error_log file, no core dumps, no warnings, just stuck. And it had worked flawlessly until now.

Anyways, it’s sorted now — and updated — but here’s the scoop in case someone else runs into a similar situation:

WordPress tries to raise PHP’s memory limit to ‘256M’ while unzipping the update. If you’re on a shared hosting setup, chances are it’s running PHP with the Suhosin patch. Here, Suhosin was (and still is) set up to prevent raising the memory limit via ini_set() above the size defined in php.ini (currently 32M).
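For reference, the relevant knob is Suhosin’s own memory-limit cap; the value below is an example, not your host’s actual configuration:

```ini
; php.ini / suhosin.ini -- example value, not your host's actual setting
; scripts may raise memory_limit via ini_set() only up to this cap;
; 0 (the default) means it can never be raised above its startup value
suhosin.memory_limit = 256M
```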


Possible solutions

  • Ask your hosting company to temporarily increase or remove the Suhosin limit, run the update, and let them revert the limit (over in 2 minutes)
  • Modify your ‘wp-admin/includes/file.php:484’ to set a lower limit (128M? 64M?) and see if the update process manages to extract all the files successfully (it probably will)
  • Update your WordPress manually

Pyrrhic victory: me vs customs office – 4:1

Latest Threadless package arrived, 9 t-shirts this time. I think this will be my last order from them. Ever. They seriously need to re-think the t-shirt quality. As it stands now, it’s nowhere near what it used to be, and that’s unacceptable. This last batch is thin cotton bullshit. But the designs are what they are, so I caved and got this last batch of reprints I just had to have.

Anyways, no extra charges from the customs this time. Package value stated as 108 USD, package opened as usual, no gift wrapping options, no nothing. I win! Kind of.

Perfect Pitch

Just a short blurb in hopes of helping Adactio get that perfect pitch. You should probably link to it as well, if you’re inclined to fight SEO-motivated DMCA takedowns. What a load of crap.

Twitter lists — what they’re really about

Twitter lists have hit the interwebz recently. Cool, I guess.

Everyone’s talking about whether they should be public or private, if it’s all just another pissing popularity contest, how the “a-list” is getting even more popular, if it means this or that… None of it matters.

What no one has picked up on yet (but they will, eventually) is this: Twitter just added people tagging. What? Yes, people tagging.

And it couldn’t be easier, and the users aren’t even aware they’re doing it, and practically every existing and future twitter user will do it at some point. Just imagine the power of having that kind of data. Every user tagged with their interests, location, whatever you need. It’s priceless.

Think about it. Twitter is (mostly) about people. To create a list, you need to name it. Adding someone to that created list is just a click away, and you’ve just made a “tagging” statement about that person. You’ve assigned a topic/name/whatever (a tag) to a person.

Now Twitter will know how (what/who/when/with etc.) others associate with you. And they’ll know what you associate with others. Those tags describe you and your group to the point of being practically exact science (after a while of gathering data). That means your actions, and the actions of those around you, become easily predictable — or at the very least “guidable”.

Once they have that kind of data, getting $1 or $10 of revenue per user (that’s Twitter’s “plan”) is peanuts. No one else has that kind of data. It’s no wonder every major player in the business decided to strike a “real-time data” deal with them.

What they’ve done with the API ecosystem is equally amazing. The Twitter experience has nothing to do with the website, and everything to do with applications built by third-party developers. That’s for a reason too — you can’t focus on the core of your business while you’re being pestered with phone X not supporting feature Y or browser Z not playing nice on platform W.

The core business is gathering and munging incredible amounts of structured data about people, not relaying 140-character-long messages. Don’t forget that.

Solving The Existing UUID Error For Running Multiple IEs

Several readers reported running into troubles trying to install multiple VMs with different IE versions (following my previous howto).

Installing one works just fine, but when you try to create another VM using another of Microsoft’s .vhd files you run into an error: VirtualBox complains about a hard disk with the same UUID already existing.

Having separate VMs for separate browser versions is a nice thing to have, but it’s not possible by default, because all of Microsoft’s .vhd files have the same UUID.

Here’s how we can work around that until Microsoft decides to change the UUIDs, or until Sun’s undocumented and unsupported command (VBoxManage internalcommands sethduuid <file>) starts working on .vhd images.

Convert your .vhd images

The process consists of converting our .vhd images (which are basically VPC images) into .vmdk images, which automagically gives them a new UUID.

First we install Qemu:

sudo apt-get install qemu

After that, change into the directory holding your .vhd image and run:

qemu-img convert -f vpc image.vhd -O vmdk image.vmdk
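If you’ve downloaded several of Microsoft’s images, the same conversion can be batched; a small sketch (filenames here are examples):

```shell
# Batch-convert every .vhd in the current directory to .vmdk.
# Assumes qemu-img is installed (see above).
for f in *.vhd; do
  [ -e "$f" ] || continue          # no .vhd files present, nothing to do
  out="${f%.vhd}.vmdk"             # e.g. IE7.vhd -> IE7.vmdk
  echo "converting $f -> $out"
  qemu-img convert -f vpc "$f" -O vmdk "$out"
done
```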

That’s it, now you can use the newly created .vmdk images as hard disks for your additional IE virtual machines. Have fun.


Before (23.02.2009 10:03; morning of the surgery):

After (23.02.2009 14:10; right after the surgery, about 10 minutes after I woke up from anesthesia):
Screwed. Clearly.

Getting another x-ray probably on Friday, to see if it healed ok and if I can start physiotherapy finally.