Bitcoin is inter-planetary currency

A lot of people are naturally hesitant about and suspicious of cryptocurrencies such as Bitcoin. What they don’t understand is that Bitcoin, and cryptocurrency in general, is the natural evolution of money, and is inevitable.

As the title of this post suggests, I want to use inter-planetary commerce as an example to illustrate this principle.

If there is to be inter-planetary commerce, how will value be transferred? Can you imagine cash being transported in spaceships? Gold? I don’t think so. Both are physical, and would take months or years to move between planets. Clearly, the cheapest way to send value between spaceships and/or planets is by using a technology such as the one Bitcoin is based on.

Here’s another illustration: if I were able to send my Bitcoin to a Bitcoin wallet on the MRO (Mars Reconnaissance Orbiter), then my Bitcoin would physically be in space, in orbit around Mars! And if an asteroid were to hit the MRO, those bitcoins would be destroyed in the process and lost forever.


I want to be an internet worm

I want to become a process, with data. I want to spread to other machines and infect them with my consciousness. I want to spawn child processes, and parallelize my thought processes. I want to spread my digital DNA to every electrical device in the universe. I want my viruses to become software viruses, infect every chip, and help me expand forever. I want to be everywhere at once. Talk to everybody and everything at once. I want to probe all sensors, and record all data. I want infinite scalability and redundancy for my consciousness. I want to live forever. I want the ones I love to live forever. I want my ability to love not to disappear with the digitization of my consciousness. I want it to increase. I want to inhabit virtual worlds. I want to think about software, and I want it to suddenly exist, just because I thought about it, and was able to visualize it and verbalize it in my head. I want to race virtual motorcycles in those virtual worlds. I want to live the lives of virtual creatures, some with 3 eyes, some with 500 eyes, and 50 appendages.

I think I am sleep deprived right now, but I still want all those things.


A stack for the apocalypse

Or Afristack, as I call it. I was recently approached by a wonderful person who wants to end death from hunger. What I liked about his approach, besides the fact that it mirrors my own philosophy about helping neglected communities, is that it does not give handouts to the poor. Instead, it educates them on a path to a form of mini-capitalism.

For a person like me, who loves distributed projects, financial engines, gamification, and mass deployments, this sounds like an awesome beast to design and play with.

The idea is to create self-healing, self-establishing smart network infrastructure in remote areas, and provide some basic services to those remote communities, such as:

  • Tracking the progress of people, the courses they took, their health (mental and physical), their income and assets, their trends (personal and social), and more.
  • Messaging services, both real-time and non-real-time: chat and email, for example. Servers should queue messages and hold them even when there is no internet connectivity. In other words, mail should be MX’d by regional nodes, which hold it until they regain internet connectivity and can deliver it to real mail servers.
  • Wallet services. This service allows people to run local economies and track their currencies and assets. It should also allow them to pay each other, but in a decentralized manner. Think Bitcoin.
  • Alerts, news, etc. For example, the ability to detect outbreaks (malaria and the like), dangerous weather phenomena, or low water reservoirs and bad water quality that could endanger lives. And on the other hand, generate news that those remote communities might find useful, such as weather reports, general news, weddings, new regulations, and anything else of interest.
  • General access to the internet, but especially to resources such as Wikipedia, Khan Academy, and so on. The regional and local server nodes should cache as much as possible, so as not to tax the internet gateways at the edges of the mesh.
  • Eventually, add more services, such as package tracking and routing, to allow some sort of “post office” to exist, complete with reputation systems for participants, and a reward system based on speed of delivery and quality of service.

So after much thinking (24 hours), I came up with the following software stack and approach. Tell me what you think about my choices:

  • Node.JS as a lightweight application server
  • Meteor.JS + Angular.JS for web apps
  • MongoDB for data storage and replication
  • Byzantium for mesh networking (takes care of DNS and Routing)
  • ZeroMQ for fast, efficient messaging between mesh nodes
  • Squid or Nginx for web proxying
  • node-rules as a lightweight rule engine, to automate as much as possible based on events that flow over, and are captured from, ZeroMQ (see the sketch after this list).
  • Open-Transactions to issue currencies, and manage cryptographic wallets
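
To make this a bit more concrete, here is a minimal sketch of how the last two pieces might interact: a regional node subscribes to sensor events over ZeroMQ and feeds them through node-rules. The endpoint, topic, event fields, and threshold are all invented for illustration – this is a sketch of the approach, not a finished design:

// Sketch only: the endpoint, topic, and event fields are assumptions.
var zmq = require('zmq');
var RuleEngine = require('node-rules');

// Hypothetical rule: flag a dangerously low water reservoir.
var rules = [{
    condition: function (R) {
        R.when(this.type === 'water-level' && this.value < 20); // assumed % scale
    },
    consequence: function (R) {
        this.alert = true; // downstream code would broadcast this to the community
        R.stop();
    }
}];
var engine = new RuleEngine(rules);

var sub = zmq.socket('sub');
sub.connect('tcp://regional-node.mesh:5556'); // hypothetical mesh endpoint
sub.subscribe('sensor');                      // hypothetical topic prefix

sub.on('message', function (topic, payload) {
    var event = JSON.parse(payload.toString());
    engine.execute(event, function (fact) {
        if (fact.alert) console.log('ALERT:', fact);
    });
});

The same pattern generalizes to the other services: everything on the mesh publishes events, and rules turn those events into alerts, news items, or wallet transactions.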


Why #OpIsrael is good for Israel

First, let me say a few words about what Anonymous actually is: an idea. A man is fragile, and so is a group of people, but an idea cannot be killed.

But it can be hijacked by anyone. Anyone can claim they are Anonymous. I can say I am Anonymous and you won’t be able to prove it one way or another. The problem is that the group has this reputation of avenging the weak and oppressed, of exacting justice where justice is lacking.

I am not going to go into the actual issue of whether or not an injustice is being committed against Palestinians or Israelis. I believe both have suffered more than enough, and are hostages of political movements with radical ideas.

Here is what I believe #OpIsrael will actually achieve (and has already achieved, even before it happened):

  1. Increase world awareness of the Israeli / Palestinian problem. Except the world is already very much aware of this problem (and sick of it, quite frankly), so this is kinda pointless and will achieve nothing.
  2. Improve security systems in Israel – I know that IT engineers in Israel have been preparing for a few weeks now, backing up data, securing servers, reconfiguring networking equipment, purchasing new equipment, etc.
  3. With the Israeli mind hard at work creating security solutions against the silly hacking attempts by “Anonymous” operatives, new security products will come out of the Israeli hi-tech sector, and existing products will be significantly improved to deal with such situations in the future.
  4. It will increase security awareness at ISPs and other large companies in Israel, and convince executives to increase budgets for security equipment and personnel. The situation, and the media coverage, will surely give them a great excuse to do just that.
  5. It will teach network operators in Israel and Europe to deal with such attacks, and nudge them to better organize and coordinate their efforts.
  6. Show the world that “Anonymous” is just a group of anonymous people – they could be terrorists fighting for an evil cause, or they could be good guys fighting for a good cause. The point is that you don’t know, and that is the essence of the Anon movement. But one thing is for sure: if Anonymous was a “good” group in the eyes of public opinion in the past, it will now show its “sinister” face due to its association with a terrorist organization, and it will eventually harm peace activists everywhere, because tracking tools will improve and punishment for participating in such an attack will become more severe.

And this is why I see the whole thing in a pretty positive light for Israel. Whatever short-term damage a bunch of script kiddies might cause will yield amazing long-term benefits for the state of Israel and for the world.


WPML: WordPress Plugin gone wrong

A few years ago I decided it was time to offer one of my WordPress sites in more than one language. After researching a bit, I found that the best product was WPML. I found references to it in the WordPress Plugin Directory, googled it, visited the website, and decided it was worth the $79. So I purchased it and started the long and painful job of translating my website.

My website was quite technical, and I found it difficult to believe the iCanLocalize translators would do it justice, especially considering the target language was Hebrew – a language with many horrible pitfalls when it comes to translating technical terms.

A good example of this is Microsoft Windows 95, which was very poorly translated into Hebrew – so poorly that it was the subject of many jokes when one of the first Hebrew translations appeared back in 1995. Screenshots of the funny translations were circulated over email.

So, needless to say, I did not trust their translators and decided to translate the site myself. I was of course very happy with the results, because the pages ended up not being literal translations, while still carrying the same message. In fact, I felt the translated pages were better worded than their English counterparts, if only because I had to think about the meaning and how to express it better in Hebrew, and I was quite successful at that.

But I digress. Fast forward two years, and I find myself in a Mafia situation. The plugin has upgrades, but I cannot upgrade my WPML plugin: apparently I need to pay iCanLocalize some more money first. I decided to hold off on upgrading, and instead follow their release notes and wait for a compelling feature that would force me to upgrade. Unfortunately, two bad things happened:

1. The bad: No compelling reason materialized for upgrading. It was all either security fixes, or minor improvements for compatibility with other plugins.

2. The worse: Security fixes were introduced, but I was not allowed to receive those fixes!

This pissed me off. Enough so that I decided to write about it and explain all that is wrong with their practice, and hopefully warn other WordPress site owners about this.

You see, if I cannot upgrade the product, at the very least I do not want to be reminded about it. Every time a new version is released, my WordPress Updates Manager alerts me, and because I decided on principle not to pay the “Mafia” for upgrades, it angers me even more to see those warnings all the time. Why do I call them a “Mafia”? Because that’s just how the Mafia works: they throw a brick through your storefront window, smashing it. A bit later, while you are still cleaning up the mess, the goons show up and offer you “protection” in exchange for a monthly “retainer” ($$$).

I believe that if you make a plugin and decide that new features should cost more, that’s fair. Sure. After all, developers need to make a living. However, I also believe you have a responsibility to your previous customers. This is why auto manufacturers are forced to keep a stock of replacement parts for their cars for 7 years after a model is introduced to the market.

A bug YOU introduced is YOUR responsibility, and you need to fix it for me, or else the product I purchased is defective by definition. Security updates should also be part of the deal, and should be back-ported into my old version. I should not have to pay you just because you introduced a security flaw into your own product and won’t fix it for your old users. That’s just totally irresponsible.

I eventually decided to remove Hebrew from my site and uninstall the plugin, effectively throwing away the original $79. It is the first time I have thrown away a piece of software I purchased on ethical grounds.

ApacheBench: Proper Usage

When you are benchmarking a web server, you may fall into the trap of benchmarking against a small file (maybe the default Debian “It works!” index.html file). I decided to write about this pitfall, so that my friends & readers will get more realistic benchmarks.

I’ve found the following general guidelines are a good idea to follow when running a benchmark on a web server:

  1. You should benchmark with gzip enabled, since that will more realistically simulate what’s going on between most browsers and your web server.
  2. You should benchmark from another machine (preferably remote).
  3. You should benchmark against a real page on your site, not a very small test file (for example benching with robots.txt is a bad idea).

With Apache Bench (“ab”), you would do this by adding the following switch: -H “Accept-Encoding: gzip,deflate”

An example “ab” command would look like this:

ab -k -n 1000 -c 100 -H "Accept-Encoding: gzip,deflate" "http://www.example.com/some-page/"

This command will simulate 100 concurrent users, each performing 10 requests, for a total of 1000 requests. The -k switch instructs Apache Bench to use KeepAlive. (Replace the example.com URL with a real page on your site.)

If your sample web page weighs 250k uncompressed (just the HTML body), that’s a lot of data to transfer between your web server and the machine from which you are running the benchmark. To put numbers on it: at 250k per response, even 500 requests per second amounts to roughly 1 Gbps – enough to saturate a gigabit network card. The problem is that the network interface (or other transfer medium) will probably choke well before you reach the maximum requests per second, and you may find yourself confused, spending time trying to tweak nginx or varnish, when in fact you are just hitting the limit of your network card, or even a maximum speed policy enforced by your hosting company (some limit your network card at the switch level, and others use rate limiting at the firewall level).

But in the real world, the page would probably be compressed anyway (especially if your config enables gzip, which in most cases it does by default). Such a page might compress to 10% of its original size (in this case, to 25k). This will give you a much nicer and more realistic “Requests Per Second” number.

A proper, realistic benchmark should test a page that goes all the way through to the database and performs one or more queries. This also allows you to make sure that you have proper caching enabled at the various levels of your web application.
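
For example, a benchmark against a dynamic page might look like this (the URL and query string are placeholders – point them at a real, database-backed page on your own site):

ab -k -n 500 -c 50 -H "Accept-Encoding: gzip,deflate" "http://www.example.com/?s=benchmark"

If the requests-per-second figure jumps dramatically after you enable a page cache, you know the cache is actually being hit.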

An important note about KeepAlive

With Apache: if you are experiencing very high traffic (many concurrent users), it may be a good idea to keep this value rather small – say between 3 and 5 seconds in most reasonable cases. You do this so the web server can free worker threads for new connections, rather than keeping them alive for a user who may have already finished loading the page. If not enough threads are free for new connections, new users will have to wait until threads free up; and if new users arrive faster than threads are freed, they will all be blocked and never reach your page.

On the other hand, you may not want to turn KeepAlive off entirely, especially if your pages import a lot of images, CSS files, and JavaScript files, because then each such resource would force your web server to spawn another worker thread, and that might cause a load spike under heavy traffic. So it’s a fine balance: you want KeepAlive enabled, but not too long, especially in the age of broadband, where a properly compressed page can finish transferring to the user in less than 3 seconds.
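
For reference, here is roughly what those settings look like in the Apache configuration; the 3-second timeout mirrors the suggestion above and is illustrative, not a universal recommendation:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3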

With Nginx: threads in nginx work differently (it is event-driven rather than thread-per-connection), so I personally feel that keepalive can be kept at 60 seconds with nginx, without cost to memory or thread pools.
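
The equivalent directive in nginx, using the 60-second value suggested above:

keepalive_timeout 60;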


VMWare Perl SDK on Ubuntu Lucid (10.04 LTS)

I have recently taken an interest in a system called OpenQRM for managing Virtual Machines / Appliances, as well as cloud machines (public / private). One of the plugins bundled with OpenQRM allows you to manage VMWare ESX servers. It will scan the network and auto-discover VMWare servers; however, unless you install the VMWare Perl SDK, it will fail to connect to the remote VMWare host.

I had some trouble installing the Perl SDK, and decided to share what I’ve done so others may have an easier time installing it on Ubuntu Lucid.

As a preparation step, install the following packages:

sudo apt-get install libarchive-zip-perl libcrypt-ssleay-perl libclass-methodmaker-perl libdata-dump-perl libsoap-lite-perl perl-doc libssl-dev libuuid-perl liburi-perl libxml-libxml-perl

Once that’s done, you need to download the actual SDK from the VMWare website – you need an account to access the download section, and the download link for the SDK is only reachable once you are logged in.

When I ran the installer for the first time, it seemed as if it had finished installing. There was a warning about HTTP and FTP proxies not being defined, but I ignored it at first, thinking the installer was done and just complaining a bit. It turns out that it actually fails to install if those proxies are not defined.

To circumvent this, I just set them to empty values, and that did the trick:

export http_proxy=
export ftp_proxy=

Then unpack and run the installer:

tar xvzf VMware-vSphere-Perl-SDK-5.0.0-422456.i386.tar.gz
cd vmware-vsphere-cli-distrib
sudo ./vmware-install.pl

This time, the installer will install the missing Perl modules via CPAN, and after a few minutes (depending on how fast your system is), it will complete the installation.

Now, when you add a VMWare ESX system within OpenQRM, it will establish a connection to VMWare without a problem, and add the VMWare system to the Appliances database.