Venture Capitalists Hijack “Angel Investor” Term?

I read an article today about some “Angel” group in New York City, and I realized something horrible is happening. It seems VCs are trying to disguise themselves as Angel Investors. Maybe to attract more startups? Maybe to appear less frightening? Maybe to shed the negative “aura” attached to VCs?

It used to be that Angel investors gave small amounts of money to startups because they believed in the founder, not because they were looking to make an “Exit”. The meetings were one-on-one; it was personal, it was friendly, warm, and humane. The Angel investor remembered being in the Entrepreneur’s shoes, and was not afraid to say things that were perhaps not very politically correct. That works fine in a one-on-one, but in a group you will most likely remain politically correct, and refrain from giving the kind of advice you’d give in a one-on-one session.

So when I read that article about how that group of “Angel Investors” operates, it sounded to me more like a group of Bankers or VCs than a group of Angels. I was quite annoyed, to say the least. While I have no problem presenting my company to a crowd, when it comes to an investment meeting, I do not believe elevator pitches or 15-minute presentations are the right tool for presenting to Angel investors (or in fact to any investor).

How it’s actually supposed to work

I believe in two people meeting and taking a deep dive into the core of the startup being presented. One is the Entrepreneur, who may be seeing his own startup reflected, for the first time, from the perspective of the Angel investor (an eye-opening experience for some!), and the second is the Angel investor, who has the challenge and responsibility of getting into the Entrepreneur’s mind.

I do not believe in a murder of crows (a bunch of VCs sitting in a room, with a lone entrepreneur presenting to them). This is not “American Idol”. It is criminally presumptuous to think you can judge a startup from one short presentation. It’s also sad to think that a startup might get filtered out because the entrepreneur fumbled some of the answers. Not everybody was born a Steve Jobs, and even Steve Jobs was not born Steve Jobs; he was shaped and hardened by his life and experiences over a very long period of time. An entrepreneur is not there to entertain with a magical presentation that came out of a unicorn’s ass hopping happily on a rainbow.

Any veteran partner in any business will tell you – a partnership is like a marriage. And as in most good marriages, you will find people who nurture each other, allow each other to grow, allow each other to make mistakes and learn from those mistakes, and don’t judge each other too harshly all the time.

So yes – I believe in one-on-one sessions, where even if the entrepreneur is rejected by the angel investor, at least he comes out with tons of personal advice; he comes out smarter. Maybe the angel investor learns something too – something he would never have learned, about the person standing in front of him, in a room full of other angel investors. Something that might have changed his mind, and turned him into a fan and a believer, rather than a minute-judge.

Angels can’t possibly know everything, right? After all, while many of them succeeded through personal sacrifice and some very hard work, some of it was also pure luck: being in the right place at the right time, and knowing the right people. How many people know the right people? Despite the world being small, not too many.

So be humble, be humane, be respectful of yourself and others, regardless of how many silly ideas were or are presented to you. You are dealing with people, and while money is important, so is the journey. Feeling a bit “burnt”? Do the right thing and take a short (or long) vacation from investing; just whatever you do, don’t vent your accumulated frustrations on entrepreneurs.

So am I doomed as an entrepreneur?

No. I believe the good old Angel Investors still exist, and are out there somewhere. You just need to steer clear of the rotten ones. When you consider meeting with an angel investor, check the following list:

  • Was it easy to get a meeting with the angel investor, or did they do you a favor and give you a meeting some 30 to 50 days from today? Bonus points if you managed to talk to the Angel Investor directly, and did not have to go through bureaucracy or secretaries.
  • Are they investing on their own, or in “packs of wolves”? I recommend you avoid “panels” of 3 or more investors. If an Angel investor is incapable of deciding on his own whether he likes your idea, you don’t want him or his money.
  • Are you expected to prepare a detailed business plan? Show traction? That’s not angel investment, that’s venture capital. Stay away from those clowns looking to make money on your exit, and find a real Angel Investor.
  • Is the investor constantly late, difficult to reach, canceling or postponing meetings, and generally flaky? Ask yourself whether you really want to do business with a disorganized entity. Especially as you take your first steps setting up your venture, you cannot afford to waste time on cancelled meetings and flakiness.
  • Are they behaving like assholes, trying to lump your ideas into a narrow category and completely missing the point? Telling you there is already a company like yours out there? (Google was not the first search engine, Yahoo was not the first news portal, and Skype was not the first VoIP app!) Are they just plain rude? Get out of there and don’t come back. And warn your friends.
  • Is the Angel Investor knowledgeable? Are they asking you smart questions? Do they understand your answers, or are they foggy at best? The best investors are investors who understand your venture on a deep level. They may even become your fans and followers, if your idea really touched them. That’s the kind of Angel Investor you want, and should strive to find.


Why #OpIsrael is good for Israel

First, let me say a few words about what Anonymous actually is: an idea. A man is fragile, and so is a group of people, but an idea – that can’t be killed.

But an idea can be hijacked by anyone. Anyone can claim they are Anonymous; I can say I am Anonymous and you won’t be able to prove it one way or the other. The problem is that the group has a reputation for avenging the weak and oppressed, for exacting justice where justice is lacking.

I am not going to go into the actual issue of whether or not an injustice is being committed against Palestinians or Israelis. I believe both have suffered more than enough, and are hostages of political movements with radical ideas.

Here is what I believe #OpIsrael will actually achieve (and has already achieved, even before it happened):

  1. Increase world awareness of the Israeli / Palestinian problem. Except the world is already very much aware of this problem (and sick of it, quite frankly), so this is kinda pointless and will achieve nothing.
  2. Improve security systems in Israel – I know that IT engineers in Israel have been preparing for a few weeks now, backing up data, securing servers, reconfiguring networking equipment, purchasing new equipment, etc.
  3. With the Israeli mind hard at work on creating security solutions for the silly hacking attempts by “Anonymous” operatives, new security products will come out of the Israeli hi-tech sector, with existing products significantly improved to deal with such situations in the future.
  4. It will increase security awareness at ISPs and other large companies in Israel, and convince executives to increase budgets for security equipment and personnel. The situation, and the media coverage, will surely give them a great excuse to do just that.
  5. It will teach network operators in Israel and Europe to deal with such attacks, and nudge them to better organize and coordinate their efforts.
  6. Show the world that “Anonymous” is just a group of anonymous people – they could be terrorists fighting for an evil cause, or good guys fighting for a good cause. The point is that you don’t know, which is the essence of the Anon movement. But one thing is for sure: if Anonymous was a “good” group in the eyes of public opinion, it will now show its “sinister” face due to its association with a terrorist organization. That will eventually harm peace activists everywhere, because tracking tools will improve, and punishment for participating in such attacks will become more severe.

And this is why I see the whole thing in a pretty positive light for Israel. Whatever short-term damage a bunch of script kiddies might cause will be outweighed by amazing long-term benefits for the state of Israel and for the world.


On Desktop Wars, and why we are all losing

One of the reasons Macromedia Flash is so problematic, and on its way “out”, is that it was always a plugin. It does not have direct access to the browser’s DOM, nor do scripts on the page have access to the internal DOM within the Flash object. They are two separate entities, running in two separate threads. Flash, being somewhat “dangerous” because of the kind of functions it can potentially expose, is executed in a “sandbox” which is supposed to keep it from doing any damage to the system.

If Flash had been properly designed, it would simply be an “engine” that offered accelerated rendering, advanced animation functionality, and new DOM object types, and it would have exposed those functions and types to the DOM. Developers could then script those advanced functions using JavaScript instead of ActionScript, and utilize the new object types. In addition, since all Flash objects would actually live within the browser’s DOM, the Flash functions could potentially work on ANY HTML element: tables, divs, spans, etc. Imagine the possibilities, the richness of the documents we would get. The “Web” would be so much nicer and more advanced; we could have had Web 3.0 back in 2001. And I argue that the VIDEO element would probably never have been invented, for lack of necessity, if Flash had just done this one simple thing: exposing its functionality as functions and object types within the browser’s DOM, instead of keeping them separate in a proprietary container that nobody can touch.

I see this as parallel to the way current Window Managers function alongside X11. Instead of baking specific behavior into the core window manager, I would instead just expose raw functionality: animation functionality, effects, shaders, object types, etc. I would then allow scripting it all via scripting engines that plug into the DOM (in this case, a Desktop Object Model). You could have packages that define the UI/UX language, written in JavaScript or ChaiScript or really any scripting engine; as long as they have access to the DOM, they can manipulate the objects on the desktop and use the functions exposed into the DOM by the various plugins and by the WM itself. One could easily produce a Metro UI clone using a bunch of scripts that manipulate the DOM. Or really any other UI: BeOS, Amiga, anything. It could be 3D, or 2D, or even 4D! It could even be networked, with some of your desktop elements living on additional nodes in the cloud. Once you have a DOM which can be serialized into JSON objects, the potential is infinite. Imagine, then, how simple it would be to share an application across the network: yours looks the way you like, with your graphics and eye candy, but the substance, the “juice” of what you’re viewing, is transferred to someone else’s desktop, where their own UI scripts apply their own animations and eye candy to the same application/content, and they see it the way they like.

I believe this is where Desktops will eventually go. There will be a DOM (Desktop Object Model); there will be “plugins” that introduce eye-candy features, shaders, effects, maybe physical modeling effects, plugins that fetch data from the net and inject it into the DOM, etc. And there will be the underlying engine which just lets those plugins access the hardware via some HAL. This is the kind of desktop I would love to have in the future: a desktop where everything can be scripted. Where I decide how windows are maximized, and where maximizing a certain window can trigger specific actions – for example, lowering the priority of all other applications when the window I just maximized is a media player, so I can watch a movie without other apps bothering me or hogging the CPU. That’s just one example; I’m sure people will come up with amazing things once they have a DOM and plugins that allow scripting it.


Goodbye Apple?

I don’t like the way things are going with Mac OS X. My fears that something was slowly getting worse were validated yesterday while visiting the Apple Store’s Genius Bar. While my laptop was being checked for issues, I told the Genius who was taking care of me that I had not rebooted the laptop in 27 days. He said that not rebooting might cause instability, and recommended I reboot my laptop at least once a week. When I looked at him incredulously, he called over another Genius who was standing at the next stall, and that other Genius confirmed it.

Really? A Unix OS with a BSD core going unstable if not rebooted every week?

The truth is that I look at the system logs, and I don’t like what I’m seeing. There are things going on under the hood that are quite worrying and annoying; there are too many errors and warnings in the logs. For a company that prides itself on producing cleanly designed products, I would expect the same philosophy to be applied under the hood. Unfortunately, this is not the case.

The “rumor” in chat rooms is that in truth, Apple hates us Geeks. That the ideal Apple customer is the typical mindless Zombie user, who buys products because of their aesthetics and because they “just work”. The user who wants simplicity, and who won’t sniff around log files. It feels almost like Apple has become more of an “iOS” company, and is becoming less and less of a “Mac OS X” company.

This caused me to start researching alternative hardware: a thin and light 15″ laptop that will run Ubuntu Desktop, with a quad-core i7 processor under the hood, at least 8 GB of RAM, and a very fast GPU with 1 GB of dedicated RAM. It is this search that made me realize that Apple’s competition is shooting itself in the foot, and is practically driving consumers into Apple’s open arms.

However: The non-Apple scene is a mess!

How come? Well, when was the last time you tried to shop for a “PC” laptop? Have you seen how many processor options are available even within a single family of processors? How many Intel i5 and i7 variants are out there? How many GPU types and variants? How many disk drive standards? Sizes? Speeds? Protocols? Cache sizes? TRIM support, anyone?

The truth is that the moment you step outside of Apple’s realm, you find yourself in a jungle. The “Experts” all have their opinions on what the best laptop/desktop is. Who do you trust? Who do you believe? And for how long will their opinions hold true? Probably not too long…

This reminded me of the research on consumer happiness as a function of choice. It turns out that when consumers have too many options to choose from, they are first overwhelmed by the selection, and later unhappy with their choice, thinking there might have been a better product they could have chosen. But give them just 2 to 4 products to choose from, and they will be absolutely happy, believing they selected the best product. Knowing this, let’s look at Apple’s product offerings: you basically choose whether you want a light laptop with fewer features or a heavy laptop with more “pro” features, then the screen size and resolution, then the amount of RAM and disk space, and you’re done!

If only it were that simple with non-Apple hardware, I am pretty certain fewer people would switch to Apple products.

One thing is for sure: Apple is beginning to disappoint me, and I am now on the lookout for a great, thin, sturdy laptop with great specs. It will run Linux for me, and will NOT require a weekly reboot just to keep things “stable”.


WPML: WordPress Plugin gone wrong

A few years ago I decided it was time to offer one of my WordPress sites in more than one language. After researching a bit, I found the best product was WPML. I found references to it in the WordPress Plugin Directory, googled it, visited the website, and decided it was worth the $79. So I purchased it and started the long and painful job of translating my website.

My website was quite technical, and I found it difficult to believe the iCanLocalize translators would do it justice, especially considering the target language was Hebrew, which is a language with many horrible pitfalls when translating technical terms.

A good example of this is Microsoft Windows 95, which was very poorly translated to Hebrew. So poorly that it was the subject of many jokes when one of the first Hebrew translations appeared back in 1995. Screenshots with funny translations were circulated over Email.

So needless to say, I did not trust their translators and decided to translate it myself. I was very happy with the results, because the pages ended up not being a literal translation while still carrying the same message. In fact, I felt the translated pages were better worded than their original English counterparts, if only because I had to think about the meaning and how to say it better in Hebrew, and I was quite successful at that.

But I digress. Fast forward two years, and I find myself in a Mafia situation. The plugin has upgrades, but I cannot upgrade my WPML plugin; apparently I need to pay iCanLocalize more money first. I decided to hold off on upgrading, and instead to follow their release notes and wait for a compelling feature that would force me to upgrade. Unfortunately, two bad things happened:

1. The bad: No compelling reason materialized for upgrading. It was all either security fixes, or minor improvements for compatibility with other plugins.

2. The worse: Security fixes were introduced, but I was not allowed to receive those fixes!

This pissed me off. Enough so that I decided to write about it and explain all that is wrong with their practice, and hopefully warn other WordPress site owners about this.

You see, if I cannot upgrade the product, at the very least I do not want to be reminded about it. Every time a new version is released, my WordPress Updates Manager alerts me, and because I decided on principle not to pay the “Mafia” for upgrades, it angers me even more to see those warnings all the time. Why do I call them a “Mafia”? Because that’s just how the Mafia works: they throw a brick through your storefront window, and a bit later, while you are still cleaning up the mess, the goons show up and offer you “protection” in exchange for a monthly “retainer” ($$$).

I believe that if you make a plugin and decide that new features should cost more, that’s fair. Sure – developers need to make a living. However, I also believe you have a responsibility to your previous customers. This is why auto manufacturers are forced to keep a stock of replacement parts for their cars for 7 years after a model is introduced to the market.

A bug YOU introduced is YOUR responsibility, and you need to fix it for me, or else the product I purchased is defective by definition. Security updates should also be part of the deal, and should be back-ported to my old version. I should not have to pay you just because you introduced a security flaw into your own product and won’t fix it for your old users. That’s just totally irresponsible.

I eventually decided to remove Hebrew from my site and uninstall the plugin, effectively throwing away the original $79. It is the first time I have thrown away a piece of software I purchased for ethical reasons.

Lessons learned about the A13 OLinuXino with A13-LCD7-TS

I recently purchased two of those very sweet Olinuxino A13 boards for a new project I’m working on. I bought them directly from Olimex in Bulgaria, only to later discover that I could have ordered them from another website in the US. The prices are a bit higher, but shipping is lower, so for small quantities it might work out better.

I also bought two LCD Touch screens, two MOD-RS232, and some other boards that I plan to experiment with further down the road (for example to control higher voltage relays).

What this guide contains

In this guide I will show you:

  1. What hardware to purchase for your development kit
  2. How to build your own custom Linux Kernel
  3. How to build a bootable SD Card with Debian Wheezy
  4. How to get X11 working on the A13-LCD7-TS with Debian
  5. How to get the Touch Screen to work with X11
  6. How to start a locked-down, full screen web browser

By the end of this guide, you will hopefully learn how all the components work together, and as an added benefit you will also get a fully working development system that allows you to comfortably start working on your product, based on a very solid platform, with stable, reliable, and reproducible results. I believe learning is more important than doing, and this is why I’m making this guide as detailed as possible.

Another important aspect is the comments on this post, which come from pretty smart people who tried all this out and had really important feedback. Without that feedback, this guide would certainly be lacking. I would like to thank all of you for taking the time to comment, so that others may have a better experience configuring their systems.

My initial impression

I initially tried to purchase a Raspberry Pi, but with supplies being non-existent I was forced to look for alternatives, and I’m very glad I did, because the end result is that I have a more powerful platform to work with. It’s faster, has 3 USB ports and tons of GPIO pins, an on-board NAND chip, a WiFi adapter, an SD card reader, audio in/out, and the pretty useful UEXT connector. It runs very cool, and since there are no moving parts or fans, it is naturally silent.

In addition to this specific product being quite amazing, feature wise, Olimex has a very talented team of engineers who crank out new boards and designs at a pace rarely seen. They are already working on an A10 board, which packs even more impressive hardware and features. Once in a while they even find the time to write a new guide or how-to, and post it on their blog.

Apart from the Olimex engineers, who are obviously dedicated to their cause, there’s a good number of individuals working on the “ARM Netbook” project. The Olimex engineers, the ARM Netbook developers, and the community of users all hang out on the Freenode IRC network, in the channels #olimex and #arm-netbook (if you have an IRC client, you can click those links to join the rooms directly). I strongly urge you to join those chat rooms if you have questions, or if you feel you can help others with your skills and knowledge.

And if this is not enough to convince you, consider this: All Olimex products are 100% open source hardware, which includes the CAD files for the boards, routing, etc. You can truly do whatever you want with the designs, and all the designs are fully available and downloadable from their github account. This is not the case with the Raspberry Pi where some parts are closed with strange excuses given by project members.

Important adjustment of expectations

I think that for a short while I was under the illusion that things would just work out of the box. I was quickly disillusioned, however, because the problems piled up and some of them initially proved quite a challenge. Thankfully, the Olimex hardware has quite a following: a mix of hardware enthusiasts and commercial entities purchasing the boards for their embedded Linux projects. They all socialize and help each other on the Olimex forums.

On the other hand, the issues I encountered are all software-related and platform-specific. This means that with some reasonable effort and enough reading, they can all be solved. I solved my issues within a week, and I am now very pleased with the results. I am by no means done – as of this moment, I still have a segmentation fault in the touchscreen driver, but I’m sure I will have it resolved within the next 24 hours.

So what I learned is something I should have known right from the start: This is not a mass market product. It is a very new product (May 2012!), and this means whoever utilizes this platform is pretty much on the bleeding edge of open source hardware technologies, and this is not without implications. On the other hand, this is what makes it fun. Reading data sheets, really understanding how the board functions, what components are on the board and why they were chosen, what each pin does in the various on-board connectors, and so on and so forth. This is a platform that forces you to take off your shirt and dive into unknown waters, but you get to swim with some pretty cool fish!

Simple challenges during my first order

No Power Supply: While ordering the boards, I needed to purchase power supplies separately, since Olimex does not sell power supplies for America. A quick search based on the power rating, polarity, and connector dimensions turned up a pretty good power supply on Amazon. It arrived within a few days and I was able to power the boards.

No LCD Cable: This is when I realized the LCD comes without the cable that connects it to the board. Fortunately Olimex support (Tsvetan) replied that the cables are the same as IDE cables. Again Amazon to the rescue, and within 2 days I had two cables and the LCD was hooked up to the board. At this point I was able to boot the system, which comes preloaded with Android.

Wrong Screen Resolution: When Android booted, I noticed the entire screen was offset some 20% to the right and some 10% toward the bottom. The touch screen, however, was properly calibrated, so I had to touch the icons in the area where I thought they should be. This is pretty easy to resolve, as I found out later, but not from Linux or Mac: you need to install a program called LiveSuit on Windows. A port of LiveSuit for Linux exists, but it does not recognize the Android IMG file. I tell you this now, but it took many days of trying, failing, and talking to people online until I received final confirmation, from people at Allwinner (the company that produces the SoC, or System on Chip), that it only works on Windows.

Bad USB to Serial Strategy: I had a PL2303 at home, so I didn’t buy the USB-to-Serial cable sold by Olimex. This was a mistake, because the one I had did not work very well. I should have just purchased it, but I didn’t think I would need it. It ended up being very important, because it lets you log in to the console and configure the OS to connect to the network, for example, or see the output from u-boot while the system is booting. It’s just a good idea to have console access.

Which brings me to the next section, for anyone considering buying Olimex products – or any hardware products in general, since this does not apply only to Olimex but to any hardware project.

What to buy as a starter kit

I feel I have learned quite a lot about the platform because of all the mistakes I made and all the obstacles I encountered, so I am thankful for those mistakes. However, if you are in a hurry to create a commercial product based on this platform, you can avoid all the mistakes I made by simply buying the correct products for your kit.

This is my recommended list of materials for any person trying to develop on this platform for the first time:

  • From the Olinuxino A13 category:
    • One of the Olinuxino boards (based on whether you need 1 or 3 USB ports, WiFi or no WiFi, 512 MB or 256 MB RAM on the Micro version, etc). I purchased the most expensive one, the A13-OLinuXino-WIFI.
    • One of the LCD products, without a touchscreen (A13-LCD7) or with one (A13-LCD7-TS – that’s the one I have). I like having the LCD because it’s cheap enough, and it won’t take away one of your monitors while you work with your kit. The size is perfect for development purposes, and the resolution (800×480) is high enough to make the fonts quite sharp and readable.
    • One of the SD cards (A13-OLinuXino-MICRO-SD or A13-OLinuXino-SD); it comes preloaded with Debian Linux (Wheezy), and will save you a lot of time and effort (and most of the mistakes I made).
  • If you are getting an LCD, get this cable: CABLE-IDC40-15cm
  • Get the following cables for sure:
  • If you live in Europe or a country with European power sockets and voltage, definitely get this power supply: SY0612E
  • If you plan on using an off the shelf USB to Serial cable with a DB9 connector, definitely get this UEXT adapter: MOD-RS232

Lessons about building and booting a Debian SD Card

The main lesson here is that if you want to be up and running as soon as possible, you should not even focus on trying to build the Debian image yourself: Olimex will sell you the SD card with the Debian image already built for you.

On the other hand, I do feel it is a great learning experience. More importantly, I believe that if you are serious about the product you are building, you have to learn how all the parts fit together, and building your own Debian image is a great way to do that.
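If you end up with a card image file rather than a pre-flashed card, writing it to an SD card is straightforward. A minimal sketch – the image name debian_wheezy_a13.img and the device /dev/sdX are placeholders I chose, so substitute your own:

```shell
# Write a card image to the SD card.
# WARNING: dd will happily overwrite the wrong disk;
# verify the device name with `lsblk` or `dmesg` first.
dd if=debian_wheezy_a13.img of=/dev/sdX bs=4M

# Flush all buffered writes before removing the card
sync
```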

Remember how I said in the beginning that this is all very fresh technology? Some of the options I’m going to show you here were only just committed to the Github repository of the sunxi-bsp project by techn_ from IRC.

Before we start – Installing Prerequisites

It’s important to have a correct system time, or you will get warnings about some dates being in the future. While we’re at it, let’s also install git, the compilers and other utilities required to build our packages:

# Install compilers and related utilities
apt-get install build-essential git automake autoconf libtool ntpdate pkg-config

# Let's make sure our system time is correct (pool.ntp.org is one common choice)
ntpdate pool.ntp.org

If you connect to your A13 over SSH (like I do – it’s way more convenient than working on the A13 directly), it is quite possible the network will disconnect after you update the system time. I’m not exactly sure why this happens (I have some theories), but if you find yourself disconnected from your A13 after updating the system time, don’t panic, and don’t reboot. Just connect to the console as root and type:

/etc/init.d/networking restart
/etc/init.d/ssh restart

That’s it, your A13 will now be available again via SSH.

Quick start guide – Building the Kernel

Fortunately for us, this task has been made infinitely simpler by the good guys on #arm-netbook via the sunxi-bsp project – an umbrella project, and a set of scripts, designed to bring in everything you need to build the kernel, u-boot, and the script.bin file, as well as tools that help you hack around with the AllWinner hardware.

# Let's fetch the sunxi-bsp project into /usr/src:
cd /usr/src/
git clone git://

# Now we configure it for the A13 Olinuxino:
cd sunxi-bsp
./configure a13_olinuxino

# This time, make will build everything!
make

Once this is done, you’ll have the kernel, as well as the modules and u-boot under the /build/ directory. You now have the latest supported kernel (At the moment 3.0.52).
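To actually boot the new kernel, those build outputs have to end up on the SD card. A rough sketch – the mount points /mnt/boot and /mnt/rootfs, and the glob paths under build/, are placeholders, so check where your sunxi-bsp checkout actually puts its outputs and where your card partitions are mounted:

```shell
# Copy the kernel image and script.bin to the card's boot partition
cp build/*/uImage /mnt/boot/
cp build/*/script.bin /mnt/boot/

# Install the freshly built kernel modules into the target root filesystem
cp -r build/*/lib/modules /mnt/rootfs/lib/

# Flush writes before removing the card
sync
```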

Important step: Enabling sun4i-gpio

If you own the A13-LCD7-TS hardware, you will notice it doesn’t power on under Debian. It took me a while to figure out why: it turns out the backlight is powered by pin 15 of the GPIO port to which the LCD is hooked up. Android has a PWM (Pulse Width Modulation) component that pulses this pin at varying duty cycles to give you different levels of brightness.

Since Debian does not have a PWM module, all we can do is power the pin, and the display will always be stuck on maximum brightness. That’s fine by me anyway.

To set that pin to the On mode, we need two things:

  1. Enable GPIO in script.bin, set pin 15 to be On by default, and update that file in your boot partition (same place where uImage is).
  2. When Debian loads, we need to load the sun4i-gpio module, so that the settings will actually be in effect. But first we’ll need to build it with the kernel, since it’s not enabled by default.
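
For step 1, the usual workflow (as I understand it) is to convert script.bin to its textual .fex form, edit it, and convert it back. Here's a rough sketch using the bin2fex/fex2bin tools from sunxi-tools (which sunxi-bsp pulls in); the exact section layout of the fex file depends on your board, so check your own file before editing:

```shell
# Sketch only: convert script.bin to editable text, tweak, convert back.
bin2fex script.bin script.fex
# ... edit the [gpio_para] section in script.fex to enable the backlight pin ...
fex2bin script.fex script.bin
# Then copy the new script.bin back to the boot partition (next to uImage).
```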

Let’s enable the GPIO Kernel modules:

cd /usr/src/sunxi-bsp/
make linux-config

This will drop you into the standard “menuconfig” interface, except it will edit the correct .config for your setup (in our case: a13_defconfig). Navigate to Device Drivers —> Misc Devices and enable the following features as Module <M>:

<M> An ugly sun4i gpio driver
<M> Sunxi platform register debug driver

Exit back and save the settings, then run make again:

make

This time, make sure you see sun4i-gpio.ko in the list of generated modules. If you see it there, you have enabled the correct modules.
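
Once the kernel and modules are installed on the board, one way to handle step 2 above (loading sun4i-gpio when Debian boots) is the standard Debian mechanism. The module name here is taken from the .ko we just built:

```shell
# Load the module right away:
modprobe sun4i-gpio

# And list it in /etc/modules so it is loaded automatically on every boot:
echo 'sun4i-gpio' >> /etc/modules
```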

A word of caution about the Paranoid Android…

During my trial and error trying to activate the LCD, I experimented with a pre-built kernel that I extracted from the built-in Android OS that came with the A13. The theory was that it might have a PWM module to power the LCD.

It booted the OS just fine, but I discovered that non-root users did not have permission to create network file descriptors. Through some more research on Google, I discovered this is controlled by a kernel config flag called CONFIG_ANDROID_PARANOID_NETWORK. If enabled, the kernel requires users to be in a group with gid 3003 to be allowed network access, and that's just one of the restrictions. This kernel also broke the xf86-input-tslib module with a segmentation fault.

So my warning to you is: Do not enable CONFIG_ANDROID_PARANOID_NETWORK unless you are prepared to deal with the implications (making code modifications in several modules, and configuring the system very carefully).
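
If you do decide to run with CONFIG_ANDROID_PARANOID_NETWORK enabled, the minimal workaround for the networking restriction is to put your users in a group with gid 3003 (Android's AID_INET). A sketch, where the group name is made up and only the gid matters:

```shell
# Create a group with the magic Android gid and add a user to it
groupadd -g 3003 inet
usermod -a -G inet someuser
```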

Preparing the bootable Debian SD Card

Coming soon. In the meantime, consult this guide on the A13 Olinuxino Wiki Page

In fact, that page contains ready made images, along with various tools that will help you tweak the script.bin file to your hardware configuration (VGA vs. LCD for example).

Building tslib and xf86-input-tslib

In order to support the A13-LCD7-TS we have to use a patched tslib and xf86-input-tslib. The way I understand the patches, they are designed to work around the multi-touch limitation in the hardware.

1. First download the 3 patch files from this github directory and place them in /usr/src/

2. Perform the steps below to patch, compile and install tslib in your system:

cd /usr/src
git clone
cd tslib
patch -p1 < ../tslib.patch
autoreconf -vi
./configure --prefix=/usr/local
make
make install
ldconfig

At this stage, tslib is installed on your system, and running ldconfig makes sure the libraries are immediately accessible. Note that if you installed the libraries into a non-standard location, you should make sure to add that path to /etc/ and then run ldconfig again. To find out the touchscreen device number, run “dmesg | grep sun4i-ts”. Sample output below:

root@debian:/usr/src/tslib# dmesg | grep sun4i-ts
[ 15.540000] sun4i-ts.c: sun4i_ts_init: start ...
[ 15.560000] sun4i-ts: tp_screen_size is 5 inch.
[ 15.560000] sun4i-ts: tp_regidity_level is 5.
[ 15.570000] sun4i-ts: tp_press_threshold_enable is 0.
[ 15.580000] sun4i-ts: rtp_sensitive_level is 15.
[ 15.590000] sun4i-ts: rtp_exchange_x_y_flag is 0.
[ 15.600000] sun4i-ts.c: sun4i_ts_probe: start...
[ 15.620000] input: sun4i-ts as /devices/platform/sun4i-ts/input/input1
[ 15.640000] sun4i-ts.c: sun4i_ts_probe: end

As we can see in the output above (on my system), the device number is input 1.

To run ts_test and ts_calibrate, as well as run X11 with tslib support, we need to export the following environment variables:

export TSLIB_FBDEVICE=/dev/fb0
export TSLIB_TSDEVICE=/dev/input/event1
export TSLIB_CALIBFILE=/etc/pointercal
export TSLIB_CONFFILE=/etc/ts.conf
export TSLIB_PLUGINDIR=/usr/local/lib/ts

Note that in the TSLIB_TSDEVICE environment variable, the “event1” part corresponds to the “input1” device that we saw earlier. If your device is on “input4” then you should replace “event1” with “event4”.
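
As a cross-check, the input-to-event mapping can also be read from /proc/bus/input/devices. Here's a small sketch of extracting the handler name; it runs against sample text, since the real file only exists on the board (on the A13 you would pipe /proc/bus/input/devices instead of the here-document):

```shell
# Pull the eventN handler that belongs to the sun4i-ts entry
cat <<'EOF' | grep -A2 'Name="sun4i-ts"' | sed -n 's/.*\(event[0-9]*\).*/\1/p'
N: Name="sun4i-ts"
P: Phys=sun4i_ts/input0
H: Handlers=event1
EOF
```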

I have placed this block of exports in /etc/environment so they are all set on system boot. I also erased /etc/pointercal and reran ts_calibrate to re-generate that file. ts_calibrate will guide you through the calibration process, and when done it will write the new values into /etc/pointercal. You can then run ts_test to make sure you are happy with the calibration. ts_test will allow you to draw on the screen, or drag a small block around, which is great because it gives you a good feel for the accuracy and responsiveness of the touch screen.

By the way, one small “visual bug” I noticed is that ts_calibrate and ts_test do not clear the screen when they exit, but don’t worry about that.

3. Perform the steps below to patch and compile the xf86-input-tslib module:

# Let's get some X11 dependencies first
apt-get install xorg-dev xserver-xorg-dev x11proto-core-dev

# Now we fetch the module's source code and unpack it:
cd /usr/src
tar zxfv xf86-input-tslib_0.0.6.orig.tar.gz
cd xf86-input-tslib-0.0.6/

# Apply the patches
patch -p1 < ../1-xf86tslib-sigfault.patch
patch -p1 < ../xf86-input-tslib-port-ABI-12-r48.patch

# Finally, let's get this thing built and installed
./configure --prefix=/usr
make install

Now that we have the module compiled and installed, we need to tell X11 about the new device. We’ll do this by creating a new file under /usr/share/X11/xorg.conf.d/. I chose to call this file: 20-touchscreen.conf.

Here’s what my file looks like:

Section "InputClass"
        Identifier "Sun4iTouchscreen"
        Option "Device" "/dev/input/event1"
        Driver "tslib"
        Option "ScreenNumber" "0"
        Option "Rotate" "NONE"
        Option "Width" "800"
        Option "Height" "480"
        Option "SendCoreEvents" "yes"
        Option "Type" "touchscreen"
EndSection
At this point you are ready to start X. Let's start it in verbose mode so you can see the logging on screen:

X -verbose

If you did everything right, you should see a mouse cursor on the screen, and you should be able to properly click on screen items and even drag scroll bars to scroll around. Congratulations!

ApacheBench: Proper Usage

When you are benchmarking a web server, you may fall into the trap of benchmarking against a small file (maybe the Debian default “It's working!” index.html file). I decided to write about this pitfall, so that my friends & readers will get a more realistic benchmark.

I’ve found the following general guidelines are a good idea to follow when running a benchmark on a web server:

  1. You should benchmark with gzip enabled, since that will more realistically simulate what’s going on between most browsers and your web server.
  2. You should benchmark from another machine (preferably remote).
  3. You should benchmark against a real page on your site, not a very small test file (for example benching with robots.txt is a bad idea).

With Apache Bench (“ab”), you would do this by adding the following switch: -H “Accept-Encoding: gzip,deflate”

An example “ab” command would look like this:

ab -k -n 1000 -c 100 -H "Accept-Encoding: gzip,deflate" ""

This command will simulate 100 concurrent users, each performing 10 requests, for a total of 1000 requests. The -k switch instructs Apache Bench to use KeepAlive.

If your sample web page weighs 250k uncompressed (just for the HTML body), that's a lot of data to transfer between your web server and the machine from which you are performing the benchmark tests. The problem is that the network interface (or other transfer medium) will probably choke well before you achieve the maximum requests per second. You may then find yourself confused, spending time trying to tweak nginx or varnish, when in fact you are just hitting the limit of your network card, or maybe even an enforced maximum-speed policy from your hosting company (some limit your network card at the switch level, others use rate limiting at the firewall level).
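
To put rough, illustrative numbers on that: 1000 requests per second of a 250 KB page needs more bandwidth than a gigabit link can deliver (roughly 125 MB/s at best), while the compressed version fits comfortably:

```shell
rps=1000      # requests per second you are aiming for
page_kb=250   # uncompressed page size in KB
# Integer math is fine for a back-of-the-envelope estimate
echo "uncompressed: $(( rps * page_kb / 1024 )) MB/s"      # well above gigabit
echo "compressed:   $(( rps * page_kb / 10 / 1024 )) MB/s" # gzip to ~10% of size
```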

But in the real world, the page would probably be transferred compressed anyway (especially if your config enables gzip, which in most cases it does by default). Such a page might compress to 10% of its original size (in this case, to 25k), giving you a much nicer and more realistic “Requests Per Second” number.

A proper, realistic benchmark should test a page that goes all the way through to the database and performs one or more queries. This allows you to make sure that you have proper caching enabled at the various levels of your web applications.

An important note about KeepAlive

With Apache: If you are experiencing very high traffic (many concurrent users), it may be a good idea to keep this value rather small, say between 3 and 5 seconds in most reasonable cases. You do this so the web server can free worker threads for new connections, rather than keeping them alive with a user who may have already finished loading the page. If not enough threads are free for new connections, new users will have to wait until some are freed up; and if new users arrive faster than threads are released, they will all be blocked and never reach your page. On the other hand, you may not want to turn KeepAlive off entirely, especially if your pages import a lot of images, CSS files and JavaScript files, because then each such resource would force your web server to spawn another worker thread, and that might cause a load spike under heavy traffic. So it's a fine balance: you want it enabled, but you don't want it to be too long, especially in the age of broadband, where a properly compressed page can finish transferring to the user in less than 3 seconds.
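
For reference, here is what that looks like in the Apache configuration (the timeout is the 3-5 second range discussed above; MaxKeepAliveRequests is left at a common default):

```apache
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```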

With Nginx: Connections in nginx are handled differently (an event loop rather than a worker thread per connection), so I personally feel that keepalive can be kept at 60 seconds with nginx, without cost to memory or thread pools.
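
The equivalent nginx directive, inside the http (or server) block:

```nginx
http {
    keepalive_timeout 60;
}
```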