The Nyaruka Blog

Polio, Meet Mobile: Targeting the Final 1% of Global Polio Cases with TextIt (2015-06-24)

Here’s a story from the field spotlighting a RapidPro implementation that we proudly maintain for UNICEF. As we’ve discussed before, RapidPro is the Open Source software platform that powers TextIt.

Polio is on the brink of eradication. The total number of global polio cases has decreased by 99% since 1988.

For UNICEF Polio Innovation Lead Asch Harwood and his colleagues, that isn’t good enough. While the Global Polio Eradication Initiative (GPEI), in which UNICEF is a key partner, has eliminated the overwhelming majority of global polio cases, the final 1% has proven elusive. “We’ve reached a point where current methods are no longer working. We need to think outside the box.”

Polio flourishes in areas where awareness of and access to vaccination are scarce. These are communities in which literacy rates and income levels are low, and access to technology is limited. The average person in these communities might not be aware of vaccination services or might feel pressure from local leaders to avoid vaccination.

Today, polio circulates in Afghanistan and Pakistan. In Nigeria, the only other country where polio is endemic, the total number of polio cases stands at 6 as of 2015, though health authorities have not detected a new case since 2014. Nevertheless, polio eradication can only be certified once the afflicted region demonstrates that transmission has been blocked for at least 3 consecutive years.

How can these people be reached, and what will enable that communication?

This is the question Asch Harwood and the polio communication team at UNICEF are attempting to answer. “It’s UNICEF’s job to make sure that people understand why vaccinations are important,” says Asch.

To reach the remaining 1% of cases, Asch and his colleagues are focusing their efforts on creating demand for the polio vaccine in those communities most at risk. In Afghanistan and Pakistan, this means working within potentially dangerous areas.

To reach these populations, polio communication teams engage in social mobilization, a communication for development (C4D) approach that seeks to change behavior by motivating a wide range of players to raise awareness of and demand for a particular development goal through dialogue.

Going into this project, Asch recognized the potential of mobile phones as a point of connection with people in communities most at risk for polio infection. After researching and testing various services that would allow him to build SMS and interactive voice response (IVR) applications to establish automated bidirectional communication at scale, Asch and the polio C4D team settled on RapidPro.

Asch points to user experience as the key factor that led the polio team to adopt RapidPro: “I’ve played with a lot of the other IVR tools, and you guys have definitely designed the best user experience I’ve seen so far.”

As an innovation lead, ease of use allows Asch to bring new ideas to life. “Probably the most powerful part of it is that because I was able to demonstrate it, I was able to get the buy-in necessary to help us start prototyping and piloting.”

The RapidPro platform allows Asch and his colleagues to build flowcharts, or Flows, that disseminate and collect actionable information geared toward improving vaccination awareness via a combination of SMS and interactive voice response (IVR). Asch chose IVR because the information he needs to share exceeds the 160-character limit of a single SMS.

For example, registered social mobilizers in target communities might receive an automated phone call from UNICEF’s team announcing a survey aimed at better preparing them for their jobs. The automated voice recording will ask a question in Urdu — in this case, a quiz question about the symptoms of polio — and then ask the user to press the number that corresponds with their response. If the user answers correctly, the Flow will provide additional information before moving to the next question. If the user answers incorrectly, the Flow will respond with the correct answer and provide an explanation before moving forward.

The goal is to equip registered social mobilizers with the information they need to counter harmful discourse and raise the awareness necessary to increase the rate of vaccination within their communities.

The polio unit’s RapidPro deployment integrates with Twilio’s Global Reach to make international calls. The success rate of these calls is cause for optimism: in an analysis of 7,000 calls made through Twilio connections to social mobilizers all over Pakistan, only 7% failed to connect, which suggests that mobile network coverage in Pakistan is extensive.

The project currently reaches upwards of 2,500 social mobilizers within Afghanistan and Pakistan. In the coming weeks, UNICEF plans to increase that number to 7,000, focusing on females in high-risk areas. Asch and his colleagues expect IVR to be particularly powerful in low-literacy settlements.

Next, Asch will be in Nigeria working closely with the Nigeria polio communications team to begin prototyping RapidPro applications with social mobilizers in that country.

To learn more about TextIt, visit our Learning Center, review our documentation, or watch this short video:

Sign up for a TextIt account today to start building your own SMS or IVR application. In keeping with our goal of fostering development, we provide 1,000 complimentary credits to every new account, as well as country-specific guides to integrating with local carriers and international gateways.

We also integrate with Twitter, allowing users to take advantage of wifi and data plans to send Direct Messages to other Twitter accounts.


Kellan Alexander
Learning from the Past - SMS in mHealth in Practice (2013-09-05)
I am probably late to the party, but I just discovered the fantastic set of journal articles "mHealth in Practice", edited by Jonathan Donner and Patricia Mechael. There are far too few such resources available for Mobile for Health practitioners, so it was with great interest that I read through the various case studies looking for lessons learned.

Since one of our goals with TextIt was to allow for these kinds of projects to be easily built and iterated on, I found it really interesting to read the articles while also considering which could be implemented with a standard TextIt installation. Technology is only a small piece of making these programs sustainable, but we believe that by making it easier for organizations to experiment with a tool like TextIt we will accelerate the rate of finding effective solutions that are appropriate to each unique context.

One of my favorite case studies was using SMS quizzes to increase HIV awareness and knowledge. This style of intervention is near and dear to our hearts, both because it was largely pioneered by TextToChange in nearby Uganda and because it is such an easy and clear way of helping educate a population on a public health issue. In this campaign, simple questions are sent out to the population, things like "Do you think a healthy person could have HIV?", and the users are prompted to respond. Based on their response they are educated further on the issue, either reinforcing their knowledge or teaching them what the right answer should be. These sorts of educational surveys are interesting for two reasons: one, because they provide some measure of how well the population currently understands various issues, and two, because the system corrects incorrect responses, it instantly educates participants as well.

Combined with even mild prizes, this is a low cost way to spread information and raise awareness. It also happens to be the sort of application that fits right in the sweet spot for TextIt. Building an SMS quiz is just building a Flow, and Flows give you many more options for handling responses. For example, on incorrect responses you could explain the right answer, then offer further detail if the user replies with 'more', otherwise moving them on to the next question. Combined with slotting users into different groups based on their responses, potentially for further education, you could easily build an effective way to reach large audiences.
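
To make that concrete, here is a minimal sketch in Python of the kind of branching a quiz Flow encodes. This is not an actual TextIt Flow or its API, and the question data is made up; it simply illustrates the logic: correct answers are reinforced, incorrect ones are explained with an optional 'more' follow-up, and respondents are slotted into groups for later follow-up.

# Illustrative sketch only: the branching an SMS quiz Flow encodes, written
# as plain Python with hypothetical question data (not TextIt's API).
QUESTION = {
    "text": "Do you think a healthy looking person could have HIV? Reply YES or NO.",
    "correct": "yes",
    "reinforce": "Correct! A person can look healthy and still have HIV.",
    "explain": "Actually, yes: a healthy looking person can have HIV. Reply MORE to learn about testing.",
}

def handle_reply(question, reply, groups):
    """Return the next message to send and record which group the respondent joins."""
    reply = reply.strip().lower()
    if reply == "more":
        return "HIV testing is free and confidential at your local clinic."
    if reply == question["correct"]:
        groups.add("answered correctly")
        return question["reinforce"]
    groups.add("needs follow-up")  # candidates for further education
    return question["explain"]

groups = set()
print(handle_reply(QUESTION, "No", groups))    # explains the right answer
print(handle_reply(QUESTION, "more", groups))  # optional extra information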

We think the flexibility of Flows, and the richness of the interactions they allow, will enable a lot more experimentation with this approach. Every context is different and every intervention unique, but some approaches have promise, and we hope that TextIt allows for quicker iteration to find the right fit for your environment.

In future posts we'll talk about some of the other case studies and whether TextIt can help implement them, but if you have your own experiences, good or bad, please share. I also highly recommend reading the full set of articles; there are lots of gems in there.

Nicolas Pottier
Bitcoin's Bottom Billion - Why the Developing World May Be Bitcoin's Biggest Customer (2013-04-09)

Bitcoin is a bubble.

There, now that I've got all the skeptics on my side, we can move forward. There are any number of arguments as to whether Bitcoin is currently under- or overvalued, but in the end nobody really knows, so let's just leave that be. As a matter of fact, I'm going to ignore the recent Bitcoin volatility completely and instead focus on a side of Bitcoin that doesn't get covered much: its use as a digital currency in the developing world.

Bitcoin is almost always billed as a replacement for credit cards and PayPal, which is a shame because on that front I think Bitcoin mostly loses: transactions can be slow to settle, Bitcoin security makes everybody paranoid, and most importantly there are very few places to actually spend said Bitcoins when compared to credit cards. Let's face it, for those that can get credit cards, they work incredibly, incredibly well, but that still leaves the other half of the world unaddressed.

For example here in Rwanda, virtually no one has a Visa or Mastercard. A few banks offer Visa cards, but they come with high monthly fees or minimum deposits. Rwanda, like its neighbors, is very much a cash society, which means that most digital goods are out of reach, not because they aren't affordable, but simply because most Rwandans don't have credit cards.

A case in point: hosting and domain names. At the kLab, a software innovation hub here, we work with tenants who struggle with making the jump from having a site running on localhost to one running on the web. Right now most muddle through by using free tiers on AppFog or Heroku, but once they need more than that they get stuck. There are a few enterprising souls who have gotten Visa cards and resell hosting, but the really excellent services remain inaccessible.

Step in Bitcoin, and that is largely solved. Today, right now, I could sell a Bitcoin to someone in the kLab for cash and have them go buy hosting or a domain on Namecheap, without having to apply to a bank or fill out any forms. It really is digital cash, and in societies where cash is the norm, I can't help but think that it will be a more natural leap than credit cards. All it would take to enable a whole new class of consumers is for the countless foreign exchange bureaus in Kigali to start exchanging Bitcoins.

Now of course East Africa is already famous for having lightweight digital payments, the most famous being M-Pesa in Kenya. But here I think we start to see why Bitcoin's decentralized nature is such a plus. M-Pesa has some serious transaction fees, really only works in Kenya, and most importantly doesn't really facilitate participating in the global marketplace. As much as I love M-Pesa, I don't think we'll be seeing Namecheap accept it anytime soon. Even regionally it is a struggle: while Kenya, Tanzania and Uganda are all neighbors and the largest mobile money markets in Africa, their systems aren't interoperable.

In that sense Bitcoin really is the world's digital currency. Foreign exchange bureaus across the globe could start selling Bitcoins and by doing so enable anybody to buy digital services.

But that is only half the story; perhaps the most exciting part of Bitcoin is how it enables merchants.

If you think getting a Visa card is hard in Rwanda, getting a merchant account so you can accept Visa payments is a whole different story. And again, here Bitcoin shines. Someone in Rwanda that builds a compelling service can instantly start taking payments from the rest of the world, without asking for permission, without filling out any paperwork and with the same fee structure as the biggest retailers.

So Bitcoin is exciting to me not so much because it is a new currency, but because it has the potential to be a globally recognized, yet completely decentralized, form of digital payment.  

That depends on Bitcoin adoption continuing to rise and the price stabilizing, both still very tenuous suppositions, but the potential is there. A lot has been said about how we all live in a global economy, that the things we use every day come from all over the world. That may be true, but we don't yet live in a DIGITAL global economy; only the rich get to participate in online commerce. Bitcoin may just change that.

Nicolas Pottier
Lagom - Better than Best (2012-11-08)

A recent article by Dustin Curtis made an interesting case for seeking out the best in everything you own, that your quality of life will be higher surrounded by perfection. The interesting bit to me is not so much that thesis, but that it frames that attitude as something new.  Rather, I'd make the case that it is rather typical, especially so in the community of geeks that Dustin belongs to.  We are almost obsessive in our compulsion to seek out the best, to research things ad infinitum and to bask in the satisfaction of knowing that a particular widget is the best made, most thoughtfully designed, and carefully constructed.

The thing is, seeking out "The Best" isn't anything new, it is just Consumerism taken to the extreme.  We love consuming, and we especially love consuming in ways that set us apart, doubly so if we can rationalize that consumption.  Any proud owner of the latest widget will be all too happy to tell you why his widget is better than the others, better than the last and perhaps even the final word in widgets.

In my early twenties I was lucky enough to spend a bit of time in Sweden. I spent only a week there, but in that time I got introduced to the Swedish word "lagom". There is no English equivalent to lagom, but Wikipedia calls it "just the right amount": what you have is good, but not excessively so. Something "lagom" meets your needs and leaves you content, but doesn't try to do so in the extreme. As the Lexin dictionary puts it, "Enough is as good as a feast".

Lagom specifically eschews excess, but strives to be good, not junky or disposable.  If you have visited IKEA, which is of course Swedish, you have probably seen an awful lot of lagom.  Though you've also seen an awful lot of disposable things too, even IKEA has a hard time finding the line that is lagom.

So while "The Best" flatware may be $200 at MOMA for a sitting of four, the lagom choice might be $13 at IKEA. They will both last you a lifetime, but perhaps only the first has the perfect angle to the fork tines as it hits your tongue.  I may never know.

To be very clear, lagom does not mean cheap in quality or price; it is good, but content not to be perfect. In essence it is recognizing the 90/10 rule: the first 90% of anything can be achieved fairly easily, but beyond that it becomes exponentially harder and less efficient to make improvements. Seeking out perfection personally is part of life, but as a matter of consumption it is a wasteful goal, and one that the marketplace is far too willing to indulge.

This particular trait, of cherishing the best, of making sure to be prepared for anything, strikes me as very American. In Kigali, it is usually easy to spot the American tourist: safari pants with lots of pockets, travel wallet and an Osprey pack, all of highest quality, best at what they do. The Europeans tend to be more casual, in jeans with well weathered rucksacks.

So instead of striving for the best in what I purchase, I tend to try to find the lagom instead.  I must say for me personally, that is incredibly hard to do. My natural tendency is very much more to be like Dustin, to research like mad and have a veritable bible of information to back my decision.

But in the end, though we have to consume things, we must not be consumed by them, and to spend days researching the best camera, fork, computer or pocket knife is just that, being consumed by them.  

We only have so much time here, let us spend it creating, sharing, exploring and risking instead.  Let us keep our consumptions lagom, so that instead we can make the rest of our life "The Best".

Nicolas Pottier
Bitcoin's Biggest Problem: It's a Den of Thieves (2012-10-04)

I've had a curious eye on Bitcoin for a few years now. Through various posts on HN I've followed its ups and downs, including the big hoopla over the exchange rate for a Bitcoin reaching parity with the dollar, a figure that is now eclipsed by today's going rate of $12/BTC.

For those that haven't paid similar attention, here's how Wikipedia describes Bitcoin:

Bitcoin is a peer-to-peer electronic currency ... which can be sent, received and managed through various independent websites, PC clients and mobile device software.

At its roots, Bitcoin is based on some rather clever mathematics, ironically the same primitives that keep our credit cards secure when buying things online. But while credit cards are centralized systems, with Visa, Mastercard and friends acting as the managers of who has what, Bitcoin is more clever. It is a truly distributed system: everybody knows everything, with all the Bitcoin clients forming a congress of sorts on what the state of the world's Bitcoin economy actually is.

This has some very neat advantages, the largest being that no one entity controls Bitcoin; instead it is a community all agreeing to use a currency with particular rules, with some clever algorithms keeping everybody honest. No single government, much less corporation, can dictate what happens to Bitcoins.

Another great advantage is that it allows for almost complete anonymity: while your Bitcoin 'wallet address' is public, as are the transactions that go in and out of it, nobody really knows who that address belongs to. Since it is incredibly easy to set up digital Bitcoin wallets, it is easy to launder Bitcoins as well, masking the sources and destinations of transactions.

These two great strengths, no central authority and complete anonymity, unfortunately also create the perfect environment for fraud. Since there is no central authority able to reverse transactions, and since there is no way to track down someone you send Bitcoins to, the Bitcoin community sometimes seems like a den of thieves.

One need only look at the long list of failed attempts to allow the purchase of Bitcoins through PayPal or credit cards to have this point driven home.  Since both credit cards and PayPal allow reversal of charges, it is easy for crooks to buy Bitcoins, have them deposited to their anonymous wallet, then have the charges reversed.

This fact makes acquiring Bitcoins a rather obnoxious affair, with one party or the other having to extend tremendous trust. I recently tried to buy a moderate number of Bitcoins on one of the trading facilitators, Bitcoinary, and ran into this head on. After agreeing on a price, I sent the money for the Bitcoins over PayPal and was immediately faced with a seller getting cold feet, nervous that I would reverse the charges after he sent the Bitcoins over. No amount of convincing worked to assuage his fears: offers to call, online reputation, etc. Despite offering to forgo far more privacy than any consumer ever would, he refused to complete the transaction, and the deal finally fell through.

This is not a good sign for Bitcoin.

Though Bitcoin is a fine digital currency, it is hard to see it taking off for consumers the same way credit cards have. While centralized payment systems like credit cards have their disadvantages, they offer one very large advantage as well: they act as an arbiter in the case of disputes. The buyer and seller both agree that "Papa Visa" knows best, and with that confidence consumers open their wallets.

Bitcoin has none of that. Yes it is decentralized, yes it is amazing to see payments go through so quickly and so cheaply, but what happens when it goes wrong? I can trust tiny merchants with my credit card because I know if something goes wrong I have recourse; there is no such recourse with Bitcoin.

So we are back to the same trust issue I ran into on Bitcoinary, a chicken and egg problem. Customers will be leery to spend with new merchants without a reputation, and new merchants will find it hard to build that reputation without customers.

Note that I did eventually find a seller on Bitcoinary who would trust me, so it isn't completely hopeless, just a lot harder than it should be.

As has been pointed out numerous times, "trust enables people to do business with one another". Trust creates an environment where everybody can stop worrying about being ripped off and get on with business.  A currency that allows complete anonymity yet has no central authority to manage disputes fails this trust test. That single fact is why I don't see Bitcoin taking off long term in its current form.

It might not be completely hopeless. One way to manage situations where neither party trusts the other is to have a third party in between which is trusted by both: escrow. There is no reason Bitcoin couldn't support such a party in its ecosystem, built on Bitcoin, but adding trust to the equation. With that will come overhead however, and I'd be surprised if we didn't find ourselves back to the 3% transaction fees so common in the credit card world.

In short, while Bitcoin is irresistible crack to software geeks for its sheer cleverness, I just don't see it having the right attributes to break out into the larger world of consumer spending. Either way it will be an amazing thing to watch grow and evolve. Who knows, maybe, just maybe, I'll someday be able to find someone willing to give me dollars back for my Bitcoins without losing my shirt in the process.

Nicolas Pottier
How Udacity's Greatest Effect will be in the Developing World (2012-02-25)

As those of you who follow such things will tell you, the world of online teaching has been heating up lately when it comes to computer science.  New courses are coming online at a greater and greater pace, and their quality is improving by leaps and bounds.

One of the pioneers here was MIT with their OpenCourseWare. Their Introduction to Computer Science course not only included all the class materials, but also video lectures, allowing anybody with a computer and internet connection to take the same course taught in the #1 Computer Science school in the world. But it is a tough course, moving very quickly, and the videos are just of the lectures, impersonal, not terribly engaging.

Khan Academy, in some ways the granddaddy of alternative online learning and pioneer of the 'first person doodle' style of teaching, started their Python lessons this summer. This provided a very different approach to computer science: much less formal, much more personal, and you feel a certain connection to the teacher. But it still required you to download and install a Python interpreter and editor, a small barrier, but a barrier nonetheless.

Meanwhile, a little revolution was taking place with Codecademy, which took the wholly different approach of teaching by doing, immediately throwing students into a programming environment built right into the web page. Codecademy's courses provide no videos and no lecture notes; rather, they are programs themselves, guiding you through principles by having you code. This was something entirely new, though some credit belongs to Stanford's CS101 course, which also included web page programming, albeit in not as sophisticated a fashion.

This brings us to Udacity, which takes all the best parts of the above approaches and marries them into an incredible teaching tool. Udacity takes the personal, approachable first person teaching style of Khan Academy and backs it up with interactive programming in Python, all right in the browser.

The teachers are ex-Stanford professors, so they have decades of experience teaching this material, which really shows in how they present it. So far in the first week of class, they have done a great job of covering fundamentals without getting bogged down in details, getting students to start learning intuitively, by doing, while still giving them the founding blocks to know why things work the way they do.

Perhaps most importantly, Udacity has structured their CS101 course around a brilliant concept, building a search engine in eight weeks. That single act makes the course not about learning, but about doing. The class never has to answer the question 'why are we doing this?', because each topic is directly tied to the overall goal of building your own little Google, every piece is practical.

To me, this marks the first time where online learning not only matches, but actually exceeds, the classroom equivalent. Each student can work at their own pace, and if the material continues to be as well thought out, each student will succeed.

And that's available to anyone, anywhere in the world, for free. Absolutely Incredible (tm).

To me, having lived in Rwanda for a couple years now and having my reservations about the quality of the CS curriculums in the region, Udacity is a revolution.

Suddenly, the very best education is available to everyone. Suddenly it doesn't matter if you live in America or Rwanda, the opportunity is yours. And that's why I think the greatest effect of Udacity will be felt not in America, not in Europe, but in developing countries like Rwanda.  Because the improvement in quality over what is offered here is astronomical.

I fully expect that everybody who finishes the eight week Udacity course will be better prepared than those who finish four year university programs in Rwanda.  And that's not unique to Rwanda.  Every developing country suddenly got a world class computer science school donated to them.

The effect that will have remains to be seen, but I think this is the start of something much, much bigger. The printing press brought us affordable books, driving a renaissance in learning across the world.  The internet has done the same, bringing information access and instant communication to virtually every corner of the globe.

Looking fifty years into the future, I think it is clear we are on the cusp of yet another revolution, in learning, where not only will incredible educational programs be available to all, but that those programs will be superior to every teaching tool that came before them. The consequences of that, of having an entire world educated at that level, is beyond imagination.

I often marvel at the luck I've had to live during this era: to be present for the birth of the internet and of programming languages, and to see them grow and evolve. But I have a feeling we haven't seen anything yet.

Nicolas Pottier
Customizing Twitter Bootstrap's Nav Bar Color in 2.0 (2012-01-27)

Twitter Bootstrap is the bee's knees.  At Nyaruka we've used it for virtually every single project since it arrived on the scene.  And we aren't the only ones.  Sites across the web have picked it up as a great tool to get functional, good looking designs up quickly.

But there's one downside to the popularity of Bootstrap, it is all too easy to have your site look like every other Bootstrap site.  To my eyes, the biggest giveaway that someone is using Bootstrap is the black nav bar.  It seems like the majority of sites use the default black nav bar, and that screams lazy.

We almost always customize the top bar color for our sites, and though they are still obviously Bootstrap to a trained eye, they add a bit of personality.  Whether it's a little green for our work with TechnoServe on their Coffee Transparency site, or showing off our company colors for our Ministry of Health feedback site, a little goes a long way.

Of course there's a reason not that many people change it. That black nav bar isn't so bad looking, and Bootstrap doesn't make it particularly easy to change. It can be especially annoying if you are using menus; tracking down all the variables needed in that fancy CSS takes some patience.

Thankfully, the upcoming 2.0 version (due out January 31st) introduces a couple variables to make things easier, though you'll still have to do a bit of hackery to deal with drop down menus.

Bootstrap is built using Less, so the first step is to get set up with that environment; Twitter gives some nice tips on getting it up and running on different platforms in their documentation.

In 2.0, Twitter has defined two new variables in the patterns.css file, @navBarBgStart and @navBarBgEnd.  These, quite naturally, define what the top and bottom colors are for the nav bar gradient.  Though you can set these to whatever you'd like, if you want to keep things simple I recommend using dark colors, so that the default light text colors work without modifications.

Ok, so let's try this out with a nice royal purple:

@navBarBgStart: #3D368B;
@navBarBgEnd: #232051;

Once we recompile the CSS file using Less, we get something majestic indeed:

Well, that was easy, right? Not so fast: it turns out that if we want to have drop-down menus, there's more to it. Let's check out what that drop-down looks like:

Shoot, well that isn't right.  It turns out Bootstrap 2.0 doesn't make changing those colors any easier, so that's something we have to dig a bit deeper to change.

We'll need to change both the background color for the dropdown, and the gradient used to draw the selection highlight.

The first can be set by changing the background-color in the .dropdown-menu style, somewhere around line 180 in patterns.css. By default this is set to #333. I recommend setting this to a value somewhere between the two gradient values you set for your top bar:

.dropdown-menu {
  background-color: #2F2A6C; // defaults to #333
  // ... rest of the rule unchanged
}

The second change we have to make is how the active item is highlighted.  This is another gradient, and is under the styling for li a, somewhere around line 192 in patterns.css:

li a {
  color: #999;
  text-shadow: 0 1px 0 rgba(0,0,0,.5);
  &:hover {
    // gradient used to highlight the hovered/active item
    #gradient > .vertical(@navBarBgEnd, #1B183E);
    color: @white;
  }
}
Once we recompile our CSS file, we get something that looks pretty decent:

There, now we have a good looking nav bar that doesn't scream Twitter Bootstrap, all with just a few lines of code.
Nicolas Pottier
Bringing Minister Mondays to SMS (2011-12-14)

One of the exciting new initiatives happening in Rwanda is Minister Mondays. A few times a month, the Minister of Health, Dr Agnes Binagwaho, goes on Twitter and answers questions from the general public. She answers questions on a wide range of topics, this week's focusing on nutrition. That kind of access to leadership and transparency is to be applauded and is something pretty unique, not just in Rwanda but in the world.

But one thing we thought could be better was that the primary form of interaction was via Twitter. Though it is a great platform for reaching a digital audience, it does leave out quite a few participants in a country where the penetration of PCs (and Twitter) is still low.

As we specialize in building SMS services, we decided to volunteer a bit of time to see if we could bridge that divide.  Last week we met with the Minister and discussed what could be done.  Happily we found that the Minister had the same goals as we did, trying to reach a wider audience, with a focus on transparency, on putting those questions and answers on the public record.

A few days later, we had put together Nyaruka Listen, a platform to make that kind of interaction as easy as possible. Since we were on a tight schedule and wanted to make the service available across carriers, we decided to leverage our Android relayer to deliver messages to our service. That let us quickly use a regular SIM card to relay messages from the MTN network and back.

The final service is simple. Incoming messages to the Minister Monday phone number are ingested by Listen, and the sender receives an SMS confirming that the message was received. Meanwhile, the Minister can see all incoming messages on a web dashboard and can instantly reply using a simple interface. That reply is sent back to the original sender's phone and made public for the world to see.
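
Listen's internals aren't published here, but the ingest-and-acknowledge pattern described above is simple enough to sketch. Below is a rough illustration in Python using Flask; the /incoming endpoint and field names are hypothetical rather than Listen's actual API, and a real deployment would store messages in a database rather than a list.

# Rough sketch of the ingest-and-acknowledge pattern, not Listen's actual code.
# The endpoint path and form field names are hypothetical.
from flask import Flask, request

app = Flask(__name__)
inbox = []  # a real service would use a database read by the web dashboard

@app.route("/incoming", methods=["POST"])
def incoming():
    # Each incoming SMS is assumed to arrive here as a simple POST from the relayer.
    message = {"sender": request.form.get("sender"), "text": request.form.get("text")}
    inbox.append(message)  # surfaced on the dashboard for the Minister to answer
    # The returned text is sent back to the sender as a confirmation SMS.
    return "Thank you, your question has been received."

if __name__ == "__main__":
    app.run(port=8000)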

Listen is a simple product, but one we think could have real potential for other organizations. The penetration of mobile phones in Rwanda and other East African countries is high and only getting higher, and SMS is a cheap and reliable technology that people use on a daily basis.

Using Listen, an organization can set up a hotline where their customers or beneficiaries can send in comments and concerns via SMS.  Each message is viewable by everyone in the organization and can be dealt with appropriately.  Forward looking organizations can even follow Dr Agnes's model and be transparent, publishing the responses for the public to see.

Of course this kind of system might seem like it would be expensive, but it need not be. We believe so strongly that this would be beneficial to other organizations that we're willing to provide Listen to any organization for a really compelling price. You too could give your community a voice and the power of easily interacting with your organization via SMS.

If you are interested in exploring that further make sure to contact us to find out more.  But either way, we hope to see you interacting with the Minister during the next Minister Monday session in early January.

Nicolas Pottier
Rwanda's new Short Code Registration Procedures (2011-09-22)

A few weeks ago, RURA, Rwanda's Utilities Regulatory Agency, announced the new procedures for acquiring short codes in the country. Short codes are just that: short (four digit, for our purposes) phone numbers which allow companies and organizations to build services that are more easily accessible to the general public. Instead of having to dial a 10 digit number, a user can use a shorter four digit number.

Short codes also serve the purpose of allowing services to be deployed across carriers. To build a reliable SMS service, such as the ones Nyaruka builds, you need to integrate with the carrier's SMSC. Each carrier has its own SMSC, and that's where SMSes get routed on their network, or cross over to another network. If you want to build an SMS service that works well on all networks, you need to integrate with each of these, and you need a shortcode to make it easy for the customer to access them all.

Old Procedures

In Rwanda thus far, acquiring short codes has been a pretty easy affair. You just put in a request to RURA for a new shortcode, along with your intended use, and a week or two later they reserve a new four digit number for you. This worked, but posed some problems, both for us as customers of RURA and for RURA themselves.

For RURA, because short codes cost nothing, there wasn't much incentive for companies to use them wisely, or even to wait until they had a business plan before acquiring them. Worse yet, with no maintenance fee, short codes would often lie stagnant: unused, but still reserved and unavailable. That's a real problem when there are only 10,000 short codes available.

For companies, RURA didn't have any procedures for us to pick our own short codes.  Instead, you would include seven or eight different short codes you'd be happy with in your request, but more often than not you were just assigned a random shortcode.  Not a great outcome if you are trying to build a consumer facing service.

New Procedures

The new procedures that RURA announced should go a long way towards solving both of these issues. But before getting into that, I must first commend RURA for making both the presentation and the application form easily available on their site; this is great to see and shows a real willingness to operate openly. Even more importantly, I was also really impressed with how they presented the new plan, inviting all the stakeholders to a meeting and allowing for lots of feedback and discussion. Very well done.

The main change, as expected, is that short codes will no longer be free, but will instead require a yearly maintenance fee. This is a good change, as it now rightly discourages squatters from taking over short codes. In addition, it means that you will be able to request a specific short code and, if it is free, receive it.

The new fee structure is a 25,000 RWF application fee, then either a $200 USD or $1,000 USD yearly maintenance fee for the short code, depending on what category it falls in. The $200 short codes are reserved for what RURA calls "Customer Information Services", and here I think they will run into trouble, as it is not immediately clear what falls under that category.

Although not in the slides, further discussion with RURA at the meeting leads us to believe that RURA is reserving the $200 fee for short codes which are less desirable and used for non-profit purposes. But again, this is a hard thing to define; there is no bright line rule that works well, especially as more and more SMS services are trying to support their programs by deploying premium or normal rated SMS services.

Our Comments

Although we are fine with the plan as presented, I think a better option would have been to define different classes of short codes based on their desirability, and charge differently based on those, regardless of purpose. For example, RURA could have said that any short code made up of a single repeated digit, say "1111", would be $2,000 a year; those made up of consecutive digits or a repeated pair, say "1234" or "2020", would be $1,000 a year; those using only two distinct digits, say "1112", would be $500; and all others would be $200.
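
To make the proposal concrete, here is a small sketch in Python of that desirability-based pricing. The tiers and dollar amounts are the illustrative ones suggested above, our suggestion rather than RURA's published fee schedule:

# Sketch of the desirability-based pricing suggested above. The tiers are
# illustrative only; this is our proposal, not RURA's fee schedule.
def yearly_fee_usd(code):
    digits = set(code)
    consecutive = all(int(b) - int(a) == 1 for a, b in zip(code, code[1:]))
    repeated_pair = len(code) == 4 and code[:2] == code[2:]
    if len(digits) == 1:              # e.g. "1111"
        return 2000
    if consecutive or repeated_pair:  # e.g. "1234" or "2020"
        return 1000
    if len(digits) == 2:              # e.g. "1112"
        return 500
    return 200                        # everything else

for code in ("1111", "1234", "2020", "1112", "3857"):
    print(code, yearly_fee_usd(code))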

That kind of fee structure would allow small businesses to still acquire short codes at reasonable prices when first starting up, and later graduate to a nicer shortcode.  It also removes the very subjective measure of what is a "Customer Information Service" and instead makes the pricing completely transparent, which is always a good thing.

But even if it isn't quite perfect in our view, the new procedure is much better than before, and having a yearly maintenance fee should go a long way towards freeing up desirable short codes which aren't in active use.

Online Registration

One of the things that was both good and disheartening to hear was that RURA was also planning to launch an online registration system for short codes.  This is great news, as it is one thing that really doesn't work well currently.  In order to be able to pick a short code, you really want an up to date website which lists all the short codes already taken, letting you weigh the pros and cons of those still available.  So having an online database and registration form is a huge step towards making the process both streamlined and more useful for customers.

The disappointment comes in that Nyaruka built such a system specifically for RURA and presented it to them back in May of this year.  We had experienced a lot of the pains in the old paper procedure for short code registration, so we decided to build something better.  Emile spent about a month thinking through the problem and building the site, and we presented it to RURA, offering it for free save for any further customizations and a small hosting fee.

RURA seems to have decided to go their own way on this, as is certainly their right, but it is a bit disheartening to see.  We haven't seen the system that RURA itself will deploy, but I hope it does a better job than ours, and I hope it launches before the November 30th deadline for applications for existing shortcodes.  

Speaking of which, here's the demo video we put together for that system.  Built, designed, and coded in Rwanda by a Rwandan.  Even if it never gets used, I'm proud of the work Emile did in thinking through the problem and implementing it.

Nicolas Pottier
Learning to Swim by Reading a Book - The State of CS Education in Rwanda (2011-09-01)

We recently decided to hire an intern at Nyaruka.  We thought it might be a nice way to give back to the community, to help a student along for a few months, give them a taste of working at a software company.  The internship periods were starting soon, so we quickly put together an application form and started advertising for applicants.

The response was excellent. In less than a week, we got over 50 applicants from various universities. Our application was mostly focused on coding questions: very simple things like writing a function to return whether a number is odd or even, and simple string manipulations. We are strong believers that coding ability comes first, far before degrees or certificates, so our focus was there.

The applications we got back were varied. Some obviously cheated on their answers, clearly copy/pasting from somewhere on the internet despite our warnings not to use external references. Others gave it their best shot, honestly trying but getting few answers correct. But about eight applicants looked promising, and from those we picked the four who we thought did the best and asked them to come in for a follow-up interview.

And here we learned just how badly the Rwandan universities are failing them.

Our interview question for the applicants was a simple one: given a string such as "AABAB", return the string with all consecutive duplicate letters removed, ergo: "ABAB".  This is the type of problem one might be given as a first assignment in Computer Science, something that uses only the most basic fundamentals and constructs.  But our applicants struggled with it, far more than is reasonable.
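
For reference, a working answer only takes a few lines. Here is one minimal solution in Python, written by us purely as an illustration of how little the question demands:

# One minimal solution to the interview question, for reference only.
def collapse_duplicates(text):
    """Collapse runs of the same letter: "AABAB" becomes "ABAB"."""
    result = []
    for ch in text:
        if not result or result[-1] != ch:  # keep a letter only if it differs from the last one kept
            result.append(ch)
    return "".join(result)

print(collapse_duplicates("AABAB"))  # prints ABAB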

Now keep in mind that although this is an internship, all these students were three, four, or even five years into their Computer Science programs. By now they should have a reasonable grasp of programming, debugging, and so on. If asked, they will tell you their curriculum includes the same titles that any Western CS program might have: Operating Systems, Algorithms, and the rest. But despite the impressive names, it is clear the universities aren't teaching their students much at all.

This angers me.

It makes me mad at the professors teaching courses they aren't qualified to teach and mad at the universities charging students in both time and money and yet not fulfilling their promise.

The problem stems from their methods of teaching Computer Science. Universities here are still teaching their students using pen and paper, teaching from books instead of computer screens, teaching by lecturing instead of by doing. That simply does not work for this field: you cannot learn how to program by reading a book.

Computer Science isn't unique in this respect, but sometimes this is hard for people who aren't coders to understand, so an analogy might help.

Imagine you take a course to learn how to swim.  Not just a short course, but a dedicated university program on swimming.  Over the course of years, you attend classes where you learn the names and characteristics of various bodies of water you can swim in.  You listen to lectures about buoyancy, take exams on rip tides, diagram the various swimming strokes.  Your professors, while unable to swim themselves, assure you that they have also completed this certification and have read many books on the subject.  So after years of study, years of applying yourself according to the plans of the university and hours upon hours in the classroom, you graduate with your certificate.

To celebrate, you go to the beach, where you quickly drown.  Because, in truth, despite your years of studying, you have no idea how to swim.  You've never been in the water.

This is the state of Computer Science in Rwanda (and really the region in general).  Despite impressive graduation numbers, despite the glowing press, despite the undisputed progress, there is a shocking failure to actually teach any of the required skills to be a programmer.

This must stop.  This must change if Rwanda is at all serious about trying to build an IT sector.

The universities must take a hard look at their professors and their curriculums and redesign them to be more effective. There isn't a huge pool of qualified professors to draw upon, but that doesn't make it hopeless. Both MIT and Stanford provide many of their class materials online: not only the lecture notes and assignments, but videos of the lectures themselves. Why not build courses around these? I would guess that small groups of students assigned to work through these courses together would learn far more than they are now.

To give you an idea, this single MIT course, Introduction to Computer Science and Programming, would provide an understanding of CS fundamentals that far exceeds what is taught in the four year programs at the Rwandan universities. A single course! Why can't KIST professors help students through such a course, playing the video lectures and working through the assignments with the students? Why isn't this a model that is adopted?

My advice to anybody in Rwanda who wants to learn Computer Science is this: Don't attend University.

Instead, invest the money in a laptop and a modem.  Download and start working your way through the MIT course, find friends to do it with, help each other along.  Do the assignments, write the code.  It will be hard at times, you will struggle, but it is the only way to learn.  When you do complete that course, which should take less than a year, you will have skills that far exceed any of your colleagues who attended University.  And with those skills you'll be able to find work doing real programming.  As a matter of fact, come to us and show us you did the work, and we'll help you along, certificate or not.

To the Rwandan Government, I plead with you to get serious and start talking to people already here about how you can do better. Some programs already exist which are successfully training students, PIH's Medical Informatics Course being the prime example. But their scale is far too small, their focus too specific. Start finding ways of launching similar programs or scaling up the successful ones. Look to places like Nairobi's iHub as inspiration for how to build the space and community which can help foster an IT sector. What if Kigali had an iHub where students could come and work through the MIT course together? Imagine the possibilities if you encouraged a community of students and professionals alike to collaborate in learning.

None of this is expensive, and none of this is hard, but the current institutions are failing their students and, in turn, their country's goal of building an IT sector. The first step is admitting that what is being done is not working, that the current graduates are not qualified, that the education system is failing them.

Once we can admit that, then we can all work together to do a better job.  

We're here because we believe in that dream and because we want to help, and so do many others. Let's work together to make it happen.

Nicolas Pottier
2D Barcode Error Correction (2011-06-29)


We're still playing with 2D barcodes and testing how well they work with various Android devices. One question that came up from our last test was how well they deal with errors in the barcode: that is, dirt, rips, or parts missing entirely.

The answer is pretty darn well. Here's a barcode that still reads despite my scribbling all over it:

But there are vulnerabilities. QR codes, specifically, are very sensitive to having one of their position detection patterns destroyed. This QR code, while in much better shape than the one above, refused to read, just because the top left block was filled in.

Note of course that all these results are in the context of using the Barcode Scanner application on an Android phone.  There are almost certainly weaknesses in the scanning algorithm, but that's the state of the art for Java based QR Code reading, so it is what is relevant for us.

Also, we tried the same tests with DM codes, with similar results. Newer DM codes support the very same Reed-Solomon error correction algorithm, and just like QR codes allow for up to 30% of the data to be corrupted before failing entirely. Again though, if you start messing with the position markers for DM codes, which are the left and right edges, you run into trouble more quickly.

Nicolas Pottier
Android Ideos and QR-Codes, practical limitations (2011-06-24)

We're huge fans of the Huawei Ideos phones. They are ridiculously cheap, $100 unlocked, and can be bought over the counter at Safaricom in Kenya or for about $120 from NewEgg in the States. That is a crazy price point, totally blowing away any other Android phone and really making J2ME handsets mostly irrelevant as contenders for the kind of ICT4D projects Nyaruka undertakes.

Recently we've been working with a client and have been considering using these as a way of managing and tracking inventory. After all, the cheapest barcode scanners are around the same price, and with those you don't get a built-in touch screen, GSM connectivity, GPS, or the ability to build the entire application on the device. That makes the Ideos mighty tempting indeed.

One thing we were curious about though is just how well they would perform with high resolution 2D barcodes.  In our case, we'd like to have the option to encode quite a bit of data in the barcodes, so we were curious how the inexpensive Ideos optics would fare.

Methodology

We created and printed 2D bar codes encoding 23, 50, 100, 200, 500 and 1000 characters, then tested them at different sizes.  The codes were all generated at Invx.com, which seemed to do a decent job, even for our monster 1000 character barcode.

We printed each using a laser printer and scaled them from roughly 4" to 2" and finally 1" square. This was under normal lighting conditions, using the most excellent "Barcode Scanner" app in the Android app store, which uses the open source ZXing library. We also tried things out with a Nexus S and a Sidekick 4G.
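
We generated our codes at Invx.com, but if you want to reproduce a similar test set yourself, the open source "qrcode" Python package (with Pillow for image output) works too. A rough sketch, where the payload lengths and error correction level are just assumptions to tune:

# Rough sketch for generating a comparable set of test QR codes with the
# open source "qrcode" Python package (pip install qrcode pillow). We used
# Invx.com for the codes in this post; this is only an alternative.
import qrcode
from qrcode.constants import ERROR_CORRECT_M

for length in (23, 50, 100, 200, 500, 1000):
    qr = qrcode.QRCode(error_correction=ERROR_CORRECT_M, box_size=10, border=4)
    qr.add_data("x" * length)  # dummy payload of the desired length
    qr.make(fit=True)          # pick the smallest symbol version that fits
    qr.make_image().save("qr_%d_chars.png" % length)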

Camera Specs

The big knock against the Ideos is that it has a fixed focus camera, despite being 3.15 megapixels. What that means in this application is that you always need to hold the Ideos at a fixed distance from the code in order to get a sharp picture, so you start running into the limitations of the sensor resolution for the higher data QR codes.

The Sidekick 4G and Nexus S, on the other hand, have autofocus cameras, which help greatly. The Sidekick also has a 3 megapixel camera, but it is able to focus, so it fares much better. The Nexus S has a 5 megapixel camera with autofocus, so it should be the best yet.

Scoring

For each phone and barcode, we ran a series of tests and ranked the performance as Fail, Ok, Good or Excellent. Fail means that we either couldn't get the phone to read the barcode at all, or it was very, very difficult. Ok means that it took a good bit of searching to read the barcode, but it did so consistently. Good means the barcode was easily read when well aligned, and Excellent means the barcode was instantly read even when misaligned or at an oblique angle. Excellent readings have that magical quality to them where you just wave the phone in the general direction of the code and it picks it up.

Results

So what did we find?

Not surprisingly, auto focus made a huge difference.  Since the Ideos has a set focus point, you essentially always have to hold it roughly the same distance from the code.  That is reasonably easy to do and you get a feel for it, but it means that if the QR code is small, then the sensor won't be able to make out the details.  Here's a quick example that makes it clear:

That's the Ideos quite easily reading the 100 character 2" barcode and struggling on the 1" version. You can see that the 2" code fills more of the frame so reads easily, but the smaller 1" code doesn't. We can't move closer to the 1" code to make it fill the screen, as it then becomes out of focus.

One interesting learning there is that you can work around that resolution limit if you have the freedom to make larger QR codes. The optimum size for a code for an Ideos is probably around 3" square, which would fill the frame at the exact distance where it is focused. We didn't try this out, but I bet you could get a 200 character code reading excellently.

The benefit of the autofocus is obvious when you compare the Ideos results to the Sidekick, which has a similar resolution, though much better optics and autofocus.  The Sidekick has no problem reading even 500 character barcodes at 1".

For all practical purposes the Nexus doesn't do any better, which probably indicates that we are reaching the limits of the optics rather than the limits of the sensor sizes.

Practical Limits

So what are the practical limits?  On an Ideos, I would stick to ~100 characters or less on a 2" bar code, and ~50 characters or less on a 1" barcode.  At those sizes things work brilliantly, instant recognition at virtually any angle.

On an autofocusing device, you have a lot more flexibility.  Even at 1" you can easily decode codes with up to 500 characters.  For QR codes that seems to be the practical limit, though if you are willing to use a DataMatrix code you might be able to squeeze a bit more in.

QR Codes vs DataMatrix Codes

The two biggest 2D barcode standards for this kind of application are QR Codes and DataMatrix codes. DataMatrix is actually the older standard, but because it left out support for Kanji characters, QR Codes were created in Japan and have been taking over.

There are some advantages and disadvantages to each.  QR Codes are more tolerant to being read at oblique or rotated angles.  The guide blocks in the corners seem to help tremendously here and I definitely noticed faster and more consistent reading for QR codes.

However, those blocks come at the cost of data density.  Here are the two 1000 character codes used in these tests, the QR code at left and the DM code at right.  You can see that the DM code requires less resolution to read, so by and large the DM codes tended to fail last, at least when good alignment was used.

Error Correction

One interesting feature of QR codes is that they are a bit more tolerant of errors. You can cover up the bottom right corner of a QR code and it will usually still read; the DM codes are more sensitive here. This might be an overstated benefit though, as I didn't have as much luck covering up the top right, top left, or bottom left corners of QR codes, where the guide blocks are, so the practical cases where the error correction would come in seem minimal.

Error correction is something you can tune in QR codes; some generators let you define more redundancy in the generated code. However, this will presumably come with lower data density, so it is probably very specific to your application.

Troublesome QR Code

Strangely, the lowest resolution QR code generated, the 23 character one, was actually troublesome to read in many cases. Looking at it closely, it looks like it contains one fewer control block than the 50 character code. You can see that below, the 23 character code on the left vs the 50 character code on the right.

This might just be a limitation or bug in the scanning software itself, but it does point out that real world testing is still really important.  The DM code didn't display any such issues.

Full Results

Finally, here are the full results if you want to see the nitty gritty details:

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86044 2011-06-15T13:03:07Z 2013-10-08T15:39:51Z Some quick thoughts on Pivot25

The Nyaruka team has been at Pivot25 for the past few days, presenting Bizito as one of the finalists. It has been a fun time, and it is great to see what the technology scene is like here in Kenya. There is no doubt that Nairobi is much farther along than Kigali, but I view that as encouragement as to what is yet to come for us in Rwanda.

But one thing that is the same is the constant ask for access to capital by various companies. A few presentations over the past few days have asked for six figures in funding, which gives me serious pause. It seems like there is a belief that funding is the goal as opposed to a necessary evil. The truth is just the opposite: as software engineers, we have the unique advantage that we can build systems that earn us money without doing physical work. Add to that that all we really need is an internet connection, some ramen noodles and an ample supply of coffee, and the funding asks really make me scratch my head.

One interesting exercise is to look at YCombinator as a comparison. YCombinator is arguably the leading seed funder in Silicon Valley today. Companies compete, much as we have here at Pivot25, to be one of the companies taken under their wing. And you know how much they give? $20,000. And that is for only the most promising, most experienced, most qualified companies in the world. Their graduates include the likes of DropBox and Loopt, among countless other visionaries.

So here is my advice to some of the less experienced teams: forget about the funding and go build your idea, after work if you need to, using a loan from your friends and family if necessary. If you don't believe in your idea enough to suffer through eating ramen, or to sell your family on it, then maybe you don't believe in it as much as you say you do.

My point is that the only expensive thing about starting a software startup is paying engineers, and if you are one, then you should be able to do it without raising money, and you'll be all the better for it.]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86045 2011-05-04T11:14:00Z 2020-12-14T20:37:06Z Android's Achilles' Heel - The Sim Toolkit

Quite a few people, myself included, believe that Android is going to become absolutely huge in Africa.  Here in Rwanda, smartphones are becoming more and more prevalent among upper tier consumers.  Though the Blackberry is the only smartphone sold by local carriers, it is not uncommon to see unlocked iPhones as well, no less a status symbol here than in the rest of the world.

Of course, the beauty of Android is that at any moment one of the carriers here could start selling them; unlike the iPhone, Android is open for anybody to sell on any network.  As a matter of fact, I have it on good authority that the biggest carrier here, MTN, will start carrying some Android handsets soon.  And as prices continue to drop, I'm sure they will become just as popular here as they have in the West.

That is, if it weren't for one giant gotcha: Android's terrible support for the Sim Toolkit.

Now if you live in the States, you might not even know what the STK is, so a bit of explaining is in order.  Put simply, the STK allows carriers to load a simple set of menus and 'applications' on your SIM card.  Again, on your fancy iPhone, you may question the need for or purpose of such a thing, but that's because you are still years behind and using a credit card.  Here, where credit cards are virtually unknown, the present and future of payments is Mobile Money, which is almost always delivered via... you guessed it, the STK.

See, the STK works on virtually any device, from a $5 Alcatel to a $200 Nokia; these phones all implement the GSM standard and therefore allow anybody, rich or poor, to access services like Mobile Money.

Except if you have an Android handset.

Earlier versions of Android, up to 1.6, actually included a rather rough but functional Sim Toolkit application, but at some point it was dropped, and even Google's own latest and greatest ROMs for its Nexus One and Nexus S handsets lack it.  As a matter of fact, I don't know of any devices running Android 2.3 that include it.

Thankfully, CyanogenMod has been forward porting the Sim Toolkit for a while now, and sure enough, Cyanogen 7.0 still has it.  But it is a buggy and unpleasant experience.  I tried to activate MTN Mobile Money today using my Nexus One, and halfway through the process I just gave up and used my $5 backup phone instead.  I can access most menus using Cyanogen, but only by force quitting the Sim Toolkit after every request.

I'm not the only one commenting on this; the web is full of people screaming that their fancy new $500 smartphone is too snobby, too highfalutin, to play with the rest of the world.

So here's a clue Google:

If you want Android to be relevant anywhere apart from the West, then start thinking about how we live day to day.  Build a browser that compresses pages over the wire before sending them down (oh hai Opera!), give us finer control over when background data is used, give us USSD API support, and for God's sake, implement a 20 year old standard so we can use the services that make our lives more convenient than yours.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86046 2011-04-27T16:16:00Z 2013-10-08T15:39:51Z On MobileActive's SMS Delivery Results in Egypt

A few days ago, MobileActive posted the results of some interesting work they did in Egypt, trying to quantify SMS latencies and whether they might indicate that filtering is taking place.  Essentially they wrote an Android application that allowed them to easily measure the SMS latencies between networks.  That application sent a variety of messages, some with 'safe' content and others with 'political' content.  The idea was to help quickly identify whether networks are filtering or blocking messages based on their content.

While I applaud the effort and think it is an interesting project, I feel there are flaws in the methodology significant enough that MobileActive should have resisted even hinting at any conclusions before fixing them.

We do a lot of work with SMS here in Rwanda, we live and breathe the stuff really, so we are pretty familiar with just how mysterious latencies can be.  We've done work in Ethiopia with Android phones and seen instant deliveries right after massive delays, all with just boring old numerical messages.  And we've done a lot of work here in Rwanda talking straight to the carrier's SMSC and seen similar things.  To put it simply, even under the best circumstances, finding rhyme or reason in delivery failures, much less latency, is a really hard task.

And that puts an enormous burden of proof on any experiment which claims to correlate message latencies with message content, and here I feel MobileActive should have known better and resisted any reporting of possible filtering, even tentative and disclaimed.

It is hard to know exactly what the data showed, since the data isn't publicly available, but we do know that the study involved roughly 270 messages across a variety of networks.  One of the hypotheses they give is that Etisalat may be filtering messages based on their content.  From the graphs provided it seems we can guess this is based on roughly 70 results, which fits with their totals.  Here's their graph of those results:

Now it is certainly tempting to look at this and start forming hypotheses that political messages are being filtered, but we have to keep in mind our previous caveats about the reliability of SMS deliveries.  Specifically, especially when measuring something as unpredictable as SMS, we need to start with a valid hypothesis and then work out an experiment that makes sense.  Again, we don't know enough about MobileActive's methodology here to draw any conclusions, but here are some things I'd do:

1) Network latency is often cyclical; the network sometimes just acts up for a bit.  So when testing various messages it is important to both randomize the order in which the messages are sent and do multiple passes of the test at various times of the day.  If we skimp on either of these we may just be seeing the artifacts of a network under load.

2) Are delays consistent across the same message?  From what I can tell this is the biggest flaw in the tests as a whole.  If every time I send a message saying "revolt now" it takes 20 seconds to deliver, then perhaps we have a case.  But if it is inconsistent, then we really need to start looking at what else can explain that latency.

3) If the hypothesis is that filtering is taking place, what do we think is the mechanism?  Clearly, any filtering that is automated would be completely lost in the noise of normal SMS delivery times.  Even the most sophisticated algorithm would take mere milliseconds to evaluate whether 160 characters should pass or not; you wouldn't be able to detect it by measuring the latency.  If the hypothesis is that some kind of manual filtering is taking place, such that actual humans are looking at the messages, then we should design our experiment to capture that.  For example, perhaps we can try to overload these mechanical turks by sending a large number of political messages in a short time period.  If the delays increase even further, then that's probably an indication that there is some human intervention taking place.  I find it very, very hard to believe that any carrier has such a system in place that would still result in sub-minute deliveries, but if that is indeed our hypothesis, we could create an experiment to test it.

Perhaps my biggest complaint here is the lack of openness.  In our fast-paced age of Twitter and Facebook and flashy headlines, we need to resist the temptation for sensationalism and be rigorous in our methodologies, especially on topics this important.  Not publishing the raw data that these results were based on just isn't acceptable, and if the excuse is one of not enough time, then the original article should have waited until that part was ready.  And just as you should always be skeptical of any benchmark which you can't run yourself, you should also be skeptical of any test using software you can't examine.  Again, lack of time is not an excuse: if you don't have time to make the process transparent, then you should just delay publishing your results.

MobileActive and others all do important work, but we must remember to maintain our standards, our scientific training, whenever sharing important information such as this.  It is all too easy to be drawn to the headline, to be taken over by the excitement of your results, and most importantly, all too easy to see patterns where none really exist.  We have all fallen victim to this, but it is our job as a community to call each other on it, to remind each other to be rigorous in our conclusions, to be peer reviewed and to never forget about that little thing called statistical significance.

I hope MobileActive stays true to their word and releases their Android client.  No need to clean it up guys, we won't judge you!  Let's all work together to get to the bottom of this most intriguing mystery.  We have lots of experience building Android apps and would be happy to lend a hand, as well as give our results on the Rwandan carriers.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86047 2011-04-20T10:14:00Z 2013-10-08T15:39:52Z Git Recipe: Making a Production Branch After the Fact

There are lots of different theories on the right way to branch on Github. One very neat one is git-flow, but perhaps you haven’t invested in using that for your particular project. In any case, there may come a time when you decide after the fact that you should have created a remote branch.  Here’s how to do it easily in a few steps.

Step 1) Find the last commit you want included.

This one’s easy: just go to your project, look through your commits and decide where things started going awry. You want the hash for that commit (not the tree, not the parent), as that’s where you are going to branch. It should look something like:

ff96b9182a2648bfb65f

Step 2) Create your new branch.

I assume you already have your repo checked out from Github, yes?  Ok, great, then just go create your new branch.

% git branch production ff96b9182a2648bfb65f

Step 3) Add it to Github

Almost there.  Now we just need to push it to Github so others can use it.

% git push origin production

That’s it.  You can check out, merge and do everything else as you would with any Git branch, and others can track your branch straight from Git.
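
If you ever need to script the same thing (say, across a pile of repositories), the three steps map pretty directly onto the GitPython library; a rough sketch, assuming GitPython is installed and using the commit hash from step 1:

import git

repo = git.Repo('.')   # the repo you already have checked out
branch = repo.create_head('production', 'ff96b9182a2648bfb65f')   # step 2: branch at that commit
repo.remotes.origin.push(branch.name)                             # step 3: push it up to Github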

Ahh, git so powerful, yet so obtuse.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86048 2011-03-22T07:41:00Z 2020-12-14T20:38:26Z Adding a view permission to Django models

A common request in Django is to use Django's permission model to control who can view particular object types.  This is pretty natural, especially as Django already provides default 'add', 'change' and 'delete' permissions on all models.

There are various answers to that question on the web, but none of them seemed particularly elegant.  After some more hunting around, I found that adding the view permission was most naturally done in a post_syncdb hook.  Whenever you issue a 'syncdb' command, all content types can be checked to see if they have a 'view' permission, and one can be created if not.

Just grab this gist and throw it in the __init__.py of a management/ directory in one of your apps.  It needs to live under a management directory or it won't be discovered by the syncdb hook.

Once there, it should automatically create a view permission for all your objects upon your next syncdb.
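
If you just want the general shape of it, the hook boils down to something like this (a rough sketch of the idea, not necessarily the exact contents of the gist): after syncdb runs, walk every content type and backfill the missing permission.

from django.contrib.auth import models as auth_models
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_syncdb


def add_view_permissions(sender, **kwargs):
    # make sure every content type has a matching 'view' permission
    for content_type in ContentType.objects.all():
        codename = 'view_%s' % content_type.model
        Permission.objects.get_or_create(
            content_type=content_type,
            codename=codename,
            defaults=dict(name='Can view %s' % content_type.name))


# fire after the auth app has synced so the permission tables exist
post_syncdb.connect(add_view_permissions, sender=auth_models)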

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86049 2011-02-15T06:43:00Z 2020-12-14T20:38:32Z How Github destroyed two years of work with a new favicon

Let me first get one thing straight.

I love Github.

I love how they allow me to collaborate with others, I love their simple functional interface, I love their 404 Page and I love, love, the cute Octocat.  They have done source code hosting right, so right that I find myself annoyed when a library I want to use isn't hosted there, because I know that if I want to tweak it a bit, submitting those changes back will be a bit more painful.

In short, I want to marry Github and raise our cute adorable Octocat children into our twilight years.

But this weekend, they drove me a bit bonkers.  I was coding away on some new SMS apps, hooking in my housemate's Kinyarwanda dictionary for on-the-go lookups, and I kept finding myself hunting for my Github tab.

See, Github changed their favicon a few days ago, and my brain still hasn't adjusted.  So every time I wanted to hop back to Github to read some code in a library I was using, or do a quick search, I got stuck.  I sat there staring at my tabs for a few seconds, looking in vain for the little 'git' beacon.

Don't believe me?  Take the test yourself, try to find the git tab here:

Nice work, you found that quick!  

Ok, now what about this one?

Took you a tad longer didn't it?

I use Github daily, a ton, and I still haven't totally adjusted to the new Octocat icon.  It just isn't as distinctive as the old icon and my eye doesn't catch it.  I'm sure I will adapt, of course I will adapt, but in the meantime I'm going to waste a second or two every time I'm looking for that damn 'git' tab.

Which got me thinking, just how much time is going to be gobbled up by this innocuous change?

Getting traffic numbers for a website is an imprecise science.  Quantcast claims that Github gets 1M unique visitors a month, while Compete says something like 500k a day.  For the sake of simplicity, let's just go with 1M, since I'm mostly interested in how long each user will take to adapt.

So let's say each of those visitors is like me, and gets just a tiny bit confused by the new icon.  Let's say that over the course of the few days it takes to adapt, they waste 15 seconds on Github tab hunts.  Using everybody's favorite method of multiplying small things by big things we get:

15 seconds * 1,000,000 visitors = 15 million seconds

15 million seconds / 3,600 seconds per hour =~ 4,166 hours

4,166 hours / (40 hours per week * 50 weeks per year) =~ 2 work years

Holy crap!  Github just ate two years of productivity with a favicon change!  Someone ring the alarms!

Ok, so ya, this is all pretty ridiculous.  All sorts of things steal our time every day, let's not even get into how many person years of productivity Facebook has eaten.  ("one billion years!" pinky on lip)  But what it does illustrate is both how integral a part of our work day Github has become for so many of us, and how once a site becomes that integral, then tiny things can make big differences.

That's a lot of responsibility, and something kind of unique about web apps in general.  We don't get to pick when to upgrade to the new favicon.  It gets pushed out, and we adapt, or if the change is bad enough, we leave.  But we don't get to say, no, this is good enough, I'm not upgrading to the Octocat favicon.

Imagine if tomorrow Google decided that Gmail should be served in Esperanto only, because they've decided that Esperanto is the one true language.  What could we do except complain?  We don't get to choose to keep the old native language Gmail, we are forced forward.  And yes, of course even if such a ridiculous scenario happened, the market would adjust and somebody would figure out a workaround, but think of the productivity lost in the process.

That is the double-edged sword of our growing dependence on web apps: we get instant updates that we love, and we get instant updates that we hate, all without lifting a finger.  That is a compromise I'm comfortable with; the sites I really rely upon understand that power and exercise caution.

I trust Github and Google to make good decisions, I trust them to change things out from under me because more often than not those changes are improvements.  But our vulnerability is complete, we show our soft underbelly to these sites every single day, giving them permission to do as they will, only asking that they be gentle.

So it's a good thing Octocats are so cute, otherwise I might never forgive them for the two years of work they just erased from history.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86050 2011-02-14T08:20:00Z 2020-12-14T20:38:52Z Skype's Crazy Regex Easter Egg

Today I was chatting with a colleague on Skype, and as is often the case, misspelled something in my rush to get something across.

A habit I picked up in the IRC days, days I realize have not ended for some, is to use the Perl syntax for a regex substitution to indicate a correction.  For example, if I had written "hello wrld", I would indicate my correction using "s/wrld/world/".  Crazily enough, this now works in Skype.  It just edits your previous post inline!

Some experimentation shows that it isn't a full regular expression engine, it will only do straight word substitution.  It also assumes the 'g' or 'global' flag, so all words that match your string will be replaced, not just the first.
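
In other words it behaves roughly like a plain, global string replacement rather than a real regex; here is a quick Python sketch of that observed behaviour (not Skype's actual implementation, obviously):

def apply_correction(previous_message, correction):
    """Apply an IRC/Skype style 's/old/new/' correction to the previous message."""
    if not correction.startswith('s/'):
        return previous_message
    parts = correction.split('/')
    if len(parts) < 3:
        return previous_message
    old, new = parts[1], parts[2]
    # straight substitution, applied globally -- no real regex support
    return previous_message.replace(old, new)


print(apply_correction('hello wrld wrld', 's/wrld/world/'))   # -> hello world world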

But it is super cool stuff, a brief demonstration:

Notice how I misspelled word as wrd.  No worries, I can edit that post with the regex:

Hats off to Skype on this one, neat stuff.

Update: I forgot that there's also another IRC'ism that Skype supports.  /me.  You can type "/me is bored" and Skype will turn this into "Nic Pottier is bored" in a specially formatted block.  Anybody know any others?

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86051 2011-01-14T14:43:00Z 2020-12-14T20:39:00Z RapidSMS XForms 0.3.2 Released

Although we've been improving our XForms app for RapidSMS regularly, we recently made some further enhancements at UNICEF's request that deserve a new release.  So without further ado, let me present XForms 0.3.2.

The changes made include:

  1. you can now configure which character needs to lead keywords.  By default this is an empty string, but you can set it to a '+' or '-' to ignore all messages which do not include it.
  2. you can now configure which character needs to lead commands.  By default this is the '+' character, but you may change it to '-' or nothing at all.
  3. you can now pick which character (in addition to spaces) can be used to split and group values.  By default fields are separated by spaces when no command arguments are used, but you can change this to be ',', ';', ':' or '*'.  Note that we decided not to support splitting on '.' as it introduced too much ambiguity when trying to parse decimal values.
  4. some significant work has been done to make parsing more tolerant of errors or mistypes.  Specifically, keyword matching now uses edit distance to try to account for typos.  We have also made the parsing of commands and separators a bit more tolerant, though it obviously does not work in all cases.
  5. you can easily add new custom field types, with their own parsing.  This can allow you to create a new field type such as 'age' and have it parse items such as '6mos' or '1y', putting the result in a numeric field representing the number of days.
  6. we've added templating support for the responses.  You can refer to variables that were parsed, by their command, and the templating syntax is the same as Django's, i.e. you can do something like: "Thanks {{ name }}, your report is confirmed."  (There's a quick sketch of this just below the list.)
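
Since the syntax is just Django's template language, the rendering boils down to something like this standalone sketch (the field names here are made up, and the settings.configure() call is only needed when running outside a Django project):

from django.conf import settings
from django.template import Template, Context

if not settings.configured:
    settings.configure()   # only needed outside a full Django project

response = Template('Thanks {{ name }}, your report of {{ weight }}kg is confirmed.')
print(response.render(Context({'name': 'Eugene', 'weight': 4.2})))
# -> Thanks Eugene, your report of 4.2kg is confirmed.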

We've also beefed up our unit tests even further, demonstrating all of the above features, so you should look there for code examples.  The unit tests demonstrate both the creation of custom fields and custom handlers.  You can also check the documentation for further help.

You can find the 0.3.2 release on PyPi or the latest code on GitHub.  We recommend using the PyPi version unless you need something specific from tip.  We had some bugs in our packaging earlier that made that more difficult, but they should be resolved now.  Our thanks go to UNICEF for allowing us to continue working on this fun project.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86052 2010-09-27T12:49:00Z 2013-10-08T15:39:52Z Debugging Port Problems

I’m playing with getting Insight deployed on EC2 and, as with anything like this, there are an awful lot of variables when debugging issues. Specifically, for whatever reason I couldn’t access our Django instance on port 8000 despite having (I thought) opened it up in the EC2 security group. Even more puzzling, Hudson, running on port 8080, was working just fine.

This can be a variety of things. First off, you need to make sure that Django is binding to the right IP address. This can be a bit tricky on EC2, as there are both public and private IPs. But this can easily be solved by binding to all interfaces, 0.0.0.0.

% ./python manage.py runserver 0.0.0.0:8000

Sadly, that didn’t fix things for me.

Next up, can we reach the port from the box itself? An easy way to test that is to log in and then telnet to the port, doing a simple GET for sanity:

% telnet localhost 8000
Trying 184.73.211.105...
Connected to ec2-184-73-211-105.compute-1.amazonaws.com (184.73.211.105).
Escape character is '^]'.
GET /


..
..

Ok, so that was working too. After verifying that the EC2 ports were indeed open and scratching my head further, I decided to use a port scanner, namely nmap, to double check which ports were accessible, both from my machine at work and from one of our other servers. That’s when I finally hit the jackpot.

From one of our outside servers:

[root@hq01-bot nicp]# nmap 184.73.211.105
Starting nmap 3.70 ( http://www.insecure.org/nmap/ ) at 2010-09-27 05:46 PDT
Interesting ports on ec2-184-73-211-105.compute-1.amazonaws.com (184.73.211.105):
(The 1656 ports scanned but not shown below are in state: filtered)
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   open   http
8000/tcp open   http-alt
8080/tcp closed http-proxy

And from my local box:

lagom:~ nicp$ nmap 184.73.211.105
Starting Nmap 5.21 ( http://nmap.org ) at 2010-09-27 14:46 CAT
Nmap scan report for ec2-184-73-211-105.compute-1.amazonaws.com        (184.73.211.105)
Host is up (0.32s latency).
Not shown: 997 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   open   http
8080/tcp closed http-proxy

So, sure enough, my work ISP is for some reason blocking port 8000, but not port 8080. Arr!

Well, at least I know what the problem is now, and that’s half the battle. If that hadn’t worked, my next step would have been to check whether my instance had any firewall enabled (Ubuntu doesn’t out of the box).
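
For next time, a tiny Python check like this one (a rough sketch; the host and ports are just the ones from this post) makes it quick to compare vantage points without installing nmap everywhere:

import socket

host = 'ec2-184-73-211-105.compute-1.amazonaws.com'
for port in (22, 80, 8000, 8080):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.connect((host, port))
        print('%s:%d reachable' % (host, port))
    except (socket.timeout, socket.error) as e:
        print('%s:%d blocked or closed (%s)' % (host, port, e))
    finally:
        s.close()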

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86053 2010-08-12T12:45:00Z 2020-12-14T20:39:11Z Adding a Django app to Python's Cheese Shop (PyPi and setuptools)

We’ve been doing quite a bit of work on RapidSMS as of late, and one of the neat projects we did was to build an interactive form builder for both SMS and simple XForms. You can check out a little video of it in action if you’d like, it’s pretty neat.

Anyways, RapidSMS has finally been moving toward being distributed in packaged form, and a few people have shown interest in using our XForms app, so I thought I’d see what it took to bring it into the Cheese Shop for easy pip install goodness.

So here’s my little recipe for the next time.

Register with PyPi

First things first, you’ll want to go create an account on PyPi.

Directory Structure

Next up, package structure.  There’s a lot of debate on this, but here’s what I think seems reasonably sane and is how our rapidsms-xforms project is organized:

rapidsms-xforms/
     setup.py          ; our main setup file
     MANIFEST.in       ; definition for extra files to include
     README.rst        ; the README.rst we'll show in PyPi

     docs/             ; our Sphinx docs
     rapidsms_xforms/  ; our Django app code and templates

The three files that are interesting to us here are setup.py, MANIFEST.in, and README.rst. We’ll go into those in detail. You’ll notice that the name of our project on github is actually rapidsms-xforms, while the package name itself is rapidsms_xforms. Converting dashes to underscores seems to be the convention.

Setting up setup.py

Ok, let’s look at our setup.py. Here’s ours:

from setuptools import setup, find_packages

setup(
    name='rapidsms-xforms',
    version=__import__('rapidsms_xforms').__version__,
    license="BSD",

    install_requires = [
        "rapidsms",
        "django-uni-form"
    ],

    description='Interactive form builder for both XForms and SMS submissions into RapidSMS',
    long_description=open('README.rst').read(),

    author='Nicolas Pottier, Eric Newcomer',
    author_email='code@nyaruka.com',

    url='http://github.com/nyaruka/rapidsms-xforms',
    download_url='http://github.com/nyaruka/rapidsms-xforms/downloads',

    include_package_data=True,

    packages=['rapidsms_xforms'],

    zip_safe=False,
    classifiers=[
        'Development Status :: 4 - Beta',
        'Environment :: Web Environment',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: BSD License',
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Framework :: Django',
    ]
 )

The interesting bits to pay attention to:

  • we’ve added a __version__ field to our package’s __init__.py, and we pull the PyPi version from that file (there’s a small sketch of that file just after this list).
  • we add our dependencies to install_requires, pip will do the rest.
  • the code we want to include is in the rapidsms_xforms package, we just include that in the packages list.
  • we are going to use the contents of the README.rst file, which is what github shows by default on our repo page, to also be our long description on the PyPi pages.
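
For completeness, that version line in rapidsms_xforms/__init__.py is just a plain string that setup.py imports; a minimal sketch of the idea (the exact contents of the real file may differ):

# rapidsms_xforms/__init__.py
__version__ = '0.1.0'   # bump this before each release; setup.py reads it via __import__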

The rest is pretty much boilerplate, obviously substituting your own package’s information.

Adding non .py files

Now, as is, this will work reasonably well, but we’ll be missing our static and templates subdirectories in our rapidsms_xforms dir. By default setuptools is only concerned with .py files, so we have to specify the rest ourselves. That’s where the MANIFEST.in file comes in:

include README.rst
recursive-include rapidsms_xforms/static *
recursive-include rapidsms_xforms/templates *

Here we specify what other files apart from Python files we want included in our package. We are going to throw in our README.rst, because who doesn’t love those, and then also our static and templates subdirectories.

Testing things out

Ok, let’s test this out by building a source distribution package:

% python setup.py sdist
    .. gobs of output ..
% ls dist
rapidsms-xforms-0.1.0.tar.gz

Hooray, that looks like a success. Before uploading to the Cheese Shop, I’d recommend untarring that file and making sure the contents look like you expect. Better yet, set up a virtual environment, then throw that package in there and test everything out. Repeat as needed until you feel confident it all looks good.

Uploading to PyPi

Alright, time to upload it to the Cheese Shop, here we go:

% python setup.py register sdist upload
running register
running egg_info
  .. gobs of output ..
Submitting dist/rapidsms-xforms-0.1.1.tar.gz to http://pypi.python.org/pypi
Server response (200): OK

If you get a 200 back, then things probably worked, wahoo.

Installing

That’s it. You should now be able to find your package on PyPi. You’ll also be able to install it using pip:

% pip install rapidsms-xforms

Updating

When you need to update, just update your version in your __init__.py, then rerun the upload command, the rest is taken care of for you.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86054 2010-07-29T13:19:00Z 2013-10-08T15:39:52Z Apache2, HTTP Digest Auth and MoinMoin

The MoinMoin wiki has to be where documentation goes to die.  I've never worked with a piece of software that has such crummy docs: everything is half done.  It makes a strong argument for never using a wiki for documentation.

Anyways, here's my recipe to get HTTP Digest authentication working with MoinMoin on Apache2.  It took a while to get it all right, so hopefully this helps someone, since the docs won't.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86029 2010-07-29T10:19:00Z 2013-10-08T15:39:51Z Making MoinMoin, Pygments, RestructuredText and CodeMirror all play together for the ultimate Wiki

I recently had to put together a wiki for the RapidSMS project and one of our goals was to make the content as similar to Sphinx documentation as possible. The idea was that as certain pages on the Wiki matured, they could easily be rolled into the official Sphinx docs, all without changing the actual format.

MoinMoin 1.9.3 actually supports Restructured Text as a markup, the same format as Sphinx, and it also supports Pygments for syntax highlighting, again the same as Sphinx, but the two didn't play well together: although you could create pages using RST, Pygments would not highlight code segments in them.

So I hacked on MoinMoin a bit to make it work. It isn't the prettiest of patches, but they rarely are. Here's a ReST page with some highlighting:

The next problem was that I wanted a slightly better editor for RST. I've used CodeMirror in the past and always thought it struck a nice balance of performance and functionality. Sadly, there wasn't an RST parser written for CodeMirror, so I threw a very rough one together.

More ugly MoinMoin hacking let me insert it as the default editor. MoinMoin really, really needs to adopt a templating language; it is especially ridiculous to have to edit a .py file just to add some JavaScript. In any case, here's the editor:

You can choose to add just the Pygments highlighting to ReST or to also include the CodeMirror editor, up to you. Grab the gist and check out the README for installation instructions. It is slightly laborious, but mostly just copying files around. Hope you find it useful. You can check out the version we have running on the RapidSMS wiki.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86030 2010-07-23T07:43:00Z 2020-12-14T20:39:17Z Offline CHM references for Python, Django, CSS and JQuery

As I've detailed before, our internet here in Rwanda can be a bit flaky.  As such, it is pretty useful having documentation for the libraries we use available when the network is down (or when it is just really slow, which is often).

I used to use the HTML version of the Sphinx docs for Python and Django, but discovered CHM (Compiled HTML Help) the other day and it has even better search capabilities.  On OS X the best CHM client I've found is ArchMoch, though there seem to be some files it doesn't get along with.

Anyways, to save you the time of tracking all those CHM files down, here they are hosted off my DropBox:

Python 2.6.2 CHM Docs

Django 1.2RC1 CHM Docs

JQuery CHM Docs

Note that the JQuery CHM doesn't seem to have an index built for it, so searching isn't quite as great for it.

I haven't found a good CSS reference in CHM format, my ideal would be something like the sitepoint or w3 schools references.  Maybe if I get the time I'll compile one later.  But I did find a nice little cheatsheet for CSS:

CSS CheatSheet PDF

Of course, even having these you won't get the full power of Google's amazing answering powers, but it is something.  If you have any to add, let me know and I'll link them up.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86031 2010-07-20T19:55:00Z 2020-12-14T20:39:46Z Why Rwanda's internet isn't the third fastest in Africa

A few days ago, some articles appeared in the New Times and the Rwandaise, boasting about how Rwanda has the fastest internet speeds in the region, and the third fastest in Africa, beating even South Africa.  The articles used the recent results posted by Ookla as a reference, which do indeed show Rwanda as 89th in the world, with download speeds of 2.36Mb/s.

Now for anyone in Rwanda, this probably seemed a bit surprising.  I have an unhealthy obsession with bandwidth, so I know firsthand that we get nowhere near 2 Mb/s; that much is clear every single day as I go about my work at Nyaruka.

So what explains the discrepancy?  Is Ookla wrong?

The answer lies in how sites like speedtest.net came about and what they are trying to measure.

Ten years ago, DSL and Cable were just starting to make their presence felt in the United States.  There was fierce competition from various providers on pricing and claimed bandwidth, but few ways for consumers to judge how fast their service really was.  To address that need, sites like dslreports.com and speedtest.net came about.  Their goal was to measure the speed of the DSL and Cable connections.

The way these sites work is quite simple.  Speedtest.net gets various internet providers to host a large file on one of their servers.  Then you and I go and try to download the file, timing how long it takes; this is what is happening when you measure your bandwidth using the speedtest client.  Divide the size of the file by how long it took and you have a rough approximation of your bandwidth.

Now in order to accurately measure the speed of only the DSL or Cable portion of the connection, you need to have the shortest path to the test file you download.  Speedtest.net has done a good job of getting these files hosted on various servers across the world.  And one of those places happens to be the Rwandatel offices in Kigali.  This is where the validity of the tests gets into trouble.

A diagram might help explain.

This diagram gives a simple view of what a typical setup might be in America, in this case Atlanta, Georgia, where one of my friends, Joe George, lives (more on him later).  The green arrow represents his connection to his internet service provider (ISP).  In his case, he's using cable, which gets about 10Mb/s.  Since his ISP is also a speedtest.net test site, when he tests his connection, he is testing his cable connection only.  Speedtest.net is NOT testing how fast he can access the rest of the internet, only the connection between his computer and his ISP.

That is valid, because his internet provider has a far far faster connection to the internet.  The purple arrow represents that, the connection between the ISP and the rest of the internet.  In Atlanta, that is fiber, probably multiple fiber optic lines from different providers, incredibly fast and reliable.  So the bottleneck for Joe is almost always going to be his connection to the ISP, the green arrow, not his ISP's connection to the internet (the purple arrow).

Ok, so now let's take a look at the same situation in Rwanda.

Rwanda has a pretty great network internally.  We have a fantastic 3G network from three different providers, at reasonable rates, which makes getting a connection quite accessible.  But what we don't have is a great connection to the internet.

Here, the green arrow represents a typical 3G modem, which tops out at about 2.5Mb/s in practice.  That's fast, and you'll notice that's also about the speed that Ookla says our internet is.  The reason for that is that Rwandatel has a server based in Kigali that is a speedtest.net testing site.  So when you go to speedtest.net and pick Kigali as your testing location, you are actually testing only the green arrow: how fast your connection is to the Rwandatel server in Kigali.

That works fine in the case of Joe, where the ISP's connection to the internet is far, far faster, but it doesn't work in Rwanda.  Our ISP's connection to the internet is still very slow, going over either satellite or microwave, the purple arrows here.  So although we can reach our ISP very quickly, as soon as we try to reach the rest of the world, the rest of the internet, our speeds slow down to a crawl.

This is pretty easy to test in practice: you just need to change which site you are testing against when you go to speedtest.net.  Yesterday evening I did just that using my Rwandatel connection, and here are the results.

Kigali to Kigali: 2.12 Mb/s

Kigali to Kampala: 0.11 Mb/s

Kigali to Atlanta: 0.09 Mb/s

So we see there that yes, our connection to Rwandatel is indeed very fast, but as soon as we try to leave Kigali, it slows down to a crawl.  That's because the connections from Rwanda to the world are still very, very slow despite our great internal network.

We can validate this by going in the opposite direction.  Remember Joe?  Well I asked him to use his crazy fast internet to test against the same sites, here's what he got:

Atlanta to Atlanta: 10.01 Mb/s

Atlanta to Kampala: 1.15 Mb/s

Atlanta to Kigali: 0.10 Mb/s

So here we see that the problem is actually Rwanda's connectivity to the world, not our connection to our providers.  We also get a clue that Kampala, which has some fiber, is actually better off than we are, despite Ookla ranking them lower globally for upload speeds.
 
One last diagram helps there:

Here we see that in Kampala, while their ISPs do not provide quite as quick an internet connection on average (the green arrow), their connection to the rest of the world, the internet at large, is much faster than Rwanda's.  So although Ookla, which measures only the green arrow, says they are slower in some cases, that is not what you see in practice.

The silver lining in all this is that fiber backbones are on the way, and once they are hooked up, and assuming they have sufficient capacity, those claims made by Ookla may actually become true.  Our bottleneck is our providers' connection to the internet, so once that is fixed we will indeed have much, much faster internet.

You can actually get a taste of that if you use the internet at unusual hours, like 2AM.  At that time of the day, there aren't enough users to saturate the connections Rwanda does have, so the speed is great.  But we need the fiber to be hooked up before we can honestly claim to have fast internet.  That's a day I'm looking forward to, hope it happens soon.

PS. In the interest of verification, here are all the internet speed test results we used for this article

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86032 2010-07-19T09:03:00Z 2013-10-08T15:39:51Z How to update git submodules

Git submodules are a convenient way to create a dependency on another git project.  This can be useful if you depend on another project and want to make it easy for others to get your system up and running.

I won't cover creating submodules here as that is pretty well covered in the git manual, but updating them is slightly less obvious.  When creating submodule dependencies, git actually saves a reference to the version of the tree at that point in time.  That's a good thing: it prevents things from breaking due to your dependency changing out from under you.  But there are times when you'll want to update that reference.  The trick is to realize that git is keeping a reference to whatever version of the git tree is currently checked out.  So all you need to do is go and pull the submodule, then commit the project as a whole.

  1. Make sure that your submodule is already checked out, i.e., run 'git submodule init' and 'git submodule update' if necessary
  2. CD into the root directory for your submodule.
  3. Change the branch for the submodule to master: 'git checkout master'
  4. Pull the latest version: 'git pull'
  5. That's it, now you can go back to your project's root directory, do a 'git status' to see the state of things, add the changed submodule with 'git add', then commit your changes with 'git commit'

That should do it.  I'm still coming to terms with whether I like this particular style of dependency; for those of us with limited bandwidth it does cause rather large repositories to be sent over the wire just to get things running.  Having dependencies managed via PyPI in Python is certainly the better option when possible.

]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86033 2010-05-26T13:24:00Z 2020-12-14T20:39:35Z Rwanda Mobile Carriers Cheat Sheet

Update: It's been a long time since this post first came out, so long that Rwandatel doesn't even exist anymore and Airtel is the new carrier in town. A few of these codes still work, but if you are in East Africa in general and have an Android device, download SimBuddy from the Play Store for an easy way to get access to all these shortcuts.

Since I'm always playing with all three carriers, either for data access or to test something, I figured I'd put up a quick cheat sheet for the various carriers.

MTN

  • check balance: *110#
  • add credit: *111*[code]#
  • apn: internet.mtn
  • buying internet bundle (1000 MB): *345*1000#
  • checking bundle: *345#

Tigo

  • check balance: *131#
  • add credit: *130*[code]#
  • apn: web.tigo.rw

Rwandatel

  • check balance: *221#
  • add credit: *220*[code]#
  • apn: rtel3g
  • username: rwandatel
  • password: rwandatel
  • auth type: chap
]]>
Nicolas Pottier
tag:blog.nyaruka.com,2013:Post/86034 2010-05-06T15:48:00Z 2020-12-14T20:39:29Z “I'm moving to Rwanda to start a software company.”

What usually comes next is a blank stare.  I can see the gears turning, see them use what little they know about Rwanda and try to resolve that with a technology startup.  So to answer all those blank stares, I thought I'd write up how this came to be, why it isn't crazy at all but rather so obviously right.

The story starts in the summer of '09.  My friend Nell Grey, a filmmaker, had gone to Arusha in the fall of '08 to help film interviews with prosecutors and other staff involved in the UN trials surrounding the Rwandan genocide.  That project, Voices from the Rwanda Tribunal, was returning to Rwanda that summer, and she suggested that maybe I could be of use in helping them find creative ways of making the content available.

Now I should preface this with the brief aside that I'd been looking for a way to 'give back' for a while, but wasn't sure how exactly my set of skills fit in.  My interest was in the developing world, but I was neither a doctor nor an engineer.  I believed in my own ability to figure things out, to be able to help somehow, but had to be honest with myself that I didn't seem particularly qualified to help.

One small part of the project's plans for the trip was to make segments of the interviews available to the public.  We brainstormed on various ways of doing so, from distributing CDs, DVDs or USB keys to radio broadcasts.  After some research it seemed to me that one promising approach might be to deliver the content via phones.

The penetration of mobile phones in Rwanda is roughly 30% and growing quickly.  That might seem low until you realize that the penetration of electricity is only 10%.  Most of the country has cell coverage, even the rural areas, and there is fierce competition from three different carriers.  Handsets are available for just a few dollars used and once you have a phone, you can walk into any number of shops and walk out five minutes later with a new number, without contracts to sign and without any commitments.  You only pay for outgoing calls or text messages, and money you add to your account is valid for 18 months, so the economic cost for a cell phone is tiny.

My proposal was to deliver short, key, segments of the interviews using an SMS callback system.  A user would send a text message to a number, specifying which segment they wanted to hear, and our system would call them back, playing the segment and allowing the user to leave a voice comment.  The user would only pay for the cheap outgoing text, and we could not only reach a broad audience but also collect their thoughts.  I spent a weekend hacking together a system that worked on a laptop, crossed my fingers and hopped on the plane for my three weeks in Kigali.

That's when I got my first lesson that the West, and I include myself in that bucket, is terrible at evaluating the needs and demands of the developing world.  My exposure to Rwanda's genocide was limited at this point; though I was the technical counsel for the project, I only knew what I had read.  Once we arrived, we found that the general population wasn't terribly interested in the UN's ICTR process.  They viewed it as ineffective, taking a decade to try a few dozen people, and largely irrelevant to their own problems of trying to put a country back together, of dealing with hundreds of thousands of genocide suspects.  They had crafted their own solution to that in the Gacaca court system and weren't particularly interested in the pontifications of various UN staff on their own process.  There was much good that would come out of the Tribunal Voices project, but my part wasn't going to be it.

But from that failure came a silver lining.  It freed me up to spend more time learning about the state of software in Rwanda, to learn of their goals and ambitions.  I was soon exposed to Rwanda's Vision 2020 Plan, a concerted and well-organized effort by the government to reinvent the country in the next decade.  Part of that vision is to set the groundwork for an information based economy.  Rwanda, a landlocked country with very few natural resources and no large tourist draw, is in a tough place when it comes to growing more prosperous.  You can't set up manufacturing facilities in a country where transportation costs are so high, and the largest cash crops of coffee and tea bring in less than $100M a year combined, a pittance for a country of ten million.  So Rwanda has made the bold decision to try to build their economy on information instead.  In a digital world of bits and bytes your transportation costs are zero and there is no reason why a country in the middle of Africa can't participate in the global software economy.  All it needs is the infrastructure and people.

Rwanda is making great strides on both fronts.  The country will soon have multiple fiber optic connections to the internet, replacing their slow and expensive satellite uplinks.  Internet kiosks are planned for the rural areas and they have even decided to participate in the One Laptop Per Child (OLPC) project, setting the stage for the entirety of their next generation to be computer literate in grade school.

The people are there too.  Computer Science curriculums are full at the universities, students eager to take part in the new economy.  Individual entrepreneurs are everywhere and more and more small software companies are popping up.

But one thing they don't have is experience.  Classes are filled with bright students, but many are being taught by professors who have never written software themselves (how could they have?).  So the learning is largely book based despite computers being available, because some of the teachers themselves are nervous about coding.  What companies do exist are still working through the growing pains of how to build software repeatably.  They are doing their best and learning quickly, but they are still at a disadvantage, the field brand new to them.

Here I saw an opportunity.  There was clearly a need, a need for experience in building software, a need I was actually perfectly suited to satisfy.  Along the way I had met a professor and software engineer in Kigali, Emile Bniz NIYIBIZI, and through him I did some teaching at a local university, doing a few lectures on web technologies.  That reminded me of how much I loved teaching, especially software, and helped spawn the idea of starting a software company there.  A company that would hire and train local Rwandans to build software the way I knew how.  I spent my last few days talking to various Rwandans about this, and what I heard back was an enthusiastic yes, yes that it was needed, yes that it would work, yes to do it.

I returned to Seattle excited, but still unsure as to how to move forward, conflicted.  On one hand I had found something that seemed like a perfect fit for my desire for adventure, a way to give back that fit my skills, but on the other I had a successful company with my best friend Eric Newcomer.  We had built Trileet up from nothing over the past four years and it was still doing well.  But as it happened, when I talked to Eric about it, his reaction was a shared enthusiasm.  He too saw the uniqueness of the situation, and was excited by it.  So with my best friend and business partner on board, we flew back to Kigali in January, this time to really evaluate how realistic we were being and whether to move forward.

As we prepared for the trip, we started contacting others doing similar work.  One particularly fortuitous connection was with Matt Berg, who hooked us into the small but growing world of RapidSMS and UNICEF. 

SMS is being used in more and more projects in Africa, as a way of both collecting and spreading information.  Its incredible penetration and simplicity are well suited to the challenging environments there, and the applications using it are limited only by your imagination.  Many countries have implemented systems which allow farmers and fishermen to access the local market prices for their goods, reducing the information inequality present when selling to middlemen.  Other systems have been built to help keep track of drug or blood supplies in clinics, allowing clinics with no power, phone or internet connections to be part of regional supply systems.

As it worked out, Matt was going to be in nearby Uganda meeting with Sean Blaschke at UNICEF and various others.  We tweaked our plans a bit and ended up meeting them both for a weekend in Kampala.  What we found was exciting.  The number of projects involving SMS was only growing, and there was a high demand for companies fluent in those systems, doubly so if they were in Africa.  We also found out that there was a UNICEF project currently stalled in Kigali.  They had a design and prototype for an SMS system to track maternity and child health, but needed an engineer to help train local developers and deploy that system, all as soon as possible.

This seemed like too good an opportunity to pass up, so I decided to prolong my stay in Rwanda for another month.  That month only reinforced our belief that this made sense.  The system was exciting to build, both in helping train the developers and in the final product.  It did real good, had real potential for impact, more so than any other product I had ever worked on, and my skills and experience couldn't have been more perfectly suited.

Upon my return we started the process of setting up Nyaruka in earnest.  I started packing up my place, Eric and I discussed how we would structure things, and we began planning.  We aren't so naive as to think we know how things will really turn out, but the current plan is this: our company will be based in Kigali, where we will find bright, motivated students to train on Python, Django and RapidSMS.  Initially, we will specialize in building SMS based systems.  From there we will see.  The software market in Rwanda, and in East Africa in general, is wide open.  Opportunities abound for building software for the government and private sectors; there is even an opportunity to help build localized software for the OLPCs, bringing our experience making games to bear on interactive learning software, another of our passions.

I can say without hesitation that this is the most exciting opportunity I've ever had.  On some level it feels too easy, too perfect, so I have to temper that excitement with a little forced pessimism.  But in the end, it comes down to having the chance to take the one thing I am best at, building software, and do good with it.  And that is exciting on a whole different level.  That I get to do it with my best friend makes it doubly unbelievable.

We'll do our best to keep this blog updated with our progress, our challenges, our successes and even our failings.  We don't expect to succeed everywhere or every time, but we'll try our best and learn from it, and hope to share those lessons here.  We'll see you then.

]]>
Nicolas Pottier