Which Countries Have the Most Skilled Female Developers?

It’s no secret that there’s a gender gap in coding.

Women make up less than a third of the tech talent pool in Silicon Valley. At Google and Facebook, women hold just 17% and 15% of technical positions, respectively.

One of the problems could be that recruiters look for talent in the same places (MIT, Stanford), leading to bidding wars for pedigreed candidates and the illusion of a skills shortage, especially among underrepresented groups. In reality, companies are overlooking thousands of qualified female developers simply because they don’t look good on paper.

We were curious to find out: where do the best female developers live? We decided to analyze our data on more than 2 million developers.

At HackerRank, we create coding challenges to help find top talent and help developers get jobs. Hundreds of thousands of developers from all over the world participate in challenges in a variety of programming languages and knowledge domains, from Python to artificial intelligence to distributed systems.

About 17% of people who have solved HackerRank coding challenges are female. According to our data, India and Italy have the highest percentage of female developers, while women from Belarus, China and Russia score the best. We also found that female developers are unusually likely to take our Java challenges, and are not drawn to our challenges on security or artificial intelligence.

***

We began our analysis with an attempt to assess exactly how many HackerRank test takers are female. Though we don’t collect gender data from our developers, we were able to assign a gender to about 80% of them based on their first names. (We did not include first names with an even gender split, such as Taylor or Riley.)
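For the curious, name-based inference can be as simple as a lookup against name statistics. Here is a minimal sketch in Python; the tiny lookup table and the 90% threshold are illustrative stand-ins for a real dataset (e.g., census or Social Security name records), not our production logic:

```python
# Hypothetical share of female name-bearers per first name. A real analysis
# would load a large public dataset instead of this toy table.
FEMALE_SHARE = {
    "priya": 0.99, "olga": 0.98, "james": 0.01,
    "taylor": 0.50, "riley": 0.52,
}

def infer_gender(first_name, threshold=0.90):
    """Return 'female', 'male', or None when the name is missing or ambiguous."""
    share = FEMALE_SHARE.get(first_name.lower())
    if share is None:
        return None                  # unknown name: leave unassigned
    if share >= threshold:
        return "female"
    if share <= 1 - threshold:
        return "male"
    return None                      # near-even split, e.g. Taylor or Riley

print(infer_gender("Priya"))   # female
print(infer_gender("Taylor"))  # None -> excluded from the analysis
```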

The vast majority – 82.9% – of developers on HackerRank are male. Though the gender balance is far from equal, it’s significantly more balanced than the 5.8% female share in StackOverflow’s survey.

Next, we looked for trends within countries. Below is a breakdown of the share of female developers from each of the top 50 countries with the most developers on HackerRank. For each of these countries, thousands of developers have participated in a HackerRank challenge.

India, which contributes nearly 40% of HackerRank’s developers, leads the pack with about 23% female developers. Experts have found that India’s education and tech industry culture is more conducive to gender equality in computer programming.

The United States, the second largest contributor of developers, falls just shy of the top 10 with 14.8% female developers. Chile had the lowest representation of women, with fewer than 3% female developers.

When women do take our challenges, which computer programming domains are they particularly drawn to? The chart below shows the percentage of female test takers for each of our challenges.

Women account for 21% of developers in tutorial and Java challenges. The tutorials domain includes our 30 Days of Code Challenge, which is heavily Java-based. Women spend the least time on artificial intelligence and security challenges.  

 

Looking to hire more female developers?
Learn more about skills-based hiring to boost diversity.

So, does a larger share of female developers mean better female developers?

We looked at women’s average scores on algorithms challenges (which account for more than 40% of all HackerRank tests taken) to find out. Algorithms challenges include sorting data, dynamic programming, searching for keywords, and other logic-based tasks. Scores typically range from 0 to 115 points. We examined the 20 countries with the most female developers in order to have large sample sizes.

 

Russia’s female developers, who only account for 7.8% of Russian HackerRank developers, top the list with an average score of 244.7 on algorithms tests.

Russia is closely followed by its European counterparts in Italy and Poland. Though India has the largest share of female developers, they rank 18th, with a middling average score of 146.2 points.

***

So what does looking abroad teach us about the gender gap in coding?

For starters, we see further evidence that the United States, which ranks 11th in percentage of female developers, has room to improve, especially when it comes to Java development. In comparison, women in India are growing up with coding stitched a little closer into their culture.

But we also see an encouraging sign for women who find themselves working in a male-dominated industry. Relatively few women in Belarus, China, and Russia participate in coding challenges. But their female developers—despite these challenges—are still crushing it.

 

Is diversity hiring one of your goals?
Learn how to find more female developers.

Introducing Cracking the Coding Interview Tutorial & New Study on Interview Practice

How many practice coding challenges does it take to ace your coding interview? In celebration of the launch of our newest Cracking the Coding Interview tutorial series, we did a study on coding interview practice. Our goal was to uncover just how much practice you need to boost your chances of passing a coding interview by 50 percent, depending on your experience.

At HackerRank, we regularly help developers improve their coding skills and find the right job based on skill rather than traditional proxies like resumes. We’ve assessed approximately 3 million developer candidates using coding challenges since 2012. Our coding assessments also let developers go straight from application to onsite interview, based on their performance.

For this study, we looked at the practice submissions of over 2,000 developers to find patterns among folks who went directly from assessment to earning an onsite interview. By learning the correlation between the number of practice coding challenges solved and the pass rate on a coding assessment, we can quantify the amount of practice you need to pass a coding interview. According to our data, developers with at least two years of experience who practiced even just a little (20 challenges) increased their chances of getting an onsite interview by 50 percent. Junior developers who solved 20 challenges increased their chances by at least 15 percent.

Introducing Cracking the Coding Interview Tutorial


Quantity of coding challenges is certainly important. However, one way to increase your chances of acing your interview even further is by solving the right type of interview challenges. We’ve teamed up with Gayle Laakmann McDowell, author of the best-selling book Cracking the Coding Interview, to curate a series of roughly 20 challenges, each with an accompanying video tutorial featuring Gayle, to ensure you pass with flying colors. The Cracking the Coding Interview challenge series is available now.

In the series, Gayle offers not only video tutorials but also valuable advice, like three strategies to tackle algorithms and a seven-step process to solve algorithm challenges.

 

***

To begin our analysis, we pinpointed our question: Is there any correlation between developers who solved a lot of challenges and developers who passed coding assessments? And how does performance relate to seniority?

Methodology. Our sample of 2,000 developers solved anywhere between 0 and 80 challenges over the last year. Here’s the breakdown of the percentage of developers by number of coding challenges solved:

[Chart: percentage of developers by number of coding challenges solved]

Our initial hunch was that junior developers would perform better than more experienced developers, because we know junior developers do well on the computer science fundamentals portion of the interview, like algorithms and data structures.

So, first we divided our sample into junior and experienced developers. We define “experienced” as someone with at least two years of experience. Then we eliminated anyone with zero submissions, since we don’t know much about those developers; they could have been practicing on their own, and we wanted to focus on developers who are active on our platform.
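As a rough illustration of this bucketing, here is how the split and the per-bucket pass rates might be computed with pandas. The column names, bucket edges, and toy rows are our assumptions for the sketch, not the real schema:

```python
import pandas as pd

# Toy stand-in for the real dataset: one row per developer.
df = pd.DataFrame({
    "years_experience":  [0, 1, 3, 5, 2, 8, 1, 4],
    "challenges_solved": [5, 22, 18, 0, 35, 71, 12, 28],
    "earned_onsite":     [False, True, False, False, True, True, False, True],
})

df = df[df["challenges_solved"] > 0]           # drop the zero-submission group
df["experienced"] = df["years_experience"] >= 2

# Bucket practice volume the way the charts do: (0, 10], (10, 20], ...
df["bucket"] = pd.cut(df["challenges_solved"], bins=range(0, 90, 10))

# Share of developers in each group who went straight to an onsite interview.
pass_rates = df.groupby(["experienced", "bucket"], observed=True)["earned_onsite"].mean()
print(pass_rates)
```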

The following graph shows the relationship between number of challenges solved by an experienced developer and the percentage of people who directly earned an onsite interview:


Developers with 2+ Years of Experience Need to Solve ~20 Challenges to Boost Test Performance by 50 Percent

[Chart: pass rate by number of challenges solved, developers with 2+ years of experience]

Some say there’s a golden rule of 10,000 hours of practice before mastering a skill; our data suggests you need far less than that to move the needle on coding interviews. Developers with just a little bit of practice (roughly 20 challenges solved on HackerRank) had about a 50 percent higher chance of going straight to the onsite interview than experienced engineers with no practice.

 

More specifically, developers who solved ~20 challenges had a ~35 percent pass rate, whereas developers who solved 0 challenges had a 24 percent pass rate. That works out to a relative increase of (35 - 24) / 24 ≈ 46 percent, or roughly 50 percent. Generally, it takes anywhere from 10 to 20 hours for an experienced developer to solve 20 challenges, though it really depends on the person.

Junior Developers with Less than 2 Years of Experience Need to Solve 30 Challenges to Boost Chances of Passing by ~50 Percent

[Chart: pass rate by number of challenges solved, junior developers]

 

Our hypothesis about fresh grads with less than 2 years of experience was wrong. We thought that junior developers would do better than experienced developers on coding interviews, since they’d perform really well on the fundamentals. It turns out that experienced developers with some practice performed better than junior developers, even on fundamentals. In the first chart, we saw that more experienced developers have a 23 percent pass rate from the get-go. Junior developers, on the other hand, need to practice 71-80 coding challenges to reach a 23 percent pass rate.

When you compare junior developers who practiced a little (solved ~20 challenges) with junior developers who barely practiced at all (~10 challenges), there wasn’t a boost in performance. Junior developers need to solve at least 30 coding challenges to gain a 50 percent increase (a jump from a ~9.5 percent pass rate to a 14 percent pass rate).

Note: There is an odd drop at the 61-70 challenge mark for junior developers, which could be due to anomalies that skewed the data or simply to fewer developers solving that many challenges. It’s hard to say. By and large, developers with less than two years of experience need to practice a lot more coding challenges to increase their chances of getting an onsite interview by 50 percent.

***

But you’re not alone! If you’re looking for the ultimate resource to prepare for your next coding interview, you’ve come to the right place. It’s not enough to solve 20 to 30 easy random challenges to be 100 percent prepared. To help you practice with the right curriculum, HackerRank collaborated with Gayle Laakmann McDowell to offer a complete guide to acing your next coding interview.

The new Cracking the Coding Interview coding challenge tutorial series is designed to help you ace your coding interview. We’ve hand-curated 20 critical concepts and coding challenges with fresh new tutorial videos alongside each challenge, featuring Gayle.

Ready to start doing better at coding interviews?

Sign up for Cracking the Coding Interview Tutorial

 

 

Which Country Would Win in the Programming Olympics?


Update: This article has been picked up by the Washington Post, Business Insider, eWeek and InfoWorld.


Which countries have the best programmers in the world?

Many would assume it’s the United States. After all, the United States is the home of programming luminaries such as Bill Gates, Ken Thompson, Dennis Ritchie, and Donald Knuth. But then again, India is known as the fastest growing concentration of programmers in the world and the hackers from Russia are apparently pretty effective. Is there any way to determine which country is best?

We decided to examine our data to answer this question: which countries do the best at programming challenges on HackerRank?

At HackerRank, we regularly post tens of thousands of new coding challenges for developers to improve their coding skills. Hundreds of thousands of developers from all over the world come to participate in challenges in a variety of languages and knowledge domains, from Python to algorithms to security to distributed systems. Our community is growing every day, with over 1.5 million developers ranked. Developers are scored and ranked based on a combination of their accuracy and speed.

According to our data, China and Russia have the most talented developers. Chinese programmers outscore all other countries in mathematics, functional programming, and data structures challenges, while Russians dominate in algorithms, the most popular and most competitive arena. While the United States and India provide the majority of competitors on HackerRank, they only manage to rank 28th and 31st, respectively.

***

We began our analysis by looking at which test types are most popular among developers. HackerRank developers can choose to participate in 15 different domains, but some are more popular than others. The following table shows the proportion of completed tests that come from each domain.

[Table: proportion of completed tests by domain]

 

The most popular domain by far is algorithms, with nearly 40% of all developers competing. This domain includes challenges on sorting data, dynamic programming, searching for keywords, and other logic-based tasks. For algorithms tests, developers can use whichever language they choose, which may partially explain the domain’s popularity. Algorithms challenges are also crucial for coding interviews, which could explain why more coders practice them. At a distant second and third, Java and data structures come in at about 10% each. Distributed systems and security are our least popular tests, though we still receive thousands of completed challenges in those areas.

So based on these tests, which country has the programmers that score the highest?

In order to find out, we looked at each country’s average score across all domains. We standardized the scores for each domain (by subtracting the mean from each score and then dividing by the standard deviation, also known as a z-score) before averaging. This allows us to make an apples-to-apples comparison of individual scores across different domains, even if some domains are more challenging than others. We then converted these z-scores into a 1-100 scale for easy interpretation.
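For readers who want to reproduce this kind of normalization on their own data, here is a minimal sketch in Python. The helper names and the min-max rescaling onto 1-100 are our illustrative assumptions, not HackerRank’s exact pipeline:

```python
import statistics

def z_scores(scores):
    """Standardize raw scores: z = (x - mean) / standard deviation."""
    mu = statistics.mean(scores)
    sigma = statistics.stdev(scores)
    return [(x - mu) / sigma for x in scores]

def to_index(zs, low=1.0, high=100.0):
    """Min-max rescale z-scores onto a 1-100 index for readability."""
    z_min, z_max = min(zs), max(zs)
    return [low + (z - z_min) * (high - low) / (z_max - z_min) for z in zs]

# Toy example: four countries' average scores in one domain.
raw = [612.0, 588.5, 540.2, 431.7]
print([round(v, 1) for v in to_index(z_scores(raw))])
# With this rescaling, the top scorer maps to exactly 100.0, the bottom to 1.0.
```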

We restricted the data to the 50 countries with the most developers on HackerRank. Here’s what we found:

[Table: overall country rankings by standardized score]

 

Since China scored the highest, Chinese developers sit at the top of the list with a score of 100. But China only won by a hair. Russia scored 99.9 out of 100, while Poland and Switzerland round out the top rankings with scores near 98. Pakistan scores only 57.4 out of 100 on the index.

The two countries that contribute the greatest number of developers, India and the United States, don’t place in the top half. India ranks 31st, with an overall score of 76, and the United States falls in at 28th, with a score of 78.

Though China outperformed everyone else on average, they didn’t dominate across the board. Which country produces the best developers in particular skill areas? Let’s take a look at the top countries in each domain.
[Table: top three countries per domain]

 

China did quite well in a number of domains. Chinese developers beat out the competition in data structures, mathematics, and functional programming. Russia, on the other hand, dominates in algorithms, the domain with the most popular challenges, with Poland and China nearly tied for second and third place.

What explains the different performance levels of different countries across domains? One possible explanation is that Russians are just more likely to participate in algorithms and therefore get more practice in that domain, while Chinese developers are disproportionately drawn to data structures.

Software engineer Shimi Zhang is one such programmer, having ranked among the top 10 in our functional programming domain. He hails from Chongqing, China, and moved to the US just two years ago to get his master’s in computer science before coming to work at HackerRank.

On the greatness of Chinese programmers, from top-ranked Chinese competitive programmer Shimi Zhang:

In universities and colleges, education resources are relatively fewer in comparison with many other countries, so students have less choices in their paths to programming. Many great students end up obsessed with competitive programming since it’s one of the few paths.

 

China even has a big population of students who started programming in middle school and high school. They’re trying to solve some hard challenges only few people in this world can solve.

 

They even host national programming contests for young programmers, like NOIp (national olympiad in informatics in provinces) and NOI (national olympiad in informatics). And after CTSC (China Team Selection Contest), 4 geniuses go to IOI (international olympiad in informatics), and at least 3 have won a gold medal this year. This has been the trend for nearly 10 years.

 

It’s an even greater achievement considering a special rule: if you have won a gold medal once, you won’t be selected for a future IOI team. That means most IOI team members from China won gold on their first try.

 

Next up, we also compared how the developers in each country split their time up amongst different challenge types and then compared these domain preferences to those of the average HackerRank user. This allowed us to figure out which countries are more likely than the rest to take a test in a particular domain—and which countries are less likely than the rest.
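One simple way to quantify “more or less likely than the rest” is to divide a country’s share of tests in a domain by the global share of tests in that domain. A sketch with made-up numbers (the shares below are illustrative, not our actual data):

```python
# Illustrative shares only; a ratio > 1 means a country over-indexes on a domain.
global_share = {"algorithms": 0.40, "mathematics": 0.03, "shell": 0.01}
china_share  = {"algorithms": 0.38, "mathematics": 0.09, "shell": 0.01}

for domain, g in global_share.items():
    ratio = china_share[domain] / g
    print(f"{domain}: {ratio:.1f}x the average developer's participation")
```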

[Table: domain participation by country relative to the average developer]

As the table above shows, China participated in mathematics competitions at a much higher rate than would be expected given the average developer’s preferences. This might help explain how they were able to secure the top rank in that domain. Likewise, Czech developers showed an outsized preference for shell competitions, a domain in which they ranked number one.

But beyond these two examples, there seems to be little relationship between a country’s preference for a particular challenge type and its performance in that domain. We also wanted to know whether countries have specific preferences when it comes to programming languages. Are Indians more interested in C++? Do Mexicans code in Ruby?

The following chart breaks down the proportion of tests taken in each language by country.

[Chart: proportion of tests taken in each programming language, by country]

In general, developers of all nationalities participate in Java challenges more than in any other programming language (with a few notable exceptions, like Malaysia and Pakistan, where users prefer C++, and Taiwan, where Python is king). Sri Lanka comes in at number one in its preference for Java. India, which supplies a big portion of HackerRank developers, ranks 8th.

 

***

While Pakistan, Sri Lanka, and Nigeria are currently toward the bottom of the hacker rankings, they can look to Switzerland’s steadfast developers for inspiration. When a HackerRank developer gives up on a challenge before making any progress, they earn a score of zero. Switzerland has the lowest percentage of zero-scoring users, which makes Swiss coders the Most Tenacious Programmers in the World.

[Chart: percentage of zero-scoring users by country]

***

Every day, developers around the world compete with each other to become the next Gates or Knuth.

If we held a hacking Olympics today, our data suggests that China would win the gold, Russia would take home a silver, and Poland would nab the bronze. Though they certainly deserve credit for making a showing, the United States and India have some work ahead of them before they make it into the top 25.

 

Practice coding now.

Hire the greatest programmers.

 

The Immutability of Math and How Almost Everything Else Will Pass

This article was originally published on Forbes

TL;DR: Right now, there’s a cultural push to untie the historical link between advanced math and programming that could partially deter engineers from entering the field. But those who have a strong foundation in math will have the best jobs of the future. Let’s stop separating math from programming for short-term relief and, instead, focus on fundamental, unchanging truths with which we’ll engineer the future.


If you dig deep into today’s discourse on the role of mathematics in programming, you’ll find a sharp, double-edged sword.

On the one hand, people often say that because the number of app development tools is growing, you don’t necessarily need to be great at math to write software today. Amidst a widespread shortage of traditional programming talent, numerous opinion pieces, video interviews with educators, and forum answers are positioned to ease the apprehension of people exploring the field. And it’s true: chances are, the average software engineer is not going to need calculus while coding apps in Ruby on Rails. If you look at any given job requirement, you’d be hard-pressed to find probability or number theory next to Java or C++ skills.

Since computer science is a nascent field that sprouted out of mathematics departments, there’s a cultural push to untie the historical link between advanced math and programming that could partially deter engineers from entering the field. For instance, there are literally half a dozen recent articles titled something like “You Don’t Have to be Good at Math to Code” (1, 2, 3, 4, 5, 6). Downplaying the importance of mathematical knowledge in software development aims to make the field less intimidating for entry-level programmers.

But is downplaying the importance of math a sustainable message for future generations of engineers?

On the other hand, software development is quickly shapeshifting. If you discount mathematics, and in turn focus on learning transitory programming tools, you’ll be left without the skills necessary to adapt to emerging computer science concepts that have already started infiltrating engineering teams today. Without expanding mathematical knowledge, these software engineers are going to risk being left out of the most exciting, creative engineering jobs of the rapidly approaching future.

Math is a Veiled Pillar

The reality is that even though most programmers today don’t need to know advanced mathematics to be good software developers, math is still a fundamental pillar of both computer science and software development. Programming is just one tool in a computer scientist’s toolkit—a means to an end. It’s hard to draw definitive lines between disciplines, but here’s an attempt at an eagle-eye view of computer science as a field to build a bigger picture:

[Diagram: an eagle-eye view of computer science as a field]

At their core, computers are centered on the mathematical concept of logic. Fundamental math that you learn in middle school or high school, like linear algebra, Boolean logic, and graph theory, inevitably shows up in daily programming. Here are 10 examples of times when you might need mathematics in real-world programming today:

  1. Number theory. If you’re ever asked how one algorithm or data structure performs over another, you’ll need a solid grasp of number theory to make that analysis.
  2. Graphing. If you’re programming for user interface, basic geometry, like graphing, is an essential skill.
  3. Geometry. If you’re creating a mobile app and you need to create custom bounce animations that are modeled on springs, you’ll need geometry skills.
  4. Basic Algebra. If your boss asks you: How much user retention can we expect to grow next month if we increase the performance of our backend by 20%? This is a pure variable equation.
  5. Single Variable Calculus. These days FinTech firms like Jane Street are among the most sought-after companies for programmers because they pay well and have interesting challenges. You need to be able to analyze financial parameters to make crucial predictions to get these coveted jobs.
  6. Statistics. If you’re working at a startup and you need to A/B test different elements on a website, you might be tapped to understand normal distribution, confidence intervals, variation and standard deviation to see how well your code change is performing.
  7. Linear Algebra. Anytime you have image processing problems, recommendation engines (like Google’s PageRank or Netflix’s recommendation list), you need linear algebra skills.
  8. Probability. When you’re debugging or testing, you’ll need a solid understanding of probability to make randomized sequences reproducible (see the sketch after this list).
  9. Big-O. If your company’s expanding to a brand new region, and you don’t understand the implications of a O(N^2) sorting algorithm, you could be pinged at odd hours because the expansion introduced holes in the algorithm.
  10. Optimization. Generally, anytime you need to make something run faster or perform better, you should be able to know how to get the minimum and maximum value of a function.
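To make item 8 concrete, here is a minimal sketch: seeding a pseudo-random generator makes a “random” test input reproducible, which is exactly what you want when chasing a bug that only certain inputs trigger. The function name and seed below are, of course, illustrative:

```python
import random

def random_test_case(seed, size=5):
    """Generate the same pseudo-random input on every run for a given seed."""
    rng = random.Random(seed)        # a private, seeded generator
    return [rng.randint(0, 100) for _ in range(size)]

# The same seed always reproduces the same failing input, so the bug can be
# replayed and fixed deterministically instead of appearing "at random."
assert random_test_case(42) == random_test_case(42)
print(random_test_case(42))
```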

We’re far beyond the point of needing engineers to code simple solutions. Engineering teams at enterprises and, especially, startups have to earn the leading edge. They rely on engineering and product teams to gain competitive advantage by investing in emerging areas like Big Data manipulation, high-scale systems, and predictive modeling. And these all require a solid framework of mathematics.

It’s not uncommon to hear refutations like: “I’ve been a software engineer for 15 years and never used advanced mathematics on the job.” But are we all really still going to be coding web and mobile apps 10 years from now?

Those Who Incrementally Exercise Mathematics Skills Will Get the Coolest Jobs

At the beginning of this piece, we considered why many educators and experts might be downplaying the importance of math in daily programming: to encourage more engineers to enter the field. To meet the demand for engineering talent in the next 5 to 10 years, it’s clear that we need to take steps to encourage more people of diverse backgrounds to join the field. The BLS reports that computing and mathematics will make up more than half of the projected growth in annual STEM job openings between 2010 and 2020.

But this message of “you don’t have to be good at math to program” is actually fueling a self-destructive myth that’s baked into our culture today: math skills can’t be acquired; you’re either born with them or you’re not. This myth persists for at least two reasons:

One, professor Miles Kimball and assistant professor Noah Smith, who have taught math for many years, say that “people’s belief that math ability can’t change becomes a self-fulfilling prophecy.” Consistently saying that you’re “not a math person” means you won’t become a math person.

Two, people perceive mathematical fields as dry and uncreative. This goes back to the oversimplified dichotomy between the “right brain” humanities and the “left brain” STEM subjects. People who want to be more creative have more reasons to distance themselves from math.

A better way to attract more people to the field is by talking about the interesting, creative jobs that are taking over the future of software development.

In the next 10 years, software engineers aren’t still going to be limited to programming web and mobile apps. They’ll be working on writing mainstream computer vision and virtual reality apps, working with interesting cryptographic algorithms for security and building amazing self-learning products using machine learning. You can’t go very far in any of these fields without a solid mathematical foundation.

As the field of computer science expands, companies are going to be able to take advantage of more complex math to build software technology. Dr. Ann Irvine, principal data scientist at security software company RedOwl, always looks for strong intuition on how to work with large datasets. And math happens to be inherently tied to this skill.

“It’s largely enabled by the fact that lots of modern computer algorithms, especially in machine learning, take advantage of very large data sets, so that enables the use of more complex mathematical models.” – Principal Data Scientist Ann Irvine, PhD

As it stands today, you don’t need much beyond basic algebra and geometry for software development in general. But the software development of the future will be made up of highly specialized subfields of CS. Here’s a chart that illustrates just how fast these futuristic technologies are shifting toward the mainstream consumer market. The first row shows the market opportunity over the next 4 years, the second row highlights the adoption rate, and the final row indicates the job demand today:

[Chart: market opportunity, adoption rate, and job demand for emerging technologies]

 

Focus on the Fundamentals Because Technology Will Pass Anyway

“The most valuable acquisitions in a scientific or technical education are the general-purpose mental tools which remain serviceable for a lifetime.” – George Forsythe, founder of Stanford’s computer science department.

It’s far more empowering to talk about the importance of skills that serve you for a lifetime than about the demand for short-term tools today. Math is an unshakeable force in programming. The core concepts of breaking down problems, building abstractions, and finding solutions with formulas will never change.

In fact, academia is susceptible to a massive, inherent failure to keep up with the ever-changing tools that industries demand. Hisham H. Muhammad, a computer science PhD, illustrates the argument perfectly in the tweet below. It’s interesting to contrast the years in which Hisham studied computer science (1994-2000) with the years at which the technologies he mentions started taking off:

[Tweet: Hisham H. Muhammad contrasts his 1994-2000 CS curriculum with the technologies that took off afterward]

There’s such an emphasis on particular programming languages and tools today that it’s easy to miss the bigger forest. It’s better to start practicing now, while there’s no significant pressure to apply advanced concepts to your work…yet. Even if it’s just by solving one mathematical problem a day, you’ll be far better equipped to solve much more interesting problems down the line. Let’s stop separating math from programming for short-term relief and, instead, focus on fundamental, unchanging truths with which we’ll engineer the future.

Resources to Help Boost Confidence in Math:

    • Forget what you learned in school (memorizing theorems or trig identities won’t help you). Instead, learn to recognize problems and choose the right formula.
    • Read great books:

 

 

 

Why is Computing History Vanishing?

When future generations of computer scientists look back at the advancements in their field between 1980 and 2015, they’ll turn to blank pages.

Historian Martin Campbell-Kelly points out that “up to the late 1970s, software history was almost exclusively technical.” Since then, traces of the technical history of computing have been vanishing. It’s a very curious case. Relative to its ancestors, math and physics, computer science is still an infant at just 75 years old. Some of the biggest breakthroughs in software technologies happened in the last couple of decades, yet historians have shied away from capturing the history of computing through a technical lens.

Think about it: when was the last time you read a detailed technical explanation of a breakthrough that takes you inside the mind of the inventor? Computer science has grown exponentially in the past several decades, but recent critical source code has gone unexamined. Kelly depicts the evolution of software literature below, based on the titles he found most useful since 1967. You can see that as the years go by, the emphasis moves away from pure technology and toward the application of technology.

[Chart: the shifting emphasis of software literature since 1967, per Campbell-Kelly]

Elsewhere, museum board members of the National Cryptologic Museum have been known to criticize historians for failing to adequately chronicle the National Security Agency’s work on cryptography. Look at the “lack of historical context given to the recent revelations by Edward Snowden of NSA activities. Historians could be providing useful context to this acrimonious debate, but thus far we have not,” says Paul E. Ceruzzi of the Smithsonian Institution. Unlike onlookers, historians are likely numb to such controversy. After all, it’s not too different from the events at WWII’s Bletchley Park, when Alan Turing decrypted intercepted German communications.

What carries even more weight is the living legend Donald Knuth’s reaction to Kelly’s paper. “I finished reading but only with great difficulty because the tears had made my glasses wet,” he says. Knuth has noticed the trend, and he finds it horrifying that historians favor business history over technical history.

Then, last year, he did something he hadn’t done in years. Knuth momentarily stepped away from Volume 4 of his epic series The Art of Computer Programming, poked his head out of his hermit shell, and devoted his lecture at Stanford to: Let’s Not Dumb Down Computer Science History.

Knuth sparks a fascinating debate–one worthy of further exploration. Why are historians overlooking the technicalities of today’s breakthroughs in computer science? And how will this trend impact future generations of computer scientists?

Tracing the Missing Pieces

In the early decades of computing, historians were knee-deep in the technical trenches. There’s plenty of analytical literature on the likes of the ENIAC, Mark I, and early IBM computers. But come the pivotal ’80s, when the personal computer started proliferating in homes, historians shifted their focus onto software’s broader economic impact. They cover things like funding (here) and business models (here). Shelves are filled to the brim with books on how tech giants and unicorns are revolutionizing the world.

But what about its technologies? Have historians looked inside the black boxes of recent breakthroughs, like:

  • [1993] R programming language, which statisticians and data scientists depend on to create reproducible, high-quality analysis
  • [2001] BitTorrent, the peer-to-peer file sharing protocol that accounts for about half of all web traffic
  • [2004] MapReduce, which has been invaluable for data processing

Trained historians have yet to place many of these revolutionary inventions under a historical microscope. No one is contextualizing how these advancements came to be and why they matter. So, what happened? The answer is a prism with many faces.

As Knuth notes in his talk, there’s little incentive for scientists to study the history of computing. It’s completely respectable to write a historical dissertation in biology, mathematics, or physics, but that’s just not the case for computer science. In fact, history programs within computer science departments are rare in America, if they exist at all. At best, history might be masked under the “other” specialty for PhD candidates:

[Chart: declared specialties of computer science PhD candidates]

So, who does that leave us?

ACM published this infographic depicting the state of computer science history today. You can see it’s mostly a secondary interest for a subfield of history or science:

[Infographic: who writes computer science history today (ACM)]

Historians of science are usually specialists within a broader history department, under the humanities umbrella. So, it follows, accounts from non-technical historians will always be less technical than those of programmers. The onus is also on computer scientists to write technical history that lives up to the caliber of Knuth’s.

Even for those who do embark on computing history, there’s a pull toward breadth: historians cast a wider net in reaching audiences by writing about software’s impact on business, society, and economics. Naturally, technical articles are only valuable to a tiny sliver of scientists, yielding limited financial support.

“When I write a heavily technical article, I am conscious of its narrow scope, but nonetheless it is a permanent brick in the wall of history. When I write a broader book or article, I am aware that it will have a more ethereal value, but it’s contributing to shaping our field,” Kelly writes in response to Knuth.

When Kelly wrote technically heavy pieces, filled with jargon and acronyms, his esteemed colleagues told him his view was too narrow. For instance, Kelly wrote about the EDSAC of the 1950s, but critics said he neglected to include its fascinating uses:

  • It generated the world’s largest known prime number
  • It was a stepping stone to Watson and Crick’s discovery of the structure of DNA
  • It reduced radio telescope data, a crucial process in radio astronomy

Studying the byproduct of computing is undeniably valuable. But when it comes to the technical discoveries that lead to these technologies, we have only darkness.

Furthermore, computer science historians’ job prospects are severely limited: it’s either academia or museum work. PhDs in other computer science specialties, on the other hand, have high-paying options as researchers in the R&D labs of Google, Facebook, Microsoft, and the like. You’d be hard-pressed to find someone who built their career on computer science history.

Alternatively, the eclipse of technical history could be a byproduct of the secrecy of government agencies. In the earlier NSA example, for instance, historians face the additional hurdle of declassifying projects. This has been an ongoing hurdle since the days of top-secret missions in WWII, when bomb simulations generated many of the major developments in computer science.

Another reason for the lack of technical history of software could be the volatility of the discipline. It’s hard for historians to make definitive claims without arriving at false conclusions when the field changes so fast. Just look at this piece on what’s worked in computer science since 1999.

 

[Table: what works in computer science, as judged in 1999]

Concepts that were considered impractical in 1999 are unambiguously essential today. It’s risky for historians to make definitive claims when the field can shift in just a 10-year window. It’s like figuring out where to jump on a moving train.

Finally, the sheer exponential rate of growth doesn’t help either. The train is not only getting faster but also longer. You probably know the widely cited projection from the Bureau of Labor Statistics, which says that computer science is the fastest-growing professional sector for the decade 2006-2016. The projected increases for network systems analysts, software engineers, and systems analysts are 53%, 45%, and 29%, while other science and engineering fields (biological, electrical, mechanical) hover around 10%.

To top it off, look at the growth in the total number of open source projects between 1993 and 2007. We’re amidst a paradigm shift in which much of today’s pivotal software is free and open. Research by Amit Deshpande and Dirk Riehle of SAP Research verifies that open source software is growing at an exponential rate. Michael Mahoney puts it best: “We pace at the edge, pondering where to cut in.”

[Chart: growth in the number of open source projects, 1993-2007]

Donald Knuth: This is a ‘Wake Up Call’

But this shift toward a more open, less patented software world is all the more reason for this to be a wake up call for computer scientists and historians alike. As source code becomes more open, we unlock barriers to history.

Knuth sets the example with the mind of a computer scientist and the zeal of a historian. When he entered the field in the 1950s, his detailed history of assemblers and compilers set the bar high. Still today, his Art of Computer Programming is critically acclaimed as the best literature for understanding data structures and algorithms. He practices what few grasp today: without technical history, we can never truly understand why things are the way they are.

We need in-depth history to learn how other scientists discovered new ideas. Reading source code is something most legendary programmers have done to become better programmers. As a kid, Bill Gates famously dumpster-dove to find the private source code for the TOPS-10 operating system. Being able to see inside a programmer’s mind as he unraveled a complex knot can teach us how to solve our own problems. It’s how any new breakthrough materializes fast. When Brendan Eich, for instance, set out to create JavaScript in 10 days, he needed a strong foundational knowledge of existing languages to build upon. He pulled structs from the C language, patterns from Smalltalk, and the symmetry between data and code offered by LISP.

Even more importantly, history uncovers valuable lessons from failures. If all we have is a highlight reel, future generations might arrive at false conclusions about the expectation of success. They won’t be able to see the false starts that led up to the “aha!” moment.

Likewise, when William Shockley, John Bardeen, and Walter Brattain attempted to execute on the theory of a solid-state replacement for the vacuum tube, they went through months of trial and error. There should have been a change in current when placing a strong electrical field next to a semiconductor slab, but it didn’t happen; a shielding effect rendered the electrons immobile. They tried several techniques, like shining light and applying drops of water and specks of wax, to induce the charge. Eventually, they successfully amplified the current and introduced the transistor to the world. But learning from the team’s initial failures can teach us more about what exactly transistors are and what they aren’t.

We’ve only just started learning what’s possible in the computer revolution. The sooner we document and analyze these formative years, the brighter our future will be. Acclaimed computer scientists like Shockley, Brendan Eich, and Donald Knuth should be as well known as Albert Einstein, Isaac Newton, and René Descartes. This is not to say that historians’ current efforts in contextualizing the impact of computing have been wasted. Undoubtedly, the new field is infiltrating every industry, and this analysis is important. But, for the sake of future computer scientists, we need both breadth and depth in computing history. Computer scientists and trained historians must work together to leave a holistic imprint of today’s pivotal advancements, to truly fill the pages of computer science history for future generations of hackers.

 

Have you noticed that there aren’t many strong technical historical analyses of computing and computer science? How can we fill this void?

 

 

 

Can Programmers Change the Government?

There will come a day when “FedEx Sends Package via Space” will blow up our augmented reality lenses. But which entity will achieve such a feat: Private or public?

Which would you bet on: SpaceX or NASA?

Over 50 years ago, NASA was cemented across 10 distributed centers nationwide. Compared to nimble startups like SpaceX, where engineering, design, and development are centralized, building rockets at NASA is relatively inefficient. SpaceX invites the exciting possibility of low-cost access to space, and it’s why some NASA employees are actually rooting for SpaceX to succeed.

“In theory, NASA could then turn its attention to riskier space exploration goals where there may be no adequate return on investment for a private company,” says NASA’s Jason Hutt.

Notorious for its long list of regulations and layers of management, NASA, along with hundreds of other US federal agencies, is trapped by antiquated processes that are too massive to change. If the goal of the government is to better serve the public, the relationship between public and private has to be symbiotic.

From the department of motor vehicles to human services, government agencies have been oil to innovation’s water. But over the last few years, there’s been a stronger demand for innovation in government. Never has there been a bigger emphasis on improving technology than there is in the White House today. Whether by partnering with stealthy startups or establishing strong in-house engineering teams, this could be the transformational period you read about in history books. Software engineers could be the cornerstone of revolutionizing the government for the greater good.

The Demand for Change Now

It’s crazy to think that landing a government job was once considered a massive badge of honor. The stability…the longevity…the pension. These ideals are relics of the past. With the rise of the technology revolution, new grads’ interest in working for the government has unsurprisingly declined over the past four years. Of roughly 46,000 undergraduates polled in late 2013 and early 2014, just 2.4% of engineering students listed government employers as their ideal places to work.

But, new grads should consider this outlook:

“The greatest threat to humanity is government’s incompetency.” – Sam Altman, president of YCombinator at 2015 TechCrunch Disrupt.

In these 8 words, Altman declares a significant predicament that’s surfacing in the halls of tech conferences across the country (here, here and here). People in Silicon Valley are disrupting whole industries at uncharted rates, building billion-dollar companies in just a few years. Meanwhile, the government trails decades behind. This begs the question: how much is technology helping the general public?

The biggest catalyst for this realization was undoubtedly the Healthcare.gov disaster of 2013. When the Affordable Care Act’s federal insurance exchange website went live, it virtually collapsed as millions of Americans attempted to sign up. The whole website went down and came back with frustrating glitches and ridiculous load times. It highlighted the truckloads of government dollars wasted on malfunctioning technology and signified the incompetency of the government.

[Chart: the cost of Healthcare.gov]

sources: [1], [2], [3], [4]

Granted, the complexity of its massive infrastructure that handles sensitive, regulated information is incomparable to building an average website. Still, the lack of efficiency is undeniable.

But this mishap propelled progress. Since this embarrassing negligence, the White House has not only publicly recognized the problem but also:

  • Appointed the first-ever Chief Data Scientist, DJ Patil, to help make data more accessible, with a focus on healthcare. He’s spearheaded Data.gov, which has made 130,000 datasets available to the public. Notably, he’s pioneering something called “precision medicine,” which looks at personal health data to predict diseases.

Patil is often pressed about why he left his cushy throne in Silicon Valley, where he had influential positions at eBay and LinkedIn:

“There’s something exceptional about meeting at the government and the White House. You’re sitting where WWI and WWII and the Marshall Plan were made and implemented. Every single moment of every day, you’re creating history when you’re there. You have the ability to change the world. There is no other meeting…this is the meeting where decisions are made,” Patil says.

  • Made the Obama Innovation Fellowship permanent. In this program, about a hundred “entrepreneurs-in-residence,” or technologists, from Silicon Valley observe and infuse innovation into the public sector.
  • There’s now a “Department of Better Technology,” established in 2013 and founded by Clay Johnson (who ran Obama’s website). So far, they’ve produced Screendoor, an online form app, among several other products, for nonprofit and government purposes.
  • There’s now a Social & Behavioral Sciences Team that’s making tweaks in governmental processes using A/B testing methods to boost efficiency.

It’s amazing that even simple changes, like replacing reminder mail with text messages, have the potential to change thousands of people’s lives. For instance, researchers sent one group of randomly chosen students a text message reminding them of the next steps for college enrollment. “A full 68 percent subsequently enrolled in college, compared with 65 percent among those who didn’t get any reminders,” the New York Times reports. So, a mere series of text messages, costing just $7 per student, helped put more students on the path to college.

There’s so much low-hanging fruit that can make a significant difference at low-cost.

Another example is the bane of every government employee and citizen: paperwork. It takes an average of 2 weeks to process a single form, and 9 billion hours are spent processing forms each year. Talk about government waste. These figures come from SeamlessDocs, a startup that aims to virtualize forms.

SeamlessDocs is a reason to be optimistic. It’s a product of the GovTech Fund, the very first venture capital firm exclusively for government startups. With $23 million in funding, startups like SeamlessDocs are saving millions by digitizing government forms.

“In the next five to 10 years, you’re going to see a wave of capital coming to the space,” managing partner Ron Bouganim said. “From the Govtech perspective, we’re building an ecosystem.”

The Irony of Archaic Government Software

As you can see, there are glimmers of light shining down on the darkness of archaic governmental inefficiencies. Startups can get US government contracts, like SpaceX did, but there’s a lot of ambiguity and frustration in even applying.

If you want to carry out a contract for the government, behold…here’s the screen you’d have to navigate:

[Screenshot: Sam.gov]

Sam.gov is both the symptom and cause of the sluggishness of innovation in the US government.

Johnson points out that it has cost $200 million so far to build, which is hard to believe considering it’s reminiscent of the days of dial-up Internet.

It all comes down to the way the government hires programmers to create this software. It’s ironic: if you want to come in and build software for the government, you have to use the government’s existing crappy software. It’s a self-perpetuating cycle of anti-innovation. Anyone with the drive and ambition to fix these sites would have no patience to navigate the initial website. Johnson gives us the bitter taste:

“They’d issue a request for proposal 30 pages long, which requires you to register on Sam.gov. You have to guarantee you’re not a terrorist. California makes you guarantee that you don’t own any slaves. The first field at Sam.gov is ‘what’s your Duns number?’ It’s a proprietary number owned by Dun & Bradstreet. You have to apply for a Duns number, which takes 1-2 business days.”

And this is just the first question.

“If you’re a young startup company and you make websites, and you want to make a government website, you have to come to this website. This regulatory environment is so huge and requires a real skill to understand that the people who win the contracts are people who often times understand the regulations the best–not the people who understand the technology the best,” Johnson says.

Johnson goes on to make a fair point: it’s not necessarily that the companies that built Healthcare.gov and other government technology do shoddy work; it’s that the frustrating environment makes it impossible to do your best work. There are reports of Johnson’s company, the Department of Better Technology, aiming to fix the bidding process with Procure.io, but that URL has since redirected to Screendoor, another online form product.

The New Frontier Must Be a Tandem Effort

While the administration has certainly taken great strides with the appointment of Silicon Valley veterans since the Healthcare.gov gaffe of 2013, the root of the problem is clear: the barrier between projects or job opportunities and the best software engineers is massive, to the point of absurdity.

The government needs to revamp the way they hire programmers to build software.

Likewise, software engineers should change their vantage point and see the government not as a sluggish beast but as an opportunity for true change. It’ll take both ambitious software engineers with the patience to break through the system and a more progressive government willing to loosen the friction between innovation and public service to create a better life for the average person. The most forward-thinking tech moguls, like Elon Musk and Jeff Bezos, dreamt of space travel as kids. It’s why they’re devoting a slice of their fortunes to working in tandem with NASA to send tourists to Mars.

What if this mentality permeated government agencies? What if we had innovative startups competing for government partnerships to fix the DMV, food stamps, and services for the homeless? It might not be as glamorous as space exploration, but it could improve the lives of millions on Earth.

 

 

Tech’s Loophole in ‘Years of Experience’

This article originally appeared in Forbes.


Time is arbitrary, a relative school of thought. Ancient Roman civilizations looked at sun and moon cycles and decided that just about 365 days would make up one year. Newton said time is absolute; Einstein later theorized its relativity. The existence of multiple scientific notions of time is itself proof that time is not real.

The quantification of time was conventionalized centuries ago. Time is not only imperfect but also permutable. The Daylight Saving Act of 1976 exemplifies this perfectly: Benjamin Franklin declared that we should simply move the first hour of light to the evening during the fall to boost productivity. With the mere passing of a law, Americans altered their perception of time forever.

“Time and space are modes by which we think and not conditions in which we live.”

— Albert Einstein.

Time is not real, and yet the number of years in a particular job or skill often determines your job opportunities and, potentially, your entire career trajectory. When you really scrutinize why organizations filter by years of experience in job descriptions, and what this reveals, it’s hard to believe that it’s still a primary hiring factor in tech today.

Given the widespread concern about a skills gap, the imperfect illusion of time spent on a skill shouldn’t be a top factor in tech-sector hiring. When it comes to solid experience, it’s not the years that predict great performance but, as Vinod Khosla advised at TechCrunch Disrupt, “the rate of learning.”

The Loophole is Real

Speculation about ageism is well-documented in Silicon Valley’s technology sector. Some point to the anecdotal “frat bro culture” of startups (here), while others point to the pattern of successful young co-founders (here). While one-off age discrimination lawsuits are undoubtedly overblown in the media (like those against Google and Twitter), the median age in tech is a stat that’s hard to ignore. PayScale studied the median age of tech workers and found that just 6 of the 32 companies it looked at had a median age greater than 35 years old.

[Chart: median employee age at tech companies (PayScale)]

InformationWeek surveyed tech workers and found that 70% said they’ve witnessed age discrimination. This has even caught the attention of the EEOC, the government agency in charge of enforcing discrimination laws:

“Some of our offices have made it a priority to look at age discrimination in the tech industry,” says EEOC senior counsel Cathy Ventrell-Monsees.

If hiring based on age is illegal, why is the culture so homogenous when it comes to age range? The loophole is in the years of experience. Regardless of your age, if you don’t meet the requirement, your resume might never reach the hands of hiring managers in tech. It’s why lawyers, for instance, have pointed out that requiring “digital natives” teeters on the edge of age discrimination: it implies that you have to have been born in the digital age, Fortune reports.


It’s a flawed process. Hiring managers typically write down a ballpark minimum, say, 3-5 years of experience in a particular skill. Recruiters usually run with this range and subscribe to the idea that since Scott has 20 years in COBOL, he probably doesn’t know cutting-edge tech like Swift or blockchain. And it goes both ways: if John spent 5 years working on Java, he’s considered more qualified than Jill, who only spent 1 year learning on the side. And so this arbitrary notion of time spent in a job creates a hard-to-prove loophole for filtering people by years instead of pure skill.

Today ‘years of experience’ is one of the top filters that companies use to cut through high volumes of prospects. Just look at the Premium features of LinkedIn, the recruiting tool most used by hiring managers and recruiters: ‘years of experience’ is among the first filters recruiters see in the left-hand module.


Some recruiters have alternative techniques. A blogger dubbed “Boolean Blackbelt” uses this boolean search to find people who graduated in 2004, for instance:

site:linkedin.com -dir (java | j2ee) -recruiter (engineer | consultant | programmer | developer) "location * Greater Atlanta" ("BA" | "B.A." | "BS" | "B.S." | "Bachelor" | "Bachelors") * * * * * * 2004

To top it off, in McGraw-Hill textbooks like “Start Your Own Business,” the author actually capitalizes “PREVIOUS EXPERIENCE” in the guidelines on how to write a great job description. Really, why all of this emphasis on years of previous experience?

The Flaw is Inadvertently Stuck in the Process

A common perception is that because technology moves fast, younger people are stereotyped as more adaptable. But many reports carry an accusatory connotation against companies. For instance, when Google and Twitter were under fire for age discrimination lawsuits earlier this year, repeated reports resurfaced the infamous Mark Zuckerberg quote from 2007: “Young people are smarter.” With each age discrimination lawsuit in tech, that soundbite gets another jolt of life.

But it’s generally not deliberate. The root of this homogenous age range is twofold. First, classification based on stereotypes (recall the Scott example) is human nature, as psychologist Robert B. Cialdini explains in Influence.

“We can’t be expected to recognize and analyze all the aspects in each person, event, and situation we encounter in even one day. We haven’t the time, energy, or capacity for it. Instead, we must very often use our stereotypes, our rules of thumb to classify things according to a few key features and then to respond mindlessly when one or another of these trigger features is present.” – Cialdini

For instance, one common stereotype is that older workers have slower cognitive abilities than the young. It’s just not true. Scientific evidence reveals that older workers aren’t necessarily slower cognitively, nor are they any less creative.

Second, this long-standing qualification has been used to narrow down job applicants for more than a century. According to one 1901 journal, engineer Frederick W. Taylor, one of America’s first management consultants, first crystallized the idea of analyzing jobs to write better, standardized job descriptions in Shop Management. He created a list of the most common attributes for each profession in order to find similar people who’d succeed in the role. Sounds pretty logical for the 1900s.

By the end of 1917, most managers in the country had adopted this boilerplate, and it just so happens that previous experience was on the checklist. Until the Age Discrimination in Employment Act of 1967, employers would actually specify age in job descriptions.

Changing the Syntax, Closing the Loophole

It’s not length of time we should be looking for, but the rate of learning and the substance of the achievements candidates are proud of. The loophole is a matter of poor syntax, an oversight set in steel that has withstood the passage of time and innovation.

Opponents might argue that, for some senior level jobs, you need specific experience that can only come with time. You need to be able to see projects through; see the ramifications of your decisions.

While these are all valid points, it’s still not the correct syntax. Again, the maturity and seniority that come with time can be evident in what you’ve achieved and learned. Some people are ahead of their years, while others remain stunted. Years in a job – alone – aren’t enough to prove that you learned something in those years. Dan Parker, who runs the Seattle coding boot camp Code Fellows, puts it well: “Regretfully, 10 years of experience can also mean 1 year of experience done 10 times.”


Take Kari Tarr, for instance, who was dead-set on switching careers from finance to engineering at Airbnb. She put in late nights of hard work to learn coding on the side. She wrangled friends to use CodePair to watch her code and help her progress. But when she first approached the high-growth startup’s engineering team to express her interest, they were all extremely hesitant. It’s understandable: Airbnb is a billion-dollar startup with an extremely high bar for hiring. How could they let someone – even one of their own – come aboard with no experience?

“They were worried they might have to spend too much time showing me how to carry out specific tasks,” Tarr says. “I expected them to be worried.”

But the challenge was on. She rolled up her sleeves and made it a targeted goal to prove herself to the Airbnb engineering team. Tarr did this by strategically choosing projects and keeping an eye out for opportunity.

“If something needed to be automated, I’d volunteer for it. I looked for opportunities that would force me to get exposure to our code base,” Tarr says.

And the engineering team watched it all happen. A year later, they could see her progress and were convinced she’d be a great addition to the team. It didn’t matter whether or not she fell into the right bucket of “years of experience.” Her rate of learning was through the roof.

Plus, the ephemeral nature of technology means that the skills you learned last month could become irrelevant tomorrow. Compared to other industries, software engineers generally don’t have as much experience because the field is relatively new. StackOverflow, for instance, pointed out that:

40% of doctors have at least 10 years of professional experience in the US while only 25% of developers have at least 10 years of experience.

The most in-demand programming languages today, according to IEEE, were created just 20-30 years ago. This list doesn’t include newer cutting-edge programming favorites, like Ruby on Rails (10 years old), Go (6 years old) and Swift (15 months old).


And it can be cyclical, too. COBOL programmers are stereotyped as outdated, but there’s a strong case that COBOL will make a comeback within the next few years. Software engineers are serial learners. One study finds that it generally takes professional programmers about six months to pick up a new tool. If you filter by skill and focus on how fast people learn, you’ll open up a flood of talented engineers.

In turn, David Heinemeier Hansson, creator of Ruby on Rails, suggests that engineering candidates should actually view qualifications like “3-5 years of experience in Ruby” as a red flag.

It’s really time we reevaluated how we measure experience. Years are not a measure of knowledge, because everyone learns at a different rate. Rather than succumbing to overgeneralizations about entire generations, and risking age discrimination, a better filter is the rate of learning, focused purely on skill.

Will you eliminate “Years of Experience” as a primary filter to narrow the field?
Tell us if you agree or disagree in the comments below.

 

 


 

Girls Who Code: You Can’t Live Without Female Software Engineers

We all know the sobering stats: Only 16% of the US’s 3.1 million software engineers are women.

What we don’t emphasize often enough is that even though women make up a dismal fraction of software engineers, their influence is extraordinarily pervasive. They’re pioneering many cutting-edge technologies. SpaceX’s Amanda Stiles, for instance, doesn’t let naysayers keep her from simulating space exploration for F9, Dragon and crew operations. Meanwhile, Microsoft’s Dona Sarkar is busy helping to build the first untethered hologram computer. And how can we not mention Mary Lou Jepsen, who is leading the next frontier of virtual reality at Facebook-owned Oculus?

The influence of female software engineers is felt all the time. In fact, we bet you can’t even go a day–heck, a few hours–without feeling the influence of female software engineers. From the very moment you wake up to the time you go to sleep, there’s a resilient female engineer who helped create the daily apps and technologies you touch.

Here’s an illustration, showcasing women behind popular, powerful technology we use hour by hour:

[Infographic: You Can’t Live Without Female Engineers]

When you take a look at this powerful infographic, it’s actually really disturbing to see sensationalized headlines like “Why are there so few female leaders in tech?” amplified in the media. There seems to be a new article every day that claims to pinpoint the core problem behind the scarcity of women in tech. From misguided gender stereotypes to unconscious bias during interviews, there are myriad reasons why there are so few women in engineering. But it’s a nuanced problem, and discussing the many reasons for it, alone, can be futile.

Our dialogue should also emphasize illuminating kickass women achieving amazing things. Female software engineers have immeasurable influence on the world today, prevailing over the lack of diversity and the boys’-club hurdles that stand in their way. As Sabrina Farmer, a Google engineering manager who heads up Gmail, says:

“I’m not Superwoman, and my job is hard, but the pay is good and the perks are incredible, I have a great career and family, and the world has changed because of technology — and I’ve been a part of it.”

If all we ever talk about are the sobering percentages of women in tech, new generations of young girls might grow up believing that women aren’t as influential in building the technology we depend on. In reality, with or without public recognition, women have always been the cornerstone of modern technology. Historically speaking, coding runs deep in women’s blood.

Women Have Always Been the Underdog Heroines of Programming

The first programmers were women. Those familiar with female influence in tech most often point to two names: Grace Hopper and Ada Lovelace. Both Lovelace and Hopper brought a unique, essential perspective that helped grow and shape modern computing, offering diverse viewpoints that encourage open communication, empathy and analytical thinking. In the 1800s, Lovelace worked alongside Charles Babbage, the first person to conceptualize a programmable computer. One expert analyzed the letters exchanged between Lovelace and Babbage and found that the two had very “different qualities of mind”: whereas Babbage focused on the number-crunching possibilities of his new designs, Lovelace went beyond number-crunching to see the possibilities of wider applications.

It was Lovelace, for instance, who suggested that the Analytical Engine could be used for more than just numbers. Just look how beautifully she describes the revolutionary punch card mechanism:

“We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.”

About a century later, after Pearl Harbor, the Navy recruited Hopper to build computers. She was a leader in furthering innovation in the young field of computing. As author Kurt W. Beyer points out, her pivotal mark was her advocacy of open-source ideology and interoperable computing languages, versus closed-source protection by intellectual property law:

“‘It is our earnest plea,’ she wrote to ACM, ‘that we receive comments both mild and violent, suggestions and criticisms, as soon as possible.’ By broadening participation during the development phase, Hopper increased the odds that the computing community would freely adopt the resultant language.”

Today Hopper is lovingly referred to as the mother of COBOL, having been the first to push for a programming language that read like English instead of computer jargon.

But these two programming heroines are just the tip of the iceberg.

Historian Nathan Ensmenger explains that in the 1940s, most people thought software programming was “women’s work,” consisting of plugging in numbers and shifting switches (like secretarial filing). Hardware engineering was considered more manly, according to the Smithsonian. But the field was so new that people didn’t realize these women would rise to the occasion and become the cornerstone of computing, even becoming experts on how to improve functionality and solve tough programming tasks.


As the war went on, the demand for smart mathematicians to calculate weapon trajectories using computers grew. The US military recruited women as “Math Rosies” for ballistic research. Some women even went on to work on bigger machines, like the ENIAC.

Jean Jennings Bartik was among the six women who, mentored by John Mauchly, helped bring the ENIAC to life. Men may have built the hardware of such machines, but it was women like Bartik who laboriously debugged every vacuum tube and learned how to make the massive machine work – sans books or even chairs. And because the mission was secret, there wasn’t much opportunity for public recognition; it was so secretive that the women couldn’t even see the computer they were working on until their security clearances came through. Still, even after the ENIAC was finally announced:

“They all went out to dinner at the announcement,” she says in a talk at a computer history museum. “We weren’t invited and there we were. People never recognized, they never acted as though we knew what we were doing. I mean, we were in a lot of pictures.”

Interestingly enough, here’s how the renowned Grace Hopper pitched programming to women in Cosmo magazine in the 1960s:

“[Programming is] just like planning a dinner. You have to plan ahead and schedule everything so that it’s ready when you need it…. Women are ‘naturals’ at computer programming.”

It makes sense: she wanted to appeal to the maternal side of women, since that was most valued at the time. But, in reality, Hopper was aiming to fill a huge demand for smart women with an aptitude for math and numbers to work through complex problems. Unfortunately, these female programmers, the “Computer Girls,” never really got the credit they deserved.

It’s Hard to Be What You Can’t See

Whether or not the world recognizes it, smart, ambitious women have always been and always will be the cornerstone of computing. From its inception to its new frontiers, software requires diversity from its engineers. Among their immeasurable impacts, female software engineers help open up lines of communication, broaden viewpoints and bring a level of creativity and empathy that’s essential to innovation. Let’s stop focusing only on the bleak diversity numbers, and start highlighting the empowering stories of triumphant women who pioneered innovation, not just for the recognition, but to help shape our tech-driven world today. As our very eloquent HackerRank Ambassador Anjan Kaur says:

“Software engineering is too interesting to leave just to men.”

 

Want to take action to help bring more women to engineering? Get involved.

Women’s Cup is an all-women online hackathon happening October 10th. Developers will solve coding problems to challenge themselves, develop their skills and win prizes! Companies that sponsor the event will obtain a list of the top female engineers at the end of the contest.

Join the cause. Join Women’s Cup now.

 

 

 


Legendary Productivity And The Fear Of Modern Programming [TechCrunch]

This article originally appeared on TechCrunch.


 

JavaScript master Douglas Crockford once said that software is the most complex thing that humans have ever created. It’s made up of intricate bits and bytes pieced together like virtual puzzles. These unfathomable calculations help us achieve extraordinary feats, like plotting routes for human beings to the uncharted craters of Mars. By the very nature of the burgeoning computer science discipline, “we’re dealing with things that are on the edge of what humans can handle and more complicated than ever before,” said renowned computer scientist Donald Knuth.

While no one programming legend can possibly accomplish any big feat solo, there are programmers worthy of fame for their supreme productivity. Every so often, the leaders behind revolutionary new tools make an explosion in the field that reverberates across generations of new programmers.

But what’s even more interesting is that some of the highest-achieving programmers — who can make sense of such unfathomable complexity — can’t foresee a lucidly bright future of programming. Several accomplished computer scientists share a grave concern about the shift toward a more fragmented web. Tools that act as layers, like frameworks and packages, are created to help programmers be more productive, but some experts fear they’ll actually have the opposite impact long-term.

If the foundation of the modern world is built on software, then deconstructing the toolkit of today’s software leaders can help us not only become better programmers, but develop a better future. Contrary to popular belief, greatness isn’t exclusive to unreal legends. Culture critic Maria Popova puts it most eloquently when she says, “Greatness is consistency driven by a deep love of the work.”

After researching stories on and conducting in-depth interviews about seven programming pioneers, from computer scientist Donald Knuth to Linux’s Linus Torvalds, we uncovered productivity patterns common to achieving greatness, as well as pitfalls to steer clear of. There’s never been a widely used programming tool that was created by just one lone wolf in a cave. Sure, Jeff Dean is the trailblazer of the famed distributed computing infrastructure upon which Google towers. Peter Norvig may be immediately associated with JScheme. David Heinemeier Hansson’s pride and joy is the Ruby on Rails framework. But each of these creators had support for their groundbreaking inventions.

Teamwork is the foundation of an empire that lone wolves simply can’t sustain. It’s why Google, home of world-renowned engineers, doesn’t tolerate lone wolves on campus. It’s the antithesis of “Googliness,” and software development in general, for two core reasons.

First, the mere proximity to other engineers fuels greatness. When Rob Pike worked at Bell Labs on the Unix team, he recalls fond memories of hovering around clunky minicomputers and terminals in the Unix Room. “The buzz was palpable; the education unparalleled,” he said. “The Unix Room may be the greatest cultural reason for the success of Unix as a technology.”

Folks like Ken Thompson and Dennis Ritchie (the creators of Unix and the C language) would code, sip coffee, exchange ideas and just hang out in the Unix Room. It was this necessity of convening in a physical room that helped turn Unix into what it is today. Since the proliferation of PCs, however, physical proximity is no longer a given in modern programming. Going out of your way to meet with smart engineers remains a timeless essential of greatness.

Just ask Jeff Dean, the famed Googler who is often referred to as the Chuck Norris of the Internet. As the 20th Googler, Dean has a laundry list of impressive achievements, including spearheading the design and implementation of the advertising serving system. Dean pushed limits by achieving great heights in the unfamiliar domain of deep learning, but he couldn’t have done it without proactively sharing a collective total of 20,000 cappuccinos with his colleagues.

“I didn’t know much about neural networks, but I did know a lot about distributed systems, and I just went up to people in the kitchen or wherever and talked to them,” Dean told Slate. “You find you can learn really quickly and solve a lot of big problems just by talking to other experts and working together.”

Second, every great coder, however pristine their track record, must check in their code for peer review. Pivotal Labs, the company behind the software-scaling success of Twitter, Groupon and a dozen other high-growth Silicon Valley startups, requires that any coder be free to look at any code. It’s simple: if your code is too dependent on one person, your business is susceptible to dire wounds.

Even the great Guido van Rossum checks in his Python code, and Thompson checks in C code. Linus Torvalds, author of the biggest collaborative open source project in history, says he actually judges coders not by their ability to code, but by how they react to other engineers’ code. As a bonus, reading others’ code also can help you become a better coder. You often can see things in the source code that are left unsaid in the office.

Knuth would wholeheartedly agree. “The more you learn to read other people’s stuff,” he said, “the more able you are to invent your own in the future, it seems to me.”

When Collaboration Goes Awry

Collaboration is generally a solid learning tool, but it can be catastrophic when injected at the wrong time. Hansson has seen too many open source software (OSS) projects fall victim to premature collaboration. That’s why, for the first year and a half of Ruby on Rails’ life, Hansson alone held commit rights to the framework. And it took another four and a half years before he felt comfortable collaborating with another framework’s team to produce Rails 3. Even then, it wasn’t easy for him.

“I’ve seen plenty of open source projects being pulled in a thousand different ways because you allow collaboration to happen too early before the culture and vision is established enough so that you can invite enough people safely into the project without them spoiling it,” he says in a Developer’s Life podcast.

Plus, the truth of the matter is that many programmers simply enjoy blasting through code alone. Solo coding is faster in the short term. You don’t have to worry about communication mishaps. And things are generally more uniform. Engineer Ben Collins-Sussman once asked a room full of programmers:

  • How many of you work solo? …Crickets…

  • How many of you LIKE to work solo? Nervous laughter and raised hands spread across the room.

Collaborating is a necessary evil for many of the greats, like Norvig, who is now Director of Research at Google. Norvig observes that too much collaboration is also not effective. “If you’ve got two good programmers,” he says, “it’s better for them to work independently and debug each other’s work than to say we’ll take a 50% hit just for that second set of eyes. 10% of the time it’s good to sit down and have that shared understanding. But I think most of the time, you’re not going to be as effective.”

When collaborating on a project that lacks a clear vision, unlike Hansson’s, it’s good to figure out what the problem is together. Once you have an idea, great programmers divvy up the work, burn through the code alone and sync up during review. The best collaboration happens by creating a solid feedback loop, where you can catch one another’s errors before you’re too far into the project.

John Carmack, known for creating Doom, was also very wary of adding more programmers to the process of achieving his vision of great games. There was a time when Carmack wrote the majority of the basic program himself, he says:

“Now, we’ve got lots of situations where, if something is not right, it could be like, ‘Oh, that’s Jan Paul’s code, or Jim’s code, or Robert’s code.’ It’s not so much a case where one person can just go in and immediately diagnose and fix things. So, there is a level of inefficiency.”

Again, there’s a limit to what a lone wolf can do. Doom 3 required a lot more features, which meant more programmers. Carmack found this to be a double-edged sword, but manageable, he says, and necessary for achieving a grander vision.

Jack Of All Trades And Master Of One

Most — if not all — of the seven legendary programmers had at least one thing in common: They mastered one well-defined domain, primarily using one language to carry out their work. Jonathon D. Tang pointed this out in a recent Hacker News comment, and it rings true. Knowing your language inside out is key to optimal performance. So, “read the reference, read books that not only show you the mechanics of the language but also debugging and testing,” Norvig advises. Being able to memorize the common calls for your language of choice can help expedite productivity.

But it can get really comfortable (or boring) really fast if you stick to just one set of tools and one domain. So, after mastering one definitive domain, most of the greats have experience switching contexts, which helps them open their minds and see things in a different way.

For instance, John Carmack switched from creating video games to virtual reality at Oculus. Likewise, Rob Pike moved from Plan 9 to Go. Andy Hertzfeld transitioned from writing the original Mac system software in assembly to writing the Google+ Circle Editor in JavaScript. “But rarely do the greats ever try to juggle multiple platforms at the same time,” Tang said in a follow-up email. Delving into new domains after mastering one helps you see things on different levels.

Visualizing The Program In Your Brain

Nearly every programming legend points to the importance of visualizing solutions and being able to hold programs in your head. There’s a lot of productivity lost when you dive into code or start testing without first developing a mental model of the solution.

Carmack’s ability to hold gaming concepts in his head is among the most remarkable, considering just how massively complex the virtual world of gaming is. “A video game’s program is actually more complex than that of space shuttles sent to the moon and back,” he said in an interview. Paul Miller helps put his work into context: to keep a game from breaking, an image must be rendered at least 30 times a second as light beams are manipulated to create a virtual world, which adds up to a trillion calculations per second. Meanwhile, Disney’s Pixar can take 10 hours to render a single frame. In video games, you’ve got just milliseconds to make an impact.

Given the extensiveness of video game programming, Carmack says, “being able to clearly keep a lot of aspects of a complex system visualized is valuable. Having a good feel for time and storage that are flexible enough to work over a range of 10 orders of magnitude is valuable.”

Interestingly enough, Knuth, Norvig, Dean, Pike, Torvalds, Thompson and Hansson have all at one point said they’re believers of having a strong mental model, focus and visualization. It’s all about the ability to see the solution before diving into the problem.

The best concrete example comes from Norvig. He was once tasked with writing a Sudoku-solver program. From the get-go, he knew from his AI knowledge that the combination of constraint propagation and recursive search would solve the problem. Meanwhile, another programmer tested all sorts of code on his blog but never really solved anything. It’s perfectly possible to write correct, tested code without correctly approaching the problem.
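
To make the contrast concrete, here is a minimal Python sketch of the idea Norvig describes, not his actual program: a digit is a candidate for a cell only if its row, column and 3x3 box don’t already use it (a simple form of constraint propagation), and the search recurses on the most constrained cell. A puzzle is assumed to be a list of 81 ints, with 0 marking a blank.

def candidates(grid, i):
    r, c = divmod(i, 9)
    used = {grid[r * 9 + k] for k in range(9)}            # digits in the row
    used |= {grid[k * 9 + c] for k in range(9)}           # digits in the column
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[(br + dr) * 9 + (bc + dc)]              # digits in the 3x3 box
             for dr in range(3) for dc in range(3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    empties = [i for i, v in enumerate(grid) if v == 0]
    if not empties:
        return grid                                       # no blanks left: solved
    i = min(empties, key=lambda j: len(candidates(grid, j)))  # most constrained cell
    for d in candidates(grid, i):
        grid[i] = d
        if solve(grid):                                   # recursive search
            return grid
    grid[i] = 0                                           # dead end: undo and backtrack
    return None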

Herein lies the key to approaching the problem correctly. “I think it’s useful to imagine the solution, to see if it’s going to work,” said Norvig. “It’s useful to see if it feels comfortable.”

There’s Joy Inside The Black Box

In Coders at Work, Knuth expresses a major concern for the future of programming if young programmers are simply assembling parts without studying them. The neatly packaged boxes might be a good short-term solution for speed, but programmers will lack the grand visualization that’s necessary for true progress in programming. Plus, it’s just not as fun to copy/paste commands without knowing the fundamentals of why and how it’s happening.

“If the real case takes a certain time, t, then the complex case takes time 4t. But if you’re allowed to open the box, then you’ll only need three real matrix multiplications instead of four because there’s an identity that will describe the product of two complex matrices,” Knuth explains.
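
To unpack the box Knuth is pointing at: the naive product (a + bi)(c + di) = (ac − bd) + (ad + bc)i costs four real multiplications, but a classic identity, often credited to Gauss, gets it down to three, and the same trick applies entrywise to products of complex matrices. A quick illustrative sketch:

def complex_mul_3(a, b, c, d):
    # (a + bi) * (c + di) using 3 real multiplications instead of 4
    t1 = c * (a + b)
    t2 = a * (d - c)
    t3 = b * (c + d)
    return t1 - t3, t1 + t2   # (real part, imaginary part)

print(complex_mul_3(2, 3, 4, 5))  # (-7, 22), matching (2+3j) * (4+5j)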

In fact, Knuth’s most famous work, The Art of Computer Programming, was written to help programmers learn how and why data structures work. “So my book sort of opened people’s eyes: ‘Oh my gosh, I can understand this and adapt it so I can have elements that are in two lists at once. I can change the data structure.’ It became something that could be mainstream instead of just enclosed in these packages.”

Comparing it to mathematics: it wouldn’t be fun to simply be handed the right theorem for the right problem each time. The joy comes from trying various theorems, visualizing what might work and getting that thrill when it finally does. Norvig shares this concern and stresses that programmers need to ask more questions before copy/pasting black boxes.

Sure, the code might have worked once, but what are the failure cases? Is it consistent? Are there other scenarios for expanding functionality? How can you understand it better? Simply relying on prepackaged boxes to carry out a command can be tedious and mundane. It’s more fun to problem-solve, think critically and learn the mechanisms of why code strung together is performing in a certain way.

Thompson is downright fearful of modern programming because it’s made up of layers upon layers upon layers. “It confuses me to read a program which you must read top-down. It says ‘do something,’ and you go find ‘something’ and it says ‘do something else’ and it goes back to the top maybe. And nothing gets done. I can’t keep it in my mind — I can’t understand it.”

While frameworks, APIs and structures might make programmers feel more productive, true productivity comes from unpacking the containers and truly understanding what’s inside. Only then can you build upon your knowledge and continue to push the limits of what human beings can achieve.

The greatest engineers who have brought us this far in software grew up in an entirely different era of computing — packed inside rooms filled with terminals, carrying mental models of new algorithm-based programs. Newer generations of programmers are born into a more fragmented world. The programs today are far too large to carry in our minds. Plugging in black boxes can get you to a certain point — very quickly — but the greatest programmers will always be bored with redundancy. True greatness comes with consistent drive to seek new problems with like-minded programmers, be able to see the floor before the roof and stay curious about what’s inside the black box.

 

 


Enjoyed this piece? Subscribe to our free newsletter to get new articles in your inbox. 

How Amazon Web Services Surged Out of Nowhere

Few people–if any–saw it coming. Even engineers who helped build the omnipresent Cloud that is Amazon Web Services (AWS) are surprised by its goliath success today.

“You never know how big something will be while you’re working on it,” Christopher Brown, an early AWS engineer, told Business Insider.

When Amazon CEO Jeff Bezos first proposed selling infrastructure-as-a-service (IaaS), his board of directors raised an eyebrow. It’s understandable. For the general population, “Cloud-based technology platform” isn’t exactly the first thing that comes to mind when you think of Amazon; AWS is far removed from the company’s core function as an e-commerce site going 21 years strong. But today AWS zooms so far ahead in the market that its competitors are dwarfed into tiny specks in its rear-view mirror. Even the execs behind its growth strategy say they’re surprised at the speed of AWS’s claim to fame. Here’s a matrix depiction of the market share by Gartner:

 

[Gartner market-share matrix for cloud infrastructure-as-a-service]

For several years, AWS has been a developer’s paradise, a platform where you can build and run software without having to set up your own servers. It’s grown so huge that even its IaaS competitors, like DigitalOcean, depend on it. Nearly 40% of traffic runs through its infrastructure, according to an estimate by Jarrod Levitan, Chief Cloud Officer at TriNimbus. A more concrete statistic from Deepfield Networks found that one-third of Internet users visit an AWS-supported website daily.

If AWS stopped working tomorrow, much of the Internet would dim. You wouldn’t be able to browse billion-dollar social networking sites, like Pinterest. Your Dropbox photos and files would be missing in action. You couldn’t stream Netflix. Forget scoring a great deal on Airbnb. Many of these billion-dollar startups might never have disrupted their respective industries without AWS to propel them quickly onto the scene. Even enterprises and the ever-paranoid government agencies, like the CIA, NASA, the Department of Defense and the Pentagon, now rely on AWS.

Not only is it massive, but its capabilities are astounding. Of its dozens of services, arguably among the most innovative is Amazon Kinesis, a real-time processing service for live events and streaming data. It can continuously capture and store terabytes of data each hour from hundreds of thousands of sources at once. It’s great, for instance, for helping advertising agencies make sense of live social chatter.
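
For a flavor of what that looks like from a developer’s seat, here is a minimal sketch using boto3, the AWS SDK for Python; the stream name and event fields are hypothetical placeholders, not part of any real deployment:

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event, stream="social-chatter"):
    # Kinesis routes each record to a shard based on its PartitionKey.
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event.get("user_id", "anonymous"),
    )

publish_event({"user_id": "u42", "text": "loving this ad campaign"})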


The team knew the initial concept was, at most, an interesting idea, but there were so many existing big players in the realm of data storage. How could Amazon, an online shopping site, compete? Having put his neck on the line in front of investors, Bezos must have known he was on to something…but no one could have predicted the scale to which AWS would escalate.

The Canny Rise of ‘the Infrastructure of the World’

There’s an old piece of folklore about how AWS got its legs. The story goes that because Amazon’s technology capacity requirements naturally shoot off the charts during the holiday season, it has excess capacity for the rest of the year. So, why not rent the excess storage to other companies that need it?

Amazon CTO Werner Vogels wonders: Why won’t this myth die?

It was never a matter of selling excess capacity. Actually, within two months after launch, AWS would have already burned through the excess Amazon.com capacity.

One thing that the fable gets right is that AWS was created out of Amazon’s own need to support high volumes of data. But each interface was created with the design and intent that outsiders would eventually use it. Former Amazonian Steve Yegge recalls the infamous mandate put forth by Bezos around 2002:

“He issued a mandate that was so out there, so huge and eye-bulgingly ponderous, that it made all of his other mandates look like unsolicited peer bonuses.”

Essentially, the mandate required all developers not only to expose data through service interfaces over the network, but also to build those interfaces to be “externalizable,” or good enough for outsiders to use. So, you see, it was always about deliberate business and innovation from the start, stemming from Bezos’ singular focus on obsessively serving customers. Amazon had been perfecting its infrastructure to meet massive needs for several years. Given its internal success, it only made sense for Bezos to take his team’s invaluable skill and profit from it.

But AWS wasn’t Bezos’ brainchild. 

The initial proposal paper (because Bezos doesn’t like slides) came from head of global infrastructure Chris Pinkham, who first envisioned a potential “Infrastructure of the World.”

He worked closely with website engineering manager Benjamin Black:

Chris was always pushing me to change the infrastructure, especially driving better abstraction and uniformity, essential for efficiently scaling. He wanted an all IP network instead of the mess of VLANs Amazon had at the time, so we designed it, built it, and worked with developers so their applications would work with it.

From their own experience, they knew that maintaining servers eats up the majority of a developer’s time and money. This push for universal infrastructure no doubt came from Bezos’ relentless prioritization of customers. Bezos famously asks every team to leave an empty seat in meetings for the customer, as a demonstration of just how customer-centric the company should be. In this case, the better and more comprehensive the infrastructure Amazon offered, the better support its customers would enjoy. And so AWS was born into a world starved of affordable, reliable and all-encompassing IaaS.


But Why is Amazon the Chosen One?

Amazon has a historical lead in an industry so new that few people truly understand it. Some call it a Coke without a Pepsi competitor. AWS’s biggest IaaS challengers are Microsoft Azure, Google Compute Engine and IBM SoftLayer. To supplement the Gartner matrix above, take a look at the difference in revenue last year: AWS is raking in more than its top competitors, including hybrid service providers, combined:

[Revenue comparison: AWS vs. its top competitors]

Its legendary lead is the result of three core factors. First, much of this growth parallels the rise of Cloud computing itself. You can see the steady increase in Cloud popularity and interest via Google Search Trends: it starts to rise around 2007, just one year after the release of Amazon’s two most widely acclaimed services, Elastic Compute Cloud (EC2), which provides on-demand compute capacity, and Simple Storage Service (S3), which stores massive amounts of data.

 


 

Second, Amazon’s EC2 and S3 were the very first widely accessible virtual infrastructure services, by a number of years; it’s part of cloud computing history. AWS had been working on solving a pain point common to all developers, and perfected the solution long before anyone else in the industry.
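
That pain point, durable storage without racking your own servers, still reads the same today. Here is a minimal sketch with boto3, the AWS SDK for Python; the bucket and file names are hypothetical:

import boto3

s3 = boto3.client("s3")

# One call stores a file durably; no servers to provision or maintain.
s3.upload_file("report.pdf", "my-example-bucket", "reports/report.pdf")

# Hand out a time-limited download link without exposing the whole bucket.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/report.pdf"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)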


Third, when the mindset started to shift from apprehension about the Cloud to a necessary gravitation toward it, existing players could only scramble to copy AWS services. Microsoft’s Windows Azure, for instance, started off as a Platform-as-a-Service (PaaS) but kept losing to AWS. Microsoft had to add IaaS capabilities to stay in the Cloud game, becoming a hybrid service rebranded as “Microsoft Azure.” Then Google followed suit and launched its own virtual server service, marking the start of the copycats.

As a result of these copycats, there’s been a significant price war, to developers’ benefit. Specifically, AWS has dropped its prices 49 times in eight years. But what’s even more impressive is that some scrambling competitors rely on AWS to provide IaaS to their own customer bases. We mentioned DigitalOcean’s dependence on AWS earlier in this piece.

But Target.com’s story is even more telling. It had been a customer of AWS since 2001, until it decided to go its own way and build its own servers. A little disheartened, but still amicable, AWS helped Target.com transfer its data. Then, after one popular promotion, the site crashed. It’s unsettling moments like these that light a fire under any infrastructure team to capitalize on AWS’s offerings for additional support.

Moving forward, AWS isn’t settling for dominance of the IaaS market alone. At the end of 2014, it announced a number of new PaaS tools to bolster the developer’s paradise, including CodeCommit, CodePipeline and CodeDeploy. David Bernstein, CEO of Cloud Strategy Partners, sees AWS becoming any developer’s comprehensive environment.

The Cost of Supreme Excellence

After Pinkham was appointed to execute AWS, it was time to get to work. AWS might have started with Pinkham and a handful of engineers in South Africa, where he was based, but it’s grown into a massive organization under Amazon’s umbrella. A former EC2 engineer says there are several teams dedicated to each offering (e.g., an S3 team).

A recent in-depth New York Times exposé reveals some awful things about Amazon’s culture in general. Referring to it as a “bruising workplace,” reporters cite unforgiving examples of a lack of empathy by Amazonian leaders who prioritize work above all. Some Amazonians have responded with claims of inaccuracies and of isolated incidents strung together to depict a hellish culture.

Whether or not these anecdotes are truly representative of Amazon’s culture, one thing is certain: it takes herculean dedication to achieve excellence, especially of AWS’s caliber. This kind of market leadership doesn’t come without a cost in sweat and tears. Judging a company’s culture is always subjective; what may be a joyous challenge to one might be a relentlessly poor work-life balance to another. Such high-performing teams can be self-selecting. After all, every single Amazonian chose to be there, and most likely has InMail invitations from top tech companies waiting to be read.

Granted, this is not a scientific measure, but most reviews of the AWS team are glowing. Several AWS engineers attest to the grueling workplace, yet still express gratification at their role in trailblazing the industry:

I worked for EC2 in Cape Town, South Africa. It was the best!!!! I can’t imagine ever finding a working environment as cool as what I had in my team. That said, there were on-call weeks where I was cursing the company and my job at 4am, being awake for the 3rd consecutive night. But the org is aware of problems like heavy on-call load. They started a program to reduce unnecessary tickets. I found AWS (at least in Cape Town) to be a really well-run place to work.

Bezos himself is very clear that this type of revolutionary environment may not be for everyone. In one meeting, a female engineer asked what Amazon would do to improve work-life balance. His response was clear as daylight and indicative of severely high expectations:

“The reason we are here is to get stuff done, that is the top priority,” he answered bluntly. “That is the DNA of Amazon. If you can’t excel and put everything into it, this might not be the place for you,” he said, according to the book on the history of Amazon.

For better or worse, this is the culture that led to the success of AWS today. Bezos was almost prophetic when he green-lit Pinkham and Black’s proposal to sell Amazon’s internal virtual services. By focusing his talented engineers’ energy on the infrastructure and laying out a rigid culture of excellence, Bezos has AWS “winning” the market share, for now.

Developers, what has been your experience with AWS? Is AWS too big to fail, or can competitors knock it off the #1 spot?

 


If you enjoyed reading this, please subscribe to our newsletter for an occasional update when we have something new to share!

Also, if you’d like 100 free AWS credits, you can enter the upcoming World Cup CodeSprint (an online hackathon) and complete one challenge.