Which Universities Have the Best Coders in the World?


With early college admissions under way for many universities around the country, we got to thinking: Which colleges have the best coders in the world?

While there are academic rankings, like the Top Computer Science Programs list by US News & World Report, there is no list that ranks colleges purely by their students’ ability to code. The criteria for the US News & World Report ranking, for instance, include the number of research papers produced, global research reputation and number of conferences. In fact, practical coding skills aren’t part of their methodology at all.

We decided to answer the question: Which universities have students who can roll up their sleeves and code?

At HackerRank, millions of developers, including hundreds of thousands of students, regularly solve coding challenges to improve their coding skills; companies also use HackerRank to assess developers’ coding skills and hire great developers. To figure out which colleges have the best coders, we hosted a major University Rankings Competition. Over 5,500 students from 126 schools around the world participated in the event.

According to our data, the three schools with the best coders in the world are:

  1. ITMO University | Russia
  2. Sun Yat-sen Memorial Middle School | China
  3. Ho Chi Minh City University of Science | Vietnam

The University of California, Berkeley was the #1 college in America, and came in fourth overall.

First, we defined what it means to be the “best” university. We thought it would be fairest to rank universities based on both the number of participants and their scores. Our engineering team created a formula* to rank each university, and each university had to have at least 10 participants to place on the leaderboard.

We narrowed the data to the top 50 colleges around the world:

[Table: the top 50 universities in the HackerRank University Competition]

Two Russian universities ranked #1 and #6, respectively, in the HackerRank University Competition, yet Russian universities aren’t listed among the top 50 universities in the traditional US News & World Report list. Similarly, we found that Vietnam’s Ho Chi Minh City University of Science has talented coders, but it didn’t rank high in the US News & World Report either.

This is not to say that the US News & World Report is misguided. Instead, the results of the HackerRank University Competition suggest that such traditional academic rankings aren’t the only source of the best coders in the world.

In fact, one acclaimed high school in China blew many universities out of the water. Sun Yat-sen Memorial Middle School (the equivalent of a US high school) placed 2nd, above UC Berkeley and IIT. One Chinese blog mentions that the school is actually bigger than most universities in China and includes a science museum.

Wentao Weng, who ranked #13 overall, says he first started learning how to code in what he calls “Junior 1,” when he was 11 years old. Wentao told us that computer science isn’t necessarily a standalone subject in grade school, but it’s well supported:

“It’s not one of the subjects; however, we can also try to become one of the best coders among high school students to [get admission] into a good university,” Weng says. “So our teacher supports us in [studying] computer science, and we take some time on it. And we have done many contests both online or offline [to] learn.”

He practices roughly 4 hours per day during school, but almost the whole day on weekends. His classmates have a similar work ethic. Cai Ziyi started coding at 12 years old. He says that most student programmers join the Olympiad in Informatics (OI) as an after-school hobby.

[Table: the US leaderboard]

Zeroing in on the top 25 universities in the US, eight schools cracked the top 50 overall. Many of the schools listed in our competition are in line with the US News & World Report, but we also surfaced a few underdogs. Schools that aren’t normally seen in academic rankings, like Ohio State, UC Irvine and North American University, all ranked in the top 50 worldwide in the HackerRank University Competition.

While traditional academic rankings, like the US News & World Report, are one indicator of the quality of education, they’re not the only place to find great coders. Great coders can come from any university in the world. In fact, as the students at Sun Yat-sen prove, you don’t even need a degree to be able to code well.

*Scoring:

* To calculate the score of a school on the leaderboard, we take all participants from a particular school (M) in descending order of the students’ scores and calculate using the formula below. Note: the values of α and β for this leaderboard are 0.8 and 3, respectively.

[Image: scoring formula]
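The formula itself appeared only as an image in the original post, so here is a purely hypothetical Python sketch of a weighted scheme of this general shape: participants are sorted by score, and each successive block of β participants is damped by a factor of α, so a school can’t place highly on headcount alone. The actual competition formula may differ.

# Hypothetical illustration only; the real formula was published as an image
# and is not reproduced here. Assumes each block of `beta` participants is
# damped by a factor of `alpha`.
def school_score(participant_scores, alpha=0.8, beta=3):
    """Combine individual scores (highest first) into a single school score."""
    ranked = sorted(participant_scores, reverse=True)
    total = 0.0
    for i, score in enumerate(ranked):
        block = i // beta                  # which block of `beta` participants this falls into
        total += (alpha ** block) * score
    return total

# Example: a school with five participants; the top 3 count at full weight,
# the next 2 are damped by alpha.
print(school_score([100, 90, 80, 70, 60]))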


In order for a school to be listed on the School Leaderboard, the school must have at least 10 students submitting code in the University Competition. Students are ranked by score. If two students have the same score, the tie is broken by the time at which the student finished the first correct submission of the last challenge they solved.
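For illustration, that tie-break rule amounts to a simple two-key sort. The field names below are hypothetical:

# Rank by score (descending), then break ties by the time of the first correct
# submission of the last challenge solved (earlier wins).
students = [
    {"name": "A", "score": 250, "tiebreak_seconds": 5400},
    {"name": "B", "score": 250, "tiebreak_seconds": 4800},
    {"name": "C", "score": 300, "tiebreak_seconds": 6000},
]
ranked = sorted(students, key=lambda s: (-s["score"], s["tiebreak_seconds"]))
print([s["name"] for s in ranked])   # ['C', 'B', 'A']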

 

HackerRank Partners with Zip Code Wilmington to Bring Unprecedented Federal Aid to Bootcamp Students


Today marks an unprecedented event for higher education nationwide. The US Department of Education is making up to $17 million in loans and grants available so that select, proven nontraditional education programs can offer students federal aid.

Until now, the government only allowed federal financial aid at traditional community colleges, universities and trade schools.

HackerRank, in partnership with nonprofit coding bootcamp Zip Code Wilmington and Wilmington University, is proud to be one of fewer than 10 groups selected to pilot the new Educational Quality through Innovative Partnerships (EQUIP) program. Our job will be to assess Zip Code Wilmington’s coding curriculum.

See ED.gov’s official press release here.

EQUIP will Test the Validity of Bootcamps

 This pilot initiative is one indication that the perception of bootcamps, and other non-traditional means of education, is changing.

Still, can you really trust coding bootcamps?

This question has been hotly debated in recent years. The number of bootcamp grads rose from 10,000 in 2015 to a projected 18,000 this year, yet perceptions of coding bootcamps as a means of proper training have been mixed. Skeptics are wary of for-profit, unaudited bootcamps that cost tens of thousands of dollars and claim high success rates. Still, some top employers like Google have reportedly hired bootcamp grads as of late.

“Unfortunately, traditional education is not always accessible to everyone who needs it,” said HackerRank CEO and Cofounder Vivek Ravisankar. “We’re excited to help change that through this partnership, which empowers low-income students with a new education model to improve their lives. This program helps to create opportunities for developers regardless of their backgrounds.”

The goal of this EQUIP initiative is to test the federal government’s hypothesis that—when paired with an independent auditor and an accredited university—training bootcamps can affordably prepare Americans with the skills they need for in-demand jobs. If this pilot goes well, qualifying bootcamps nationwide could be adopted into the federal student aid system.

On Assessing Coding Bootcamp Students

Most bootcamps offer a certificate of completion.

Coding challenges, on the other hand, offer a tangible result that bootcamp grads can show to prospective employers.

 HackerRank has been the technology powering Zip Code Wilmington’s coding assessments for over a year now. Students solve HackerRank coding challenges both as part of their application to the highly selective program, and to benchmark their skills during the program.

“As these innovative programs continue to develop, it will be increasingly important to understand what an outcomes-based quality assurance system looks like for such programs,” says Under Secretary of Education Ted Mitchell. “I am encouraged to see that these colleges, providers, and quality assurance entities have stepped forward to provide models for doing so.”


How did ED choose which bootcamps get federal aid?

According to the ED press release last year, the criteria for selected non-accredited training programs were:

  • Innovative approach to helping students achieve positive outcomes
  • Equity and access, particularly for students from low-income backgrounds
  • Rigorous proposed quality assurance process
  • Affordability of the programs
  • Strong proposed student and taxpayer protections
  • Partnership with a Quality Assurance Entity (QAE) and an accredited university

Zip Code Wilmington is unlike most coding bootcamps. First, it’s nonprofit. Second, its cost is very reasonable: total tuition is $12,000, of which students pay $2,000 up front and employers subsidize the remaining $10,000 upon hiring a graduate.

More on Zip Code’s selection and results:

  • In its last graduating class, Zip Code Wilmington had 250 applicants and admitted only 25 students.
  • To date, Zip Code has graduated 66 students, with 60 grads in paid, full-time positions earning over $62,000 per year.
  • In fact, 16 students were earning less than $10,000 before entering Zip Code Wilmington. On average, Zip Code students earn less than $30,000 per year before the program, and 3 months later they’re placed in jobs earning an average of over $60,000 per year.

“During our 3-month program, our students quit their jobs and work 80 to 100 hours per week,” says Melanie Augustin, Head of School at Zip Code Wilmington. “The EQUIP program will help our students focus on their studies with less financial stress, so they can increase their earning potential and make a better future for themselves and their families.”

Coders, practice coding skills here.

Managers, learn more about screening candidates accurately here.

The Immutability of Math and How Almost Everything Else Will Pass

This article was originally published on Forbes

TL;DR: Right now, there’s a cultural push to untie the historical link between advanced math and programming, a link that could deter some engineers from entering the field. But those who have a strong foundation in math will have the best jobs of the future. Let’s stop separating math from programming for short-term relief and, instead, focus on fundamental, unchanging truths with which we’ll engineer the future.


If you dig deep into today’s discourse on the role of mathematics in programming, you’ll find a sharp, double-edged sword.

On the one hand, people often say that because the number of app development tools is growing, you don’t necessarily need to be great at math to write software today. Amidst a widespread shortage of traditional programming talent, numerous opinion pieces, video interviews with educators and forum questions offer answers positioned to ease the apprehension of people exploring the field. And it’s true. Chances are, the average software engineer is not going to need calculus while coding apps in Ruby on Rails. If you look at any given job posting, you’d be hard pressed to find probability or number theory next to Java or C++ skills.

Since computer science is a nascent field that sprouted out of mathematics departments, there’s a cultural push to untie the historical link between advanced math and programming, a link that could deter some engineers from entering the field. For instance, there are literally half a dozen recent articles titled with something like: “You Don’t Have to be Good at Math to Code” (1, 2, 3, 4, 5, 6). Downplaying the importance of mathematical knowledge in software development aims to make the field less intimidating for entry-level programmers.

But is downplaying the importance of math a sustainable message for future generations of engineers?

On the other hand, software development is quickly shapeshifting. If you discount mathematics, and in turn focus on learning transitory programming tools, you’ll be left without the skills necessary to adapt to emerging computer science concepts that have already started infiltrating engineering teams today. Without expanding mathematical knowledge, these software engineers are going to risk being left out of the most exciting, creative engineering jobs of the rapidly approaching future.

Math is a Veiled Pillar

The reality is that even though most programmers today don’t need to know advanced mathematics to be good software developers, math is still a fundamental pillar of both computer science and software development. Programming is just one tool in a computer scientist’s toolkit—a means to an end. It’s hard to draw definitive lines between disciplines, but here’s an attempt at an eagle-eye view of computer science as a field to build a bigger picture:

[Diagram: an eagle-eye view of computer science as a field]

At their core, computers are built on the mathematical concept of logic. Fundamental math that you learn in middle or high school, like linear algebra, Boolean logic and graph theory, inevitably shows up in daily programming. Here are 10 examples of times when you might need mathematics in real-world programming today:

  1. Number theory. If you’re ever asked how one algorithm or data structure performs compared to another, you’ll need a solid grasp of number theory to make that analysis.
  2. Graphing. If you’re programming a user interface, basic geometry, like graphing, is an essential skill.
  3. Geometry. If you’re creating a mobile app and you need custom bounce animations modeled on springs, you’ll need geometry skills.
  4. Basic Algebra. If your boss asks, “How much can we expect user retention to grow next month if we increase the performance of our backend by 20%?”, that’s a pure variable equation.
  5. Single Variable Calculus. These days, FinTech firms like Jane Street are among the most sought-after companies for programmers because they pay well and have interesting challenges. To get these coveted jobs, you need to be able to analyze financial parameters and make crucial predictions.
  6. Statistics. If you’re working at a startup and you need to A/B test different elements on a website, you might be tapped to understand normal distributions, confidence intervals, variance and standard deviation to see how well your code change is performing (see the sketch after this list).
  7. Linear Algebra. Anytime you work on image processing or recommendation engines (like Google’s PageRank or Netflix’s recommendation list), you need linear algebra skills.
  8. Probability. When you’re debugging or testing, you’ll need a solid understanding of probability to make randomized sequences reproducible.
  9. Big-O. If your company is expanding to a brand new region and you don’t understand the implications of an O(N^2) sorting algorithm, you could be pinged at odd hours because the larger data volume exposed the algorithm’s poor scaling.
  10. Optimization. Generally, anytime you need to make something run faster or perform better, you should know how to find the minimum and maximum values of a function.
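To make the statistics example concrete, here’s a minimal sketch of a simple A/B test using a two-proportion z-test on made-up conversion counts. It’s illustrative only; a real analysis would also plan sample sizes and correct for multiple comparisons.

from math import sqrt, erf

# Did variant B convert better than A, or is the difference just noise?
def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / standard_error
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 480/10,000 conversions on A vs. 560/10,000 on B
z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")   # a small p-value suggests the lift isn't just chance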

We’re far beyond the point of needing engineers to code simple solutions. Enterprises and—especially—startups have to earn the leading edge. They rely on engineering and product teams to gain a competitive advantage by investing in emerging areas like big data manipulation, high-scale systems and predictive modeling. And those all require a solid framework of mathematics.

It’s not uncommon to hear refutations like: I’ve been a software engineer for 15 years and never used advanced mathematics on the job. But are we all really still going to be coding web and mobile apps 10 years from now?

Those Who Incrementally Exercise Mathematics Skills Will Get the Coolest Jobs

In the beginning of this piece, we considered why many educators and experts might be downplaying the importance of math in daily programming to encourage more engineers to enter the field. In order to meet the demand for engineering talent in the next 5 to 10 years, it’s clear that we need to take steps to encourage more people of diverse backgrounds to join the field. The BLS reports that computing and mathematics will make up more than half of the projected growth of annual STEM job openings between 2010 and 2020.

But this message of “you don’t have to be good at math to program” is actually fueling a self-destructive myth that’s baked into our culture today: math skills can’t be acquired; you’re either born with them or you’re not. This myth persists for at least two reasons:

One, Professor Miles Kimball and assistant professor Noah Smith have taught math for many years and say: “people’s belief that math ability can’t change becomes a self-fulfilling prophecy.” Consistently saying that you’re “not a math person” means you won’t be a math person.

Two, people perceive mathematical fields as dry and uncreative. It goes back to the oversimplified dichotomy between the “right brain” humanities and “left brain” STEM subjects. People who want to be more creative have more reasons to distance themselves from math.

A better way to attract more people to the field is by talking about the interesting, creative jobs that are taking over the future of software development.

In the next 10 years, software engineers won’t be limited to programming web and mobile apps. They’ll be writing mainstream computer vision and virtual reality apps, working with interesting cryptographic algorithms for security and building amazing self-learning products using machine learning. You can’t go very far in any of these fields without a solid mathematical foundation.

As the field of computer science expands, companies are going to be able to take advantage of more complex math to build software technology. Dr. Ann Irvine, principal data scientist at security software company RedOwl, always looks for strong intuition on how to work with large datasets. And math happens to be inherently tied to this skill.

“It’s largely enabled by the fact that lots of modern computer algorithms, especially in machine learning, take advantage of very large data sets, so that enables the use of more complex mathematical models.” – Principal Data Scientist Ann Irvine, PhD

As it stands today, you don’t need much beyond basic algebra and geometry for software development in general. But software development of the future will be made up of highly specialized subfields of CS. Here’s a chart that illustrates just how fast these futuristic technologies are shifting toward the mainstream consumer market. The first row talks about the market opportunity in the next 4 years, the second row highlights the adoption rate and the final row is an indication of the job demand today:

[Chart: market opportunity, adoption rate and current job demand for emerging technologies]

 

Focus on the Fundamentals Because Technology Will Pass Anyway

“The most valuable acquisitions in a scientific or technical education are the general-purpose mental tools which remain serviceable for a lifetime.” – George Forsythe, founder of Stanford’s computer science department

It’s far more empowering to talk about the importance of skills that serve you for a lifetime rather than the demand for short-term tools today. Math is an unshakeable force in programming. The core practice of breaking problems down into abstractions and finding solutions with formal methods will never change.

In fact, academia is susceptible to a massive, inherent failure to keep up with the ever-changing tools that industries demand. Hisham H. Muhammad, a computer science PhD, illustrates the argument perfectly in the tweet below. It’s interesting to contrast the years in which Hisham studied computer science (1994–2000) with the years at which the technologies he mentions started taking off:


[Embedded tweet from Hisham H. Muhammad]

There’s such an emphasis on particular programming languages and tools today that it’s easy to miss the bigger forest. It’s better to start practicing now, while there’s no significant pressure to apply advanced concepts to your work…yet. Even if it’s by solving one mathematical problem a day, you’ll be much better equipped to solve far more interesting problems down the line. Let’s stop separating math from programming for short-term relief and, instead, focus on fundamental, unchanging truths with which we’ll engineer the future.

Resources to Help Boost Confidence in Math:

    • Forget what you learned in school (memorizing theorems or trig identities won’t help you). Instead, learn to recognize problems and choose the right formula.
    • Read great books:


Blockchain and The Decentralization Of CS Education

This is the 2nd of a 2-part article in which HackerRank CEO & Cofounder Vivek Ravisankar evaluates why self-learning is the new normal of CS education. The first is on artificial intelligence here. This article was originally posted on Forbes.

It’s a brisk Sunday morning in the year 2030. You walk into your local grocery store and pick up some milk. With a wave of your hand, your smartwatch detects the translucent cryptography on the milk carton and performs a hash function. The milk is now instantly, irrefutably yours.

There’s a real possibility that, in the future, we’ll not only stop trading physical currency for things but also completely reimagine the concept of ownership. Even though the Internet has reshaped our lives in many ways, there has never been a way to truly “own” something digital without a central authority. Everything you own online, from money to your identity, requires an impartial third party mediator. It’s the only way we have to actually prove that it’s yours. If you think about it, technically, all of your online property is either leased or rented. Until recently.

Enter: Blockchain.

A blockchain is a massive, fraud-resistant distributed ledger that could be the new infrastructure of the future. The open ledger uses consensus algorithms to transparently record and verify any transactions without a third party. It replaces the middleman with mathematics. Because the blockchain infrastructure is decentralized, there’s a lot less friction and time wasted than traditional, centralized processes.

Blockchain tech is a symbol of technology outpacing services traditionally performed by archaic institutions, like the government. The efficiency of blockchain technology has demanded attention from folks in every industry—from engineers to bankers to lawyers. It has remained unhacked to date.

To many skeptics, decentralizing technology sounds like hippie, anti-authority nonsense. To others, it’s just an overhyped geek fantasy. But visionaries closest to innovations are often the best predictors of a better future. Much like the Internet in the ’90s, the blockchain network is currently ahead of its time. Remember back in 1995 when Newsweek published “Why the Internet Will Fail,” citing issues like “reading on a screen is a chore”? Similar skepticism for blockchains and decentralized virtual currency abounds (here, here and here).

Despite its radicalism, blockchain’s potential to eradicate much of today’s inefficiencies and insecurities in establishing ownership of assets has proven to be hard to ignore. Although it’s too soon to know exactly where it’ll take us, early adopters of the blockchain will be the biggest beneficiaries. As pioneers build a new ecosystem, demand for blockchain engineers will boom exponentially. But, chances are, you won’t learn this in school. As blockchain technology starts to permeate our society, engineers will need strong fundamentals of cryptography, distributed databases and network security, which aren’t always prioritized in computer science programs today.

Safer, Cheaper and Faster

A combination of decades-old algorithms, the blockchain is the underlying technology that enables the virtual exchange of Bitcoin currency. It uses proof-of-work (PoW) protocols, or processing time spent on puzzles, to verify ownership. Best of all, it’s immutable: once a block (or transaction) is created, it can never be altered. That’s the secret sauce for security, because it’s mathematically hard to be dishonest on the blockchain. But this technology is not restricted to Bitcoin–other blockchains can support virtually any value. Here are three examples of many:

[Table: three example applications of blockchains beyond Bitcoin]

Blockchains are complicated and nuanced, but the underlying theory is simple. Blockchains use PoW to achieve true consensus by multiple parties. To verify that something is–in fact–true, blockchains use the longest chain rule. In other words, the longest chain represents the truth. Andreas M. Antonopoulos, Bitcoin developer and evangelist, explains how PoW helps keep blockchain transactions secure:

  1. On a blockchain, miners incur a financial cost (exerting energy on hash functions) to secure the blockchain using PoW (a toy sketch of this mining loop follows the list).
  2. If miners play by the rules, they get rewarded.
  3. Miners are therefore financially incentivized to play by the rules.
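Here’s a toy sketch of that mining loop: keep hashing the block contents with a changing nonce until the digest clears a difficulty target. It’s illustrative only; real Bitcoin mining hashes a binary block header with double SHA-256 against a dynamically adjusted difficulty.

import hashlib

# Toy proof-of-work: find a nonce whose SHA-256 digest starts with `difficulty`
# zero hex digits. Costly to produce, trivial for anyone else to verify.
def mine(previous_hash, transactions, difficulty=4):
    nonce = 0
    while True:
        payload = f"{previous_hash}{transactions}{nonce}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, block_hash = mine("0" * 64, "alice pays bob 1 coin")
print(nonce, block_hash)   # producing takes many tries; verifying takes one hash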

Theoretically, it doesn’t pay to cheat. Of course, there are still some weaknesses to the theory. For instance, Ghash.io is a pool of Bitcoin miners whose hashrate has come close to achieving 51% of influence. Large mining pools are currently the biggest threat to the blockchain concept because they could potentially centralize the entire system:

 “If mining became even more centralized than it is already, Bitcoin would still function, and it might even gain mainstream adoption, but it wouldn’t really be Bitcoin. It would have the name “Bitcoin”, but it would essentially be a very inefficient form of PayPal,” says Greg Slepak, blockchain instructor at Blockchain University.

When engineers hammer out a solution to the 51% attack and keep the system decentralized, blockchain will have a really great shot at going mainstream.

Banks are also exploring blockchain technology to bolster their own services by shifting to distributed databases. Nine of the world’s largest banks are banding together to create a uniform financial ecosystem on the blockchain. This graphic created by the Financial Times perfectly explains why finance professionals are crowding into conferences to make sense of this technology. Because of the distributed nature of blockchains, assets that move on a blockchain settle far faster and more economically than they do on traditional, central ledgers.

[Graphic from the Financial Times]

“It’s hard to see a world where that blockchain technology doesn’t end up changing the way we think about asset ownership,” notes the Exchanges at Goldman Sachs podcast.

And it’s not just finance. A diverse range of industries are aiming to unravel the blockchain and benefit from its potential:

[Chart: industries aiming to benefit from blockchain technology]

Are Blockchain Fundamentals Taught in CS Programs?

If blockchain radically changes our conception of ownership in the future, how long until we teach budding engineers how to build an ecosystem around blockchain technology? It’s virtually impossible for computer science programs to keep adding new technologies to their curricula. Most would agree that truly understanding and building on the fundamentals of new technologies–like the blockchain–is better than piecing together black boxes. But even the notion of “fundamentals” seems to be expanding too wide for brick-and-mortar universities. Curriculum-designing committees constantly face this tension in the relatively new field:
Although the field of computer science continues to rapidly expand, it is not feasible to proportionately expand the size of the curriculum. As a result, CS2013 seeks to re-evaluate the essential topics in computing to make room for new topics without requiring more total instructional hours than the CS2008 guidelines – ACM & IEEE’s 2013 Joint Task Force

But blockchain technology is a convergence of several different disciplines, including distributed computing, cryptography, consensus algorithms and law…more than 20 years in the making. For the sake of simplicity, let’s focus on cryptography (the science of encryption) as a core CS fundamental that’s essential to understanding blockchain technology. In the Task Force curriculum recommendation:

Cryptography is labeled a “Core-Tier2” course, which means it’s not as essential as “Core-Tier1” courses.

Given the tight space in the curriculum, the chances of students taking this course as part of their core fundamental curriculum are slim. But Ben Horowitz, cofounder of Andreessen Horowitz, says Bitcoin and blockchain may be the most important computer science breakthrough since packet switching. This is likely for at least two reasons:

  1. The creator of Bitcoin formulated an incentive system for blockchain mining, which makes the paradigm practical. It’s a way to create trust between two competitive parties over an untrusted network, a problem that had never been solved in CS before.
  2. The computing power for hashing is enormous: in less than 10 minutes, miners can compute quadrillions of hashes. The hardware we have now wasn’t around 3 years ago.

It’s one thing to understand the fundamentals, but truly grasping the concept of blockchain requires hands-on learning. And the only way to really learn is by doing. Elite institutions like MIT and Princeton are riding the tail of blockchain innovation by offering courses and workshops on the topic. For the other 99% of the population, blockchain technology is a tight knot we must unravel ourselves. Lively forums fill with chatter daily, offering support for autodidactic learning. Technologies like blockchain will outpace traditional, centralized brick-and-mortar education. Instead, students will have to rely on distributed sources of learning online to educate themselves.

Slepak is one such software developer who saw a massive opportunity in blockchain technology when he first saw the infamous Bitcoin whitepaper in 2010. “Since then I’ve simply kept up to date using a computer and an Internet connection. That’s all you need,” he says.  He’s now an instructor at Blockchain University and a pioneer of the discipline. “Anyone who wants to be really good at something only gets there by having an internal drive to learn the subject,” he says.

Here’s a list of resources to get your hands dirty:

  1. Read the Bitcoin whitepaper from start to finish.
  2. If you’re in the Bay Area, you could join the Blockchain University.
  3. Watch this amazing talk by Andreas M. Antonopoulos and read his book.
  4. Here’s the Bitcoin Forum, where you can join the community.
  5. Learn by doing: Play with Blockchain APIs (Bitcoin, Counterparty, Ripple, and Ethereum).

In Code We Trust

The traditional notion of ownership pre-dates computer science and the Internet. Blockchain technology, in the form of decentralized or distributed databases, invites a shift in our thinking. No longer do you need the store clerk to verify that you have–in fact–purchased this carton of milk. In a sense, we can bring the familiar notion of physical property to the digital world. There’s finally a way for someone to send a piece of digital property to someone else online–safely. The combination of consensus algorithms and cryptography is a much safer, faster and cheaper way to create a universal truth about the ownership of an asset.

Blockchain experts are still testing its theories–but the potential is worth exploring. Any big disruption warrants fear, inquiry and apprehension.

But, starting with financial institutions, distributed business models could pose a big enough threat to spur competition against the status quo. We’re just on page one of this story. If transferring ownership of assets becomes a distributed business model over the next decade, the new frontier will involve trusting in code instead of trusting in an authority. New technologies are generally built on decades of research rooted in fundamentals. But it’s up to our generation of optimistic, entrepreneurial builders to self-learn and apply innovative concepts to help build our future.

Special thanks to Greg Slepak for reading a draft of this piece and sharing his insights.

Network photo by ioptio.

 


To get occasional notifications when we write blog posts, please sign up for our email list.

The New Normal of CS Education: Artificial Intelligence

This is the 1st of a 2-part article in which HackerRank CEO & Cofounder Vivek Ravisankar evaluates why self-learning is the new normal of CS education. 

If all humans have the same brain capacity—about 300 million pattern recognizers in our cortices—then what made Albert Einstein special? In his quest to replicate the human brain, renowned AI engineer Ray Kurzweil finds that a big part is: The courage to stick to your convictions. The average human is inherently conventional, reluctant to pursue ideas outside of the norm.

“[Courage] is in the neocortex, and people who fill up too much of their neocortex with concern about the approval of their peers are probably not going be the next Einstein or Steve Jobs.” – Ray Kurzweil told Wired.

If your work elicits ridicule from the rest of the world, pushing past this skepticism could be a strong indication of brilliance. Anyone who has been dedicated to the field of AI for decades knows this feeling very well.

For over 70 years, AI scientists have been periodically disillusioned by shortfalls in their field. When breakthrough theories outpace computation power, they’ve been frozen by “AI winters,” during which non-believers withheld funding and support for years. AI may be in the dark ages relative to human intelligence, but the small community of AI researchers’ persistence as outcast believers has been key to progress.

Hollywood historically perpetuates the mythical dark depictions of man-versus-machine, but AI is turning out to be nothing like what we imagined. Intelligent machines are not armies of robots. Instead, statistical learning models, inspired by biological neural networks, allow us to silently, but magically teach machines how to learn.

With the convergence of cheaper computing, faster algorithms and ample data, artificial neural networks are resurging–and this time it’s different. Today AI professionals are among the most coveted talent, moving out of university research and into the R&D labs of cutting-edge commercial companies. The application of AI, particularly in pattern recognition and image processing, is beginning to permeate daily life and will build our future. There’s a long way to go–these technologies are in their infancies. But Kurzweil and several other pioneers are certain that a future in which computers rival human intelligence is just a decade and a half away. AI will be the future electricity, powering everyday life:

[Infographic: The Future Will Run on an Artificial Brain]

Yet the concepts of AI are inherently unfit for the human paradigm of traditional institutions like education. It’s partly why the field took two steps backward with every leap forward:

“We could have moved a lot faster, if it weren’t for the ways of science as a human enterprise. Diversity should trump personal biases, but humans tend to discard things they don’t understand or believe in,” says Yoshua Bengio, a pioneer of modern AI and researcher at Google.

The brick-and-mortar educational paradigm can’t accommodate the fast pace of technology. This begs the question: How well are we preparing our students for the new frontier? After all, AI inherently defies conventional infrastructures. As machines grow rapidly smarter, students will shift from today’s static, instructional classrooms to a dynamic, autodidactic model of online education.

Standing on the Shoulders of AI Giants

Naysayers say they’ve heard this wolf cry before. During the Cold War, the US government heavily invested in automatic machine translation to decipher Russian documents. While machines could translate literally, they made too many mistakes in translating meaning from idioms. For instance, one Russian document said “The spirit is willing but the flesh is weak,” which translated into “the vodka is good but the meat is rotten.”

In the 1950s, there just wasn’t enough computational capacity to create a database of common knowledge. 

From the outset, it might seem as if AI researchers have spent far too much time and money with little to show for it. Dr. John McCarthy, who coined the term “artificial intelligence” in the 1950s, thought they’d be able to achieve thinking machines by the end of the 20th century…to no avail.

Even though it may have been slower than what researchers envisioned, the progress in AI is no less impressive. Researchers today are standing on the shoulders of AI researchers in the 1950s because of three core reasons, specified by Ilya Sutskever, research scientist at Google:

  • Exponentially more data today.
  • More computation, with neural nets running up to 30 times faster than before.
  • Knowledge of how to train these models.

It’s hard to believe the Russian translation misstep when today any grade schooler can Google a flawless translation within 0.6 seconds.

[Screenshot: a Google translation result]

 

The World Will Run on Neural Networks…Sooner Than Later

There will be a huge demand for AI engineers to build infrastructures around this new generation of computer science–and it’s happening sooner than you might realize. If Tesla CEO Elon Musk is right, computer vision in driverless cars will be so perfect that human driving will actually become illegal. Goldman Sachs analyst Burgstaller highlights a compelling observation about disruption speed:

 

“Google as a tech company is custom to product cycles in months while traditional car companies are custom to product cycles in 7 years,” he says in a recent Goldman Sachs podcast.

Another impressive example is Facebook’s leap in improving facial recognition. At the Neural Information Processing Systems conference, CEO Mark Zuckerberg announced that his AI team, led by pioneer Yann LeCun, created the best face recognition technology in just 3 months. They call it DeepFace.

And, of course, we can’t forget the milestone project that kicked off the AI frenzy in the media: the Google X lab’s brain simulation project. After 16,000 computer processors with one billion connections were exposed to 10 million random YouTube video thumbnails, the network learned the image of a cat–by itself.


The largest neural nets now have about a billion connections, roughly 1,000 times the size of the largest nets from just a few years ago. Objectively, we’ve reached impressive milestones in AI through deep learning (or artificial neural networks), but we’re still worlds away from replicating the human brain:

[Infographic: today’s neural nets compared with the human brain]

Nonetheless, this progress stems from today’s vast computational power. Engineers can run huge, deep networks with billions of connections and a dozen layers on fast GPUs, and feed them datasets of millions of examples.

“We also have a few more tricks than in the past, such as a regularization method called “drop out”, rectifying non-linearity for the units, different types of spatial pooling, etc.,” says Yann LeCun, deep learning expert and director of the Facebook AI lab.
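Two of the “tricks” LeCun mentions are easy to sketch with NumPy: the rectified linear unit and (inverted) dropout. This is illustrative only; in practice these operations come built into frameworks like the Torch library mentioned below rather than being written by hand.

import numpy as np

def relu(x):
    """Rectified linear unit: pass positive values through, zero out the rest."""
    return np.maximum(0, x)

def dropout(activations, rate=0.5, training=True):
    """Randomly silence a fraction of units during training; rescale the survivors."""
    if not training:
        return activations
    mask = np.random.rand(*activations.shape) >= rate
    return activations * mask / (1.0 - rate)

hidden = relu(np.random.randn(4, 8))   # one hidden layer's activations
print(dropout(hidden, rate=0.5))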

Best of all, this progress is collaborative. Dr. Hinton told the New York Times in 2012 that the researchers decided early on they wanted to “sort of spread it to infect everybody.”

In another Reddit AMA, his colleague at Facebook, Dr. LeCun, mentioned that he uses the same scientific computing framework, Torch7, for many projects…just like Google and its recently acquired subsidiary DeepMind. There are also public versions of these technologies. Likewise, UC Berkeley PhD graduate Yangqing Jia made Caffe, a state-of-the-art deep learning framework for image recognition, open to the public.

“At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult,” says Alan Greene, chief medical officer of Scanadu, a diagnostic startup.

Learning AI Autodidactically Will be the New Normal

The attitude of “teach me something I can get a job with” is toxic to innovation. Most importantly, universities shouldn’t succumb to educating students on legacy software systems and short-lived tools, e.g. specific programming languages [like red hot Java].

“I fear that–as far as I can tell–most undergraduate degrees in computer science these days are basically Java vocational training.” – Alan Kay, one of Apple’s original visionaries

This systematically filters out brilliant students who could come in and revolutionize legacy software. Right now, AI concepts–unlike computer science fundamentals–are not part of the core, required curriculum of universities. Even those who choose to major in computer-related fields in college will most likely have a hard time getting into an AI course. AI courses are usually optional, with curricula prioritizing fundamentals like data structures and algorithms.

Nonetheless, rows and columns of students in a classroom, instructed to memorize facts from PowerPoint presentations, are not conducive to learning this rapidly changing discipline. Kay puts it best when he says:

“They don’t question in school. What they do is learn to be quiet. They learn to guess what teachers want. They learn to memorize facts. You take a course like you get vaccinated against dreaded diseases.  If you pass it, you don’t have to take it again.”

Rian Shams, a machine intelligence researcher from Binghamton University, was drawn to AI despite never having taken a CS course in his life. Online courses and resources have been instrumental to Shams’ success: “While formal CS classes may teach fundamentals, and having an instructor available is certainly useful, what is more important is:

  • Deeply understanding the challenge you are facing and
  • Knowing where to get the necessary info that will allow you to tackle this challenge.”

Computation is simply a way of thinking that requires you to systematically approach and break down problems into smaller pieces. Everything else requires hands-on, self-directed learning. Supplementary online courses, coding challenges, open source projects and side projects are crucial for applying these fundamental, timeless concepts. After all, even students plucked from prestigious artificial intelligence PhD programs were drawn to the field of their own accord—defying human conventions.

___________________________________________________________________________

 

If you liked this article, please subscribe to HackerRank’s blog to receive Part II of this article, on blockchain technology.


The Interdependency Of Stanford And Silicon Valley [Tech Crunch]

This article also appeared on Tech Crunch


There was a time when Stanford University was considered a second-rate engineering school. It was the early 1940s, and the Department of Defense was pressed to assemble a top-secret team to understand and attack Germany’s radar system during World War II.

The head of U.S. scientific research, Vannevar Bush, wanted the country’s finest radio engineer, Stanford’s Frederick Terman, to lead 800 researchers on this secret mission. But instead of basing the team at Terman’s own Stanford lab — a mere attic with a leaky roof — Bush sent him to the acclaimed Harvard lab to run the mission.

It’s hard to imagine Stanford passed over as an innovation hub today. Stanford has outpaced some of the biggest Ivy League universities in prestige and popularity. It has obliterated the traditional mindset that eliteness is exclusive to the Ivy League, lapping schools that are centuries older. It ranks in the top 3 in multiple global and national rankings (here, here and here).

Plus, survey results point to Stanford as the No. 1 choice of most students and parents for the last few years, over Harvard, Princeton and Yale. In fact, even Harvard students have acknowledged Stanford’s notable rise in popularity.


But something a little more intriguing is happening on Stanford’s campus…something that goes beyond these academic rankings. Since the beginning of time, the goal of academia has been not to create companies, but to advance knowledge for the sake of knowledge.

Yet Stanford’s engineering school has had a strong hand in building the tech boom that surrounds it today. It has not only witnessed, but also famously housed, some of the most celebrated innovations in Silicon Valley.

While Stanford faculty and students have made notable achievements across disciplines, their role in shaping the epicenter of the Age of Innovation is perhaps their top — if not their most unique — distinguisher. As the world’s eyes fixate on the booming tech scene in Silicon Valley, Stanford’s affiliation shines brightly in the periphery.

In return, its entrepreneurial alumni offer among the most generous endowments to the university, breaking the record as the first university to raise more than $1 billion in a single year. Stanford shares a relationship with Silicon Valley unlike any other university on the planet, chartering a self-perpetuating cycle of innovation.

But what’s at the root of this interdependency, and how long can it last in the rapidly shifting space of education technology?

Fred Terman, The Root Of Stanford’s Entrepreneurial Spirit

To truly understand Stanford’s role in building Silicon Valley, let’s revisit WWII and meet Terman. As the leader of the top-secret military mission, Terman was privy to the most cutting-edge, and exclusive, electronics research in his field. While the government was eager to invest more in electronics defense technology, he saw that Stanford was falling behind.

“War research which is [now] secret will be the basis of postwar industrial expansion in electronics…Stanford has a chance to achieve a position in the West somewhat analogous to that of Harvard of the East,” Terman predicted in a letter to a colleague.

After the war, he lured some of the best students and faculty to Stanford in the barren West by securing sponsored projects that helped strengthen Stanford’s reputation in electronics. Here’s a great visualization, thanks to Steve Blank, about how Stanford first fueled its entrepreneurship engine through war funds:

[Visualization by Steve Blank: how Terman and Cold War funding fueled Stanford’s entrepreneurship engine]

This focus on pushing colleagues and students to commercialize their ideas helped jumpstart engineering at Stanford. Eventually, Stanford’s reputation grew to become a military technology resource, right up there with Harvard and MIT.

But Terman’s advocacy of technology commercialization went beyond the military. As the Cold War began, Terman pushed to build the Stanford Industrial Park, a place reserved for private, cutting-edge tech companies to lease land. It was the first of its kind, and famously housed early tech pioneers like Lockheed, Fairchild, Xerox and General Electric.

The research park was the perfect recipe for:

  • A new revenue stream for the university
  • Bringing academic and industry minds together in one space
  • Inspiring students to start their own companies

You might say that the Stanford Industrial Park was the original networking hub for some of the brightest minds of technology, merging academia and industry, with the goal of advancing tech knowledge.

They had a harmonious relationship in which industry folks took part-time courses at Stanford. In return, these tech companies offered great job opportunities for Stanford grads.

Since then, Stanford’s bridge from the university to the tech industry has been cast-iron strong, known for inspiring an entrepreneurial spirit in many students. The most famous story, of course, is that of Terman and his mentees William Hewlett and David Packard, who patented an innovative audio oscillator. Terman pushed the duo to commercialize their breakthrough.

Eventually, Hewlett-Packard (HP) was born, moved into the research park and grew into the biggest PC manufacturer in the world. To date, Hewlett and the late David Packard, together with their family foundations and company, have given more than $300 million to Stanford.

Because of their proximity to top innovations, Stanford academics had the opportunity to spot technological shifts in the industry and capitalize by inventing new research breakthroughs. For instance:

  • Silicon Graphics, Inc.: Students were enamored with the possibilities of integrated circuit technology and VLSI capability. The Geometry Engine, the core innovation behind computer-generated graphics, was developed on the Stanford campus.
  • Atheros: Atheros helped pioneer low-power wireless networking. Teresa Meng built, with government funding, a low-power GPS system with extended battery life for soldiers. This led to a successful low-power wireless network, which eventually became Wi-Fi.

These are just a few of the most groundbreaking technological innovations sprouted from Stanford soil: Google, Sun Microsystems, Yahoo!, Cisco, Intuit … and the list goes on — to more than 40,000 companies.

Stanford also has a reputation as a go-to pool for talent. For instance, the first 100 Googlers were also Stanford students. And today, 1 in 20 Googlers hail from Stanford.

Proximity to Silicon Valley Drives its Tech Entrepreneurial Spirit

If you stroll along the 700 acres of Stanford’s Research Park, you’ll see not only cutting-edge companies like Tesla and Skype, but also world-renowned tech law firms and R&D labs. It’s a sprawling network of innovation in the purest sense of the term — the best place to put down roots for a nascent idea.

Proximity to Silicon Valley is not the most important thing that distinguishes Stanford, but it’s certainly the most unique. It’s the hotbed of computer science innovators, deep-pocketed venture capital firms and angel investors.

At least today, everyone who wants to “make it” in tech is going to Silicon Valley. And — just like Terman’s early Stanford days — it’s where you can meet the right people with the right resources who can help you turn the American entrepreneurial dream into a reality.

Just look at the increasing number of H-1B visa applicants each year, most of whom work in tech. There were more than 230,000 applicants in 2015, up from 170,000 in 2014. Four of the top 11 cities that house the most H-1B visa holders are in Silicon Valley.

Plus, an increasing number of non-tech companies are setting up R&D shops in Silicon Valley. Analyst Brian Solis recently led a survey of more than 200 non-tech companies; 61 percent of those had a presence in Silicon Valley, which helped them “gain access and exposure to the latest technology.”


Still, opponents often point to media exaggerations that reduce Stanford to a startup generator. Of course, Stanford’s prestigious curriculum is a draw for top faculty and research across disciplines. But, given the evidence and anecdotes, there’s certainly a robust emphasis on technology entrepreneurship penetrating the campus of Stanford engineers. How can it not?

Michael Harris, a Stanford alumnus, can attest to a general sense of drive and passion. “It’s not quite as dominant as the media makes it seem,” he said, “but there’s some element of truth.”

Stanford students are by and large interested in creating real things that have a real effect in the world. The fact that Silicon Valley is right here and students have fairly good access through friends, professors, the school, etc. to people in the industry is definitely a big bonus. It gets people excited about doing work in the tech industry and feeling motivated and empowered to start something themselves.

This Entrepreneurial Spirit Is Evolving Into A Sense Of Urgency

Terman’s early emphasis on turning the ideas developed in academia into viable products is just as — if not more — rampant today. The most telling evidence is that Stanford’s campus is producing more tech startup founders than any other campus.

But what’s even more curious is that some students, particularly in the graduate department, don’t even finish their degrees. It’s remarkable to pay thousands of dollars toward a master’s degree in computer science, only to leave and launch a startup. Even at the undergrad level, Harris thought about leaving college after doing one amazing internship the summer after his junior year.

“I will say that working in industry teaches you more things faster about doing good work in industry than school does by a really big margin (order of magnitude maybe),” Harris said, “so I don’t actually think it’s crazy for people not to go back to school other than the fact that some companies seem to think it’s important for someone to have a piece of paper that says they graduated from college.”

Of course, most people do finish their degrees. But this sense of urgency to leave — whether or not the majority follow through — is palpable.

Last year, six Stanford students quit school to work at the same startup. Another 20 left for similar reasons the year before that. Apparently, Stanford’s coveted StartX program wasn’t enough for them.

StartX is an exclusive 3-month incubator program created to meet the demand from students who want to take their business ideas to market — complete with renowned mentorship and support from faculty and industry experts to help the brightest Stanfordites turn their ideas into reality.

In a recent talk, Stanford President John Hennessy proudly spoke about this program as a launchpad for students to scratch their itch for entrepreneurship. But when an audience member asked him about students dropping out of school, he said, “Look, for every one Instagram success, there are another 100 failed photo-sharing sites.” And, he added, “So far, all of the StartX program students have graduated — at least all of the undergrads.”

Generally, Stanford’s graduation rates have dipped somewhat in recent years. Of students who enrolled in 2009, 90 percent had graduated within 5 years, Stanford said, compared with a 5-year graduation rate of 92.2 percent 5 years earlier. And this is not a bad thing for Stanford. Since the early days of Terman, a core part of Stanford’s excellence has been its investment in its students to build great commercial products.

The Future: What Will Stanford Be Without Silicon Valley?

But both education and the Valley are shifting. The very nature of innovation frees us from brick-and-mortar walls of elite institutions and companies.

If the best application of technology is to democratize opportunity, then every single person on the planet should have affordable access to Stanford’s world-class education online. The rise of Massive Open Online Courses (MOOCs) and other online resources is an indication of the future of education.

It’s a future in which ambitious students have the opportunity to educate themselves. At the forefront of technology, educational institutions, including Stanford, are starting to decentralize the model through online course material.


Meanwhile, Silicon Valley may have pioneered the tech boom, but it’s no longer the only tech hub. New technology hubs are forming all over the world. In a piece on the H-1B visa cap, I found that the top investors in early stage startups have set up shop in India, China and Israel, three of the largest global tech hubs after Silicon Valley.

Realistically, the H-1B visa cap and city infrastructure can’t support exponential growth in Silicon Valley. The nucleus of innovation will eventually shift, making proximity to Silicon Valley irrelevant.

Plus — as some students aren’t even finishing their degrees — it’ll be worth re-evaluating if thousands of dollars for a master’s in CS at Stanford is really worth the brand name on a resume or access to coffee with top startup founders who happen to reside in Palo Alto.

But if Stanford’s proximity to Silicon Valley drives its entrepreneurial essence, which in turn bolsters both the reputation and funding of Stanford, what will happen when ambitious startup founders at Stanford start getting their education online?

Will Stanford end up disrupting the very unique factor that distinguishes Stanford from any other university on the planet? Or will Stanford’s alumni continue to fuel its self-perpetuating cycle of innovation and maintain its reputation as an innovation hub?

The Unhealthy Obsession with Tree Questions

Why do engineers love to ask fundamental linked list and tree questions in interviews when you rarely code these problems in real-world development?

It’s evolved into a rite of passage. Every engineering candidate, from fresh-faced grads to authors of crucial open source contributions, solves fundamental data structure problems on the spot for interview screenings.

It's how it's always been done. But it makes sense. This ritual has sustained itself over the past few decades because it's a fast, reliable way to spot smart candidates who can think deeply. Plus, it's better to hire for the ability to solve timeless fundamental problems than for knowledge of transient tools. Hence, each of the top 10 technology companies in the Fortune 500 has asked engineering candidates about core computer science concepts, including tree- and list-related programming questions, within the last few years:

Tree obsession

By the time front-end developer Stephanie Friend graduated from Cal Poly with a hybrid engineering and liberal arts degree, it had been a while since she had sat in a lecture hall learning about linked lists. It's a good thing she blew the dust off her old data structure books and practiced challenges online before interviewing at one Silicon Valley startup in May of this year:

“I had an interview with 6 different engineers on the same team, and 5 out of 6 interviewers asked me to solve a different linked list problem for a web development position,” Friend says.

So, why the need to ask 5 different linked list questions in 5 different interviews at 1 company? Some argue that you can't be a great programmer unless you have these fundamentals down pat. Others say that knowledge of CS fundamentals is a good predictor of other useful programming knowledge.

It's why most programming interview prep books, even as early as the 2000s, have chapters dedicated solely to basic data structure and algorithm problems (e.g. 1, 2 and 3). Plus, data structure and algorithm questions make up the bulk of upvoted questions on CareerCup, a job prep community. You might be a little puzzled why we're criticizing these questions, considering tree and linked list challenges are some of the most popular on our own HackerRank platform.
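To give a sense of what these challenges look like, here is one classic example: reversing a singly linked list in place. This is a generic illustration (the ListNode class and test data are made up for the sketch), not a problem from any particular company's interview.

```python
# Reversing a singly linked list in place -- a classic interview staple.
# The ListNode class and the sample list are illustrative assumptions.

class ListNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Iteratively re-point each node's `next` at its predecessor."""
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and print 3, 2, 1.
node = reverse(ListNode(1, ListNode(2, ListNode(3))))
while node:
    print(node.value)
    node = node.next
```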

But there's a big flaw in relying solely on academic CS fundamentals to weed out unqualified candidates without preparing them sufficiently before the interview. Data structure and algorithm fundamentals are just one part of what makes a great engineer. Depending on the need, managers should also look at other crucial components, like technical experience, hard-to-acquire domain knowledge, and design and debugging skills, to comprehensively assess a candidate.

While fundamentals are crucial, using data structure questions as the be-all and end-all filter for great programmers can be detrimental to talented engineers who don't have CS degrees or who earned their CS degrees several years ago. By placing a heavy emphasis on fundamental knowledge without properly preparing candidates, companies can create a bias toward recent CS graduates. As a solution, interviewers need to empower candidates with preparation material to reduce the number of great programmers who are rejected.

The Real World vs. Programming Interviews

A typical programmer, even at a top tech company, would rarely implement a data structure like a binary tree from scratch. So, many devs might be out of practice with this by their next interview. The most famous recent example is Max Howell, the author of Homebrew, a celebrated package manager for Macs. This year, Howell applied for an engineering position at Google and was rejected because, as he claims, he couldn't "invert a binary tree" during the initial interview.
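For reference, "inverting" a binary tree is usually understood as mirroring it, swapping every node's left and right children. Here's a minimal sketch under that assumption; the Node class is illustrative, and this is not necessarily the exact problem Google posed.

```python
# Minimal sketch of "inverting" (mirroring) a binary tree.
# The Node class and recursive approach are illustrative assumptions,
# not the exact problem statement from any specific interview.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    """Swap the left and right subtrees of every node, recursively."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

# Example:    1               1
#            / \    ---->    / \
#           2   3           3   2
root = invert(Node(1, Node(2), Node(3)))
print(root.left.value, root.right.value)  # prints: 3 2
```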


While we can’t definitively say why Howell didn’t get the job (it could have been a number of factors that interviewers don’t reveal), it’s likely that he could have performed better if he had known to brush up on those fundamentals by practicing online. After all, he’s an extremely accomplished and smart engineer. So, there’s a chance he was a classic false negative candidate.

The obsession that top tech companies have with data structure problems can also be unfair to engineers who never sat in a CS class a day in their lives. If you don’t have a CS degree, it can be difficult to gauge how much you need to know to clear the initial bar during interviews.

“While these questions can help select talented developers, they become highly problematic when somebody doesn’t have proper tools to prepare and doesn’t understand what is expected. A candidate might feel they need to read an entire algorithms book, which wastes their time and results in less time actually practicing problems.” says Gayle Laakmann McDowell, tech hiring consultant and author of Cracking the Coding Interview.

Although many great candidates get rejected because they failed to adequately prepare, it’s actually not that hard for smart developers to learn or re-learn the fundamentals. It’s part of why companies are fine with requiring them. Today there are a host of online resources to help you practice data structure questions. McDowell, who’s passionate about teaching programming, once successfully taught a student the required basics in just 2 hours. Another one of McDowell’s students was a self-taught programmer with a degree in music and learned and practiced enough of the fundamentals to land a job at Facebook.

But not everyone’s fortunate enough to have McDowell coach them individually. Self-taught students and experienced programmers are left to fend for themselves, leaving many of them annoyed and confused about the purpose of such fundamentals at dream company interviews, like Google and Apple (complaints evident here, here and here).

The best companies also test for other important factors that make a great engineer. For instance, discussing a technical project that a candidate is proud of can reveal knowledge, passion and ability to communicate well. Again, fundamentals can be very easy to learn if you know how much to prepare.

The Root of the Obsession

Since the initial boom in software engineering back in the 1980s, data structure and algorithm questions have been the common way to test candidates. The earliest engineers building growing teams carried CS degrees, and they knew that algorithm classes demanded deep thinking. So, engineers of the 80s created an interview process that resembles those classes. And it works accurately enough. When searching for talent, these questions can be answered in under an hour and help interviewers gauge a programmer's intelligence. It's certainly not the only way to filter candidates for smartness, but, again, it works well enough.

McDowell theorizes that there might be another reason why companies expect knowledge of data structures like linked lists and trees: It’s hard to find enough algorithm questions that don’t involve these.

“Companies test algorithmic problem solving skills because they believe that people who are smart will generally do good work; they’ll find good solutions, write good code, and so on. I suspect companies continue to expect knowledge of data structures like linked lists and trees (which developers rarely directly use) because it’s hard to find enough algorithm problems that don’t cover this knowledge. And, since enough people have CS degrees, and it’s easy enough for those who don’t to learn this material, it creates a pattern where it’s okay to expect that knowledge,” McDowell says.

Companies deem this system effective because successfully answering algorithm questions is a positive indicator of success on the job. As McDowell says, it means they're smart and likely to do better work.

However, companies that don't prepare candidates well enough aren't giving them a chance to perform well on these fundamental questions. Most companies recognize that some good candidates will be rejected through these questions, but they're okay with the drawback of missing out on good candidates. They figure it's better to reject a good candidate than hire a bad one. Veteran engineer Joel Spolsky, whose company built the Trello software, penned this common hiring philosophy in detail back in 2004:

“It is much, much better to reject a good candidate than to accept a bad candidate. A bad candidate will cost a lot of money and effort and waste other people’s time fixing all their bugs. Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it. In some situations it may be completely impossible to fire anyone. Bad employees demoralize the good employees,” Spolsky says.

There may be some validity to Spolsky's view of the cost of a bad hire, but rejecting too many good candidates dramatically increases the cost and time to hire, and ultimately restricts company growth. All companies should be concerned with this.

Interview Prep Is Not 'Cheating' and Is Necessary to Fill Open Positions

About 10 years ago or so, companies were somewhat more wary about giving candidates preparation material before an interview. It was sometimes considered taboo or even “cheating” because they worried candidates might memorize problems and regurgitate knowledge in the interview.

But that mindset has slowly started to shift as the shortage of talented developers has intensified. In a 2013 survey of over 1,500 senior IT and business executives, more than a third identified availability of talent, employee turnover and labor prices as a business concern.

"Job postings will be listed for months without finding a good candidate," former Zynga software engineer and Appurify founder Rahul Jain told TechCrunch.

Given these concerns, it’s actually in a company’s best interest to help candidates with interview prep by giving candidates a chance to practice solving CS fundamental problems. It’s simple: Better prepared candidates lead to fewer false negatives. Plus, most engineers can easily distinguish between someone who’s just memorized answers and someone who can truly solve a hard problem.

The best tech companies realize that it's actually beneficial to both engineers and companies to give candidates a fair opportunity to put their best foot forward. This is especially true given the obsession with fundamentals that most engineers haven't revisited since their college days. It's also a good way to get past the anxiety-ridden phase of the interview and on to meatier questions that are just as important in assessing candidates, like culture fit and collaboration skills.

Googler Steve Yegge is one engineer who realized candidate preparation is an effective solution to the talent shortage early on. Back in 2008, he “secretly” blogged engineering interview tips for Google candidates in hopes that more of his interviewees would succeed:

Time passes, and interview candidates come and go, and we always wind up saying: ‘Gosh, we sure wish that obviously smart person had prepared a little better for his or her interviews. Is there any way we can help future candidates out with some tips?’

Google doesn’t know I’m publishing these tips. It’s just between you and me, OK? Don’t tell them I prepped you. Just go kick ass on your interviews and we’ll be square….

As late as 2008, people were so against offering candidate prep that Yegge even considered publishing his tips under a pseudonym to avoid upsetting anyone. Ultimately, his desire and need for better-prepared candidates outweighed the risk of earning negative sentiment. This is also a huge reason why McDowell left Google to start her interview prep empire about seven years ago. As a software engineer at Apple, Google and Microsoft, McDowell interviewed one too many ill-prepared but smart candidates. She wanted to teach and help more engineers perform better in interviews; thus, CareerCup was born.

The best tech companies are starting to realize that the more preparation candidates have, the better the interview-to-hire rate. For instance, Facebook now hires McDowell to run a weekly 1.5-hour interview prep class exclusively for its candidates. She walks Facebook candidates through problems and offers tips along the way. Facebook found so much success in this recruiting strategy that it doubled the frequency of her class. Today, select top-tier tech companies, like Pinterest, Google, Airbnb and Twitter, send at least an email pointing candidates to resources so they can better prepare and practice fundamental CS challenges.

More companies should look at why they're rejecting great candidates and how they can reduce false negatives to grow their teams more successfully. Empowering smart candidates by setting more realistic expectations about the interview process is one way to accomplish this.

This decades-old process of testing engineers’ intelligence through fundamental CS questions may be sufficient to identify great programmers. But this process should come with a mechanism (at the minimum an email with links to resources) to help candidates practice these fundamental challenges for the interview. By helping candidates prepare, companies can more easily identify great developers and reduce the bias against older and nontraditional candidates. They can also focus on other important components that are crucial in evaluating strong engineers. This ultimately reduces hiring costs and fuels company growth — a win for the company and the candidate.

 

Do you prep your candidates before quizzing them on trees and linked list questions? 

Why Many Computer Science Programs Are Stagnating

If you think about it, computer science (CS) has had–at best–a rocky relationship with education.

Let's rewind for a minute. Born in the 1940s from the merging of algorithm theory, mathematical logic and the invention of the stored-program electronic computer, computer science wasn't truly a standalone academic discipline in universities until at least 20 years later.

Initially, most people thought the study of computers was a purely technical job for select industries, like defense or aerospace, rather than a widespread academic science. The proliferation of computers in the 1990s pushed universities to create standard computer science departments to teach students the fundamentals of computing, like algorithms and computational thinking.

Fast forward to today: the average computer science department is still handing out a routine syllabus with lectures, books and lab assignments about the theory of writing programs. Sure, there have been a few updates here and there over CS's short history, but, with the exception of elite or small CS programs, the educational structure is always lagging behind the sheer pace of advancement in the tech industry. Here's why:

There’s No Feedback Loop Between Industry & Universities

When Ashu Desai, founder of Make School, was studying computer science as an undergrad at UCLA just a few years ago, he would routinely skip classes to work on building bluetooth accessories for iPhones in his dorm room.

"Nothing I learned at UCLA helped me build my startup," Desai says. "I had reached out to various CS and EE professors for help, and while they were enthusiastic about my work, they were unable to help me with the project."

One professor even suggested that, rather than working alongside experts in a lab, he work outside the UCLA lab to avoid the risk of losing ownership of his product.

It's ironic, really. Some of the most sophisticated technological breakthroughs have happened in university research labs, but the undergraduates down the hall are stuck learning the same concepts as peers who came 10 years before them.

Plus, while most CS professors are highly intellectual and deeply knowledgeable about computer science, they often lack industry know-how. It's purely circumstantial, considering the academic career path doesn't normally involve industry experience. There's one exception to this rule: elite universities have a major advantage because they have the resources to pay industry experts more than the average university can.

We did a little experiment to test this hunch. After comparing 20 random faculty members at Carnegie Mellon University, a top computer science program, with 20 random faculty members at the lesser-known University of Houston, we found a pretty significant difference:

CS syllabus

While this is by no means an exhaustive account, it's a good anecdotal indication of the elite advantage. At the majority of computer science programs, there is a substantial gap between the university curriculum and industry demands, trends and technologies.

We'd be remiss not to recognize the efforts of those who are actively working to bridge this gap. The Joint Task Force on Computing Curricula, run by the Association for Computing Machinery (ACM) and the IEEE Computer Society, has historically been critical in shaping the CS curriculum as pioneers of the discipline.

And the group does try to involve some industry professionals in creating its curriculum recommendations. Unfortunately, it's not always a rosy picture. For instance, one of the biggest, repeated concerns industry folks mentioned in 2013 was the lack of security and of parallel and distributed systems as core parts of student preparation for the real world.

Indeed, feedback during the CS2008 review had also indicated the importance of these two areas, but the CS2008 steering committee had felt that creating new KAs was beyond their purview and deferred the development of those areas to the next full curricular report. (Pg 13)

The Joint Task Force's attempt to update the CS syllabus is noble and commendable. But the very nature of higher education puts too many limits on what it can accomplish.

All of these findings raise a larger question: without an effective feedback loop between industry and brick-and-mortar universities, how well are we preparing CS undergrads for industry with our current syllabi?

Brick & Mortar University Infrastructure isn’t Built to Support the Pace of Tech

If the university is the wise but sluggish grandparent, computer science is the restless two-year-old tot. Universities can simply never catch up to the rapid speed of software technology.

The reason is twofold. First, Intel cofounder Gordon Moore's famous Moore's Law, a 50-year-old observation turned prediction, holds that computing power roughly doubles every 18 months to two years. Moore's prediction has held so far, and technology, in general, is still evolving. Looking further down the line, the potential of quantum computing hints at an entirely new wave of commercial innovation on the horizon that will significantly impact the industry. How can universities logistically keep up?

To add new teachings, universities must subtract. The Joint Task Force's recommendations were updated every decade until 2008, when the group decided to move to a 5-year cycle. If you look at its CS2001 recommendation, it's clear that the curriculum is forced to focus on breadth.

"Over the last decade, computer science has expanded to such an extent that it is no longer possible simply to add new topics without taking others away….It is important to recognize that this core does not constitute a complete undergraduate curriculum, but must be supplemented by additional courses that may vary by institution, degree program, or individual student."

 

That last line is crucial and holds true for any curriculum today. Since educators can't just keep adding new technologies to their syllabi, universities with limited resources stick to the unchanging fundamentals as a requirement. It makes sense. Universities are inherently oriented more toward theory than practical application. And, in theory, you should be able to pick up new technologies and tools if you have the fundamentals down. While these are good points, the problem arises when fundamental theories don't translate seamlessly to the industry. Yes, it's undoubtedly true that fundamentals are important, but the only way to truly grasp them is to apply them to real-world projects and scenarios.

"There were so many times I was scratching my head in college, to the point where I gave up after a while because I couldn't visualize where to start," says Anubhav Saggi, a software engineer who majored in computer science at UCLA. "If you can't internalize the fundamentals [through real-world projects] in a way that makes sense to you, then you won't be able to really understand or appreciate why current tech works the way it does."

In other words, simply listening to a lecture on algorithms is okay. But actually carrying out real-world programming tasks using the most current technology is better. Often, it's up to students to be proactive and do that themselves.

It’s Hard to Support the Demand of CS Majors As Well

How can universities focus on updating their curriculum if they don’t have enough professors to support the surge of CS majors?

Screen Shot 2015-06-02 at 8.47.06 AM

Source: Tech Crunch

A report from the University of Washington found an astounding 300% increase in freshman CS majors there over the last 4 years. Again, some elite programs are fortunate enough to have supporters with deep pockets, like Harvard University, which recently announced an expansion thanks to Steve Ballmer's generous donation of $60 million. No big deal.

The vast majority suffer from this problem: they need more funding for tools and resources to make classes more hands-on and applicable to today's technologies so they can support the spike in enrollment. Otherwise, students who learn by doing rather than listening, like Saggi and Desai, suffer the consequences of a reduced quality of education, with larger classrooms, more lectures and fewer resources for hands-on CS learning. Or, worse:

"We're turning away many students we'd love to have," Ed Lazowska, the Bill & Melinda Gates Chair in Computer Science & Engineering at the UW, told GeekWire. "That's the tragedy."

But Learning Can Be Fun, Hands-On and Flexible

So, where does that leave students today? The most successful software engineers usually spend some time in the real world to get the hang of things. There's a lot of StackOverflow-ing and general Google-ing involved. Without a structured way to visualize and apply classroom teachings to current, evolving technologies, it's largely self-teaching at the moment.

This is also one strong explanation for the recent rise of Massive Open Online Courses (MOOCs). Since traditional brick-and-mortar universities can't support the spike, nonprofit platforms have stepped in; edX, founded by Harvard and MIT, has reached 1.25 million students. Both professors and students can look to online resources to test their classroom knowledge. For instance, Tom Murphy, a computer science teacher at Contra Costa College, says:

"I consider problem solving to be one of the most important skills to foster in computer science students, usually accomplished via challenging coding problems."

Still, nothing can replace hands-on experience of applying knowledge to real-world problems. Students who feel ill-prepared must be proactive in getting hands-on experience, whether it’s by getting an internship, contributing to open source or practicing real-world challenges online, like security programming referenced above.

The evolving nature of computer science can’t be confined to brick-and-mortar university lecture halls. But adopting technological tools to make hands-on training easier and supplement evergreen fundamentals taught at universities is crucial to better prepare CS grads for the tech industry.

 

Lead image: washington.edu

 

War, Passion & the Origin of Computer Societies

Every computer scientist knows the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). With over 160,000 members worldwide between them, the ACM and the IEEE Computer Society are the largest catalysts for bringing together the most enthusiastic, determined and intelligent minds devoted to advancing computing technology.

But how did such computer science organizations emerge? Tracing the origins of today’s largest computing organizations reveals a fascinating story of passion for a new trade at the culmination of WWII.

How it All Began

It was 1946, and the pressure to advance technology in the face of warfare was high. A team of scientists at the Moore School of Electrical Engineering in Pennsylvania introduced the world to the very first powerful, multipurpose digital computer: the ENIAC (Electronic Numerical Integrator And Computer). It was originally designed for the army to calculate artillery firing tables with 1,000 times more power and speed than existing machines.

From as early as the 1930s, and even after the war ended in 1945, the nation's defense establishment depended on mathematicians, engineers and scientists to keep improving technology not only for weaponry but also for logistics, communications and intelligence, in labs across the country.

As a result of the war, the demand for more mathematicians, statisticians and engineers to iterate on such computing devices spiked dramatically. Look at the spike in demand for mathematicians and statisticians between 1938 and 1954:

Mathematicians Demand Growth (1)

Because of the covert wartime operations, many of the inventions and advancements remained behind closed lab doors. It wasn't until February of 1946 that the ENIAC was introduced to the world in the press, often referred to as the "Giant Brain." Intrigued by automatic computing, researchers saw the potential value of computers for other areas as well. For scientists, this was a massively powerful machine with immeasurable computing potential. Just think: unlike any other existing machine, it could solve 5,000 addition problems in 1 second.

There was so much more to explore, understand and scientifically test. It signified the birth of a brand new field. It quickly became an exciting topic of imagination and discussion for industrialists across the nation.

The Origin of IEEE’s Early Computer Societies

It was in the ENIAC's birth year, and in the same city, that the first computing committee of the IEEE's predecessors began: the Computing Device Committee (CDC). At this time, the IEEE was still split into two rival societies: the American Institute of Electrical Engineers (AIEE) and the Institute of Radio Engineers (IRE). Both formed their own committees dedicated to understanding the new field of computing.

For instance, one mission of the IRE's new technical committee on electronic computers was to standardize the glossary of the emerging field of computer science. It sounds mundane to us now, but someone had to come up with uniform names for brand-new concepts. One hot debate was what to call the tiny unit of time used to measure the ever-increasing speed of switching circuits. It was almost named the "Babbage," most likely after Charles Babbage, a father of the computer. Ultimately, they voted for the term "nanosecond."

The founding members of these committees were some of the most forward-looking minds behind early computing inventions.

Interest in computing grew swiftly, and in 1951 the IRE decided to establish a paid-membership group (like the ACM): the Professional Group on Electronic Computers (PGEC). It grew from about 1,100 paid members in 1954 to over 8,800 at the end of the decade. Eventually, the different computing committees joined forces to create one giant Computer Group and, later, the Computer Society.

Screen Shot 2015-05-27 at 10.02.10 AM

The Origin of ACM

As the ENIAC sparked an uptick in gatherings to discuss digital computing, one pivotal convention was the Symposium on Large-Scale Digital Calculating Machinery in January 1947. Over 300 technical experts from universities, industry and government met at Harvard University to watch technical paper presentations and a demonstration of the Mark I Calculator.

It was at this symposium where computer pioneer Samuel H. Caldwell first expressed a need for a dedicated association solely for people interested in computing machinery. Sure, there were computing committees as arms of larger related organizations (e.g. AIEE’s CDC), but there needed to be a better way for interested computing experts to exchange ideas, publish official journals and tackle challenges across these organizations.

By summertime, there was modest support for the idea, and a "Notice on Organization of the Eastern Association for Computing Machinery" was sent to anyone who might be interested in computers. Just like the founding members of the first computing committee at the AIEE, ACM's founding council was made up of accomplished computing pioneers:

  • R.V.D. Campbell worked on the Harvard Mark I-IV.
  • John Mauchly co-designed the first general purpose computer and first commercial computer.
  • T. K. Sharpless contributed to the design of the high-speed multiplier.

On September 15, 1947, about 48 people met at Columbia University, formally voted to start the association and elected a board. At the first meeting, T. K. Sharpless talked about the pilot model of the EDVAC, a stored-program computer. At the following meeting that same year, they covered 13 technical papers in one sitting! And, this time, over 300 people joined in.

Because interest was catching on in the community, by 1948 they decided to drop the "Eastern" from the name and expand the association. Both the membership and the value of the association grew pretty rapidly early on. Membership just about doubled between 1949 and 1951. Even though dues increased from $1 annually in 1947 to $10 annually in 1961 to support the expansion, more people kept joining. In fact, some notable founding members, like Sharpless and Concordia, belonged to both the ACM and the IEEE's Computer Society.

Growth in Numbers of ACM (1)

line

A Passionate Pursuit by Forward Thinkers: Edmund Berkeley & Charles Concordia  

You’d think the biggest champions of the ACM & IEEE Computer Society would be the leaders who invented the first electronic automatic computers, like the ENIAC or Atanasoff–Berry Computer, right? Well, you’d be wrong.

Although many of the early fathers of automatic computing machinery played integral roles as presidents and council members of the ACM and the IEEE Computer Society, the early champions and heavy lifters of both computing societies weren't the industry or government inventors of the modern automatic computing machine. They were admirers and researchers who passionately believed in the significance of these computing advancements.

Dr. Charles Concordia Led the AIEE’s CDC

charles_concordia

At the time the AIEE formed its first computing committee, Dr. Charles Concordia was a prominent electrical engineer. He was an early computer user rather than an inventor. His work in electrical engineering at the General Electric laboratory frequently required the use of the differential analyzer (an analog computer), which was housed at the Moore School of Electrical Engineering.

There, he was exposed to many of the new electronic computing devices, including the ENIAC, and saw something with incredible potential. As an active member of the AIEE, he knew there needed to be a more concerted effort to understand and advance the future of computing. And so, without any background in building early computers, he boldly presided as chairman of the CDC and pulled in other computer pioneers, like John Grist Brainerd, who famously worked on the ENIAC project, to form the first computing committee in 1946.

It’s interesting that someone who specialized in detecting cracks in railroads, designing generators and advising on a pump hydro storage project would lead a committee entirely dedicated to exploring automatic computing, like the ENIAC. Computer science was too new for him to definitively know what impact computing would have on his field of electrical engineering.

Edmund Berkeley: The Man Behind ACM

edmund

Edmund Berkeley is cited by multiple people (here and here) as the sole originator of the ACM. While Berkeley was an expert in early computers, first by working on the Mark II during WWII and then by working on the computerization of the Prudential Insurance Company, he wasn't himself an inventor of the early modern computers. Rather, he was a passionate writer, editor and publisher on computing as it relates to society and education. Later on, he created an educational toy, Simon, that taught people more about coding.

He diligently worked to connect interested parties across groups in different regions, and for six years, without pay, he laboriously did all of the secretarial work no one else wanted to do. As founding secretary, Berkeley manually mimeographed documents for members.

What propelled him to work so hard, essentially single-handedly, to create the ACM? Berkeley was highly vocal about computing as a means to understand the fundamental problems of the world. He wanted to advance technology so that it could touch everyone's lives positively. That required a free flow of information, something the war had prevented until then and something this association helped facilitate.

“I read somewhere that the Soviets thought people ought to be taught about computers based on what 20-30 experts have to say. That’s stupid. What ought to be taught about computers is a result of looking at the world and seeing what needs to be taught about computers….I think what the ACM should concentrate on is making a list of the nine most important problems in the world. And then if they have the time left over, publish junk that only 50 people can understand.”

 

The Lasting Legacy of Passion in Computer Science for the Greater Good

During a time when computer science wasn’t even an accepted discipline, the creation of ACM and IEEE’s early computer groups offered a haven of bountiful access to exciting resources, ideas and inspiration from people at the forefront of this brand new science.

Created by passionate believers and eventually led by pioneers of the early computing history, these organizations were responsible for turning the mysterious, complex and wartime computer mainframes into an educational discipline.

Early on, the primary activity of the ACM and the IEEE's Computer Society was to arrange national meetings and publish journals that connected the world with leading experts, helping cement computer science as an educational discipline. Until these associations were formally created, there was no easy way to reach academics or researchers across the nation who were working on similar problems, or even to learn more about computer science.

The tradition lives on today as software engineers, academics and students from all over the world still convene at the ACM and the IEEE to challenge themselves to solve the world's toughest problems. Both have committees that help shape today's computer science education and research, and the innovations in computing technology of the future.

 

 

Why Don’t More CS PhD Breakthroughs Turn into Companies?

Silicon Valley is acclaimed as an innovation hub in the public eye, but dramatic technological breakthroughs occur in the halls of university research labs all the time. PhDs spend years building deep knowledge in narrow domains, which often produces discoveries that propel tech advancements even further.

Academics fathered some of the most important innovations that led to the commercialization of computing. Before mathematician Alan Turing published his theory of the "Universal Turing Machine" at Cambridge University in 1936, there was no concept of storing a program in a computer.

PhD

The Universal Turing Machine is the blueprint on which the modern computer is based today. At a time when different machines were dedicated to accomplishing different tasks, the idea of a universal computer that carried out any task on one machine was remarkable.

But it was only hypothetical, and it remained so until the first prototype, the "Pilot ACE" (pictured above), was built in 1950 by researchers who took over Turing's ideas. Even then, it wasn't commercially sold until the company English Electric created a version of the Pilot ACE and sold it as the DEUCE in 1955.

Sure, there are some exceptions of research ideas that prospered into successful business ventures. The most famous example is Google, founded by computer science PhD students Larry Page and Sergey Brin. Before Google, search engines ranked web pages largely by how many times they contained the searched-for key terms. Page applied the concept of building authority through citations (or backlinks) and theorized a smarter system of organization dubbed "BackRub."

Google Original Homepage
Source: Pingdom

He and Brin created an algorithm that rewards links that come from credible sources. Backed by support from their colleagues at Stanford, they launched the first version of Google on the university website: http://www.google.stanford.edu. At one point, the duo even crashed the university’s Internet connection!
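To make that intuition concrete, here is a toy sketch of link-based ranking in the spirit of PageRank, where a page's authority flows through the links it gives out. The graph, damping factor and iteration count are made-up illustrations; this is not Google's actual algorithm.

```python
# Toy illustration of link-based ranking in the spirit of PageRank.
# NOT Google's actual algorithm; the sample graph and parameters are invented.

def rank_pages(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal authority

    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:            # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                        # pass authority along each outgoing link
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny hypothetical web of three pages linking to one another.
web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
print(sorted(rank_pages(web).items(), key=lambda kv: -kv[1]))
```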

“We’re lucky there were a lot of forward-looking people at Stanford,” Page tells WIRED. “They didn’t hassle us too much about the resources we were using.”

They received modest offers for the technology from existing companies in the space, like Yahoo!, Infoseek, Lycos and AltaVista. Encouraged by their colleagues, they turned them down and took the plunge as entrepreneurs themselves. By and large, however, most breakthroughs are very slow to move from the lab into industry. Today, thousands of research labs around the world are producing new techniques and even products that could be valuable to the industry and the economy. But many PhD papers remain, well, papers.

Consider a recent example at MIT, one of the world's most prestigious computer science doctorate programs. In 2010, a research team designed an intelligent wheelchair with voice-commandable robotics. It uses machine learning to automatically learn the layout of any given room and carries out commands by voice. It could help thousands of disabled people who have suffered brain injuries but can still talk. Five years later, there's still no sign of any such product officially hitting the market.

Here's another smart wheelchair, conceptualized at Worcester Polytechnic Institute and reported on in 2014. This one lets you control the wheelchair's motor by raising your eyebrows. It lives in the university's robotics workshop for now, although inventor Taskin Padir says he's aiming to push out a commercial version of the navigation system in the next few years. Why aren't we seeing such technological breakthroughs hit the market more frequently?

Few Universities Offer Entrepreneurial Support  

Elite universities, like MIT, Stanford and Harvard, may have a strong network and support system to help their researchers progress select breakthroughs into the market. But most universities leave it up to PhDs to seek out investors, business-savvy leaders and all other moving pieces that are necessary to launch a successful business.

Stony Brook University Distinguished Teaching Professor Steven Skiena is one academic who successfully turned his expertise in Big Data sentiment analysis into a company. He says it's all about who you know.

"The key event in founding General Sentiment was when I met an experienced entrepreneur (Mark Fasciano) with the experience and inclination to make the venture happen," he says.

Building a team and getting the right investors on board requires time that most PhDs just don't have. "If universities provided support and encouraged [researchers] not just to patent inventions, but to actually take them to market, we would see more startups," says Yevgen Borodin, research assistant professor at Stony Brook University.

Untitled Infographic

It takes the right leader to launch a breakthrough and turn it into something economically productive. More investors and companies should work to bring research ideas to life. On Quora, MIT PhD Rishabh Jain offers 3 examples of venture capital firms that fund university breakthroughs.

Some Papers Aren't Practical or Don't Satisfy a Market Need

There’s a stark difference in objectives between businesses and research facilities: “Most companies are not really based on ideas, but instead, recognizing and satisfying under-served market needs,” Dr. Skiena says. “They are different beasts: both important, but different.”

Timing is a critical factor here. Publishing new ideas is generally a slower process than the pace the startup world moves at. Without investors keeping an eagle eye on revenue and profits, scientists have the luxury of working on ideas for the sake of knowledge. This freedom often results in breakthroughs that matter to technology in the long term rather than ones that fill an immediate market need.

Achievements in the newest domains, like artificial intelligence, are a culmination of several different ideas over several years. Take computer vision researchers at MIT, for instance, who recently figured out an algorithm that removes the reflection that appears when you take a photo through a window. While that's really cool, the discovery is only effective on double-paned windows. With such limitations in place, the breakthrough alone may not permeate the camera lens market anytime soon, but it could be pivotal for robotics creators in improving robot vision in the future.

On the other hand, Dr. Borodin creatively turned his research into something more immediately practical for the wider public. His research aimed to help blind people interact with computers. Determined not to let his work be confined to ideas on paper, he founded Charmtech Labs LLC with a few colleagues and marketed the product himself.

“What helped us succeed in the market was the decision not to develop technology for blind people alone, but, instead, make it universally accessible by everyone,” he says. Now, his product Capti Narrator is marketed as a productivity tool for anyone to turn documents or web pages into audio.

More PhDs Should Consider Pushing Breakthroughs to the Market

Even after PhDs spend years devoted to unique tech discoveries, many have a hard time getting jobs directly related to their research. The pinnacle of an academic career is tenure, but statistically there are 10 PhD graduates in computer science for every 1 tenure position.

The most recent survey by the Computing Research Association finds that the number of doctoral degrees produced in 2014 declined by 2.6%, from 1,991 to 1,940. Here's a graph describing trends in PhD production over the past few years. As you can see, the number of PhDs has generally stalled, compared to its initial spike between 2003 and 2007, when the number just about doubled:

Screen Shot 2015-05-20 at 8.11.34 AM

The Taulbee Survey finds that 57.5% of PhD graduates went on to work in industry in 2014, up from 55.5% in 2013.

Granted, in academia, one key objective of discovery research is purely for the advancement of knowledge. So, some researchers have no intention of creating breakthroughs for commercialization.

But academics who are considering leaving academia should start thinking broadly about how to apply their breakthroughs in industry, whether by launching their own startups or adapting their work for a larger market. Likewise, universities should provide a better springboard for connecting researchers with the right resources to move their ideas forward. There's a lot of room for improvement when it comes to reducing the lag between PhD breakthroughs and commercial products, which could accelerate tech advancements for all.

What other reasons have you found in explaining why more PhD breakthroughs don’t turn into companies?