
The New Normal of CS Education: Artificial Intelligence

Written By Vivek Ravisankar | October 20, 2015
This is the first of a two-part article in which HackerRank CEO & Cofounder Vivek Ravisankar evaluates why self-learning is the new normal of CS education.

If all humans have roughly the same brain capacity, about 300 million pattern recognizers in our cortices, then what made Albert Einstein special? In his quest to replicate the human brain, renowned AI engineer Ray Kurzweil finds that a big part of the answer is the courage to stick to your convictions. The average human is inherently conventional, reluctant to pursue ideas outside the norm.

“[Courage] is in the neocortex, and people who fill up too much of their neocortex with concern about the approval of their peers are probably not going to be the next Einstein or Steve Jobs,” Ray Kurzweil told Wired.

If your work elicits ridicule from the rest of the world, pushing past this skepticism could be a strong indication of brilliance. Anyone who has been dedicated to the field of AI for decades knows this feeling very well.
For decades, AI scientists have been periodically disillusioned by shortfalls in their field. When breakthrough theories outpaced computing power, progress froze into “AI winters,” during which non-believers withheld funding and support for years. AI may still be in the dark ages relative to human intelligence, but the persistence of this small community of researchers, long treated as outcast believers, has been key to progress.
Hollywood has long perpetuated dark, mythical depictions of man versus machine, but AI is turning out to be nothing like what we imagined. Intelligent machines are not armies of robots. Instead, statistical learning models, inspired by biological neural networks, let us quietly but almost magically teach machines how to learn.
With the convergence of cheaper computing, faster algorithms and ample data, artificial neural networks are resurging, and this time it’s different. Today AI professionals are among the most coveted talent, moving out of university research and into the R&D labs of cutting-edge commercial companies. The application of AI, particularly in pattern recognition and image processing, is beginning to permeate daily life and will build our future. There’s a long way to go; these technologies are in their infancy. But Kurzweil and several other pioneers are certain that a future in which computers rival human intelligence is just a decade and a half away. AI will be the electricity of the future, powering everyday life:
[Infographic: The Future Will Run on an Artificial Brain]
Yet the concepts of AI are an inherently poor fit for traditional, slow-moving human institutions like education. That is partly why the field has taken two steps backward with every leap forward:

“We could have moved a lot faster, if it weren’t for the ways of science as a human enterprise. Diversity should trump personal biases, but humans tend to discard things they don’t understand or believe in,” says Yoshua Bengio, a pioneer of modern AI and professor at the Université de Montréal.

The brick-and-mortar educational paradigm can’t accommodate the fast pace of technology. This raises the question: how well are we preparing our students for the new frontier? After all, AI inherently defies conventional infrastructures. As machines grow rapidly smarter, students will shift from today’s static, instructional classrooms to a dynamic, autodidactic model of online education.

Standing on the Shoulders of AI Giants

Naysayers say they’ve heard this cry of “wolf” before. During the Cold War, the US government invested heavily in automatic machine translation to decipher Russian documents. While machines could translate literally, they made too many mistakes translating meaning and idiom. In one famous example, the phrase “The spirit is willing but the flesh is weak” came out as “the vodka is good but the meat is rotten.”
In the 1950s, there just wasn’t enough computational capacity to create a database of common knowledge. 
From the outset, it might seem as if AI researchers have spent far too much time and money with little to show for it. Dr. John McCarthy, who coined the term “artificial intelligence” in the 1950s, thought thinking machines would be achieved by the end of the 20th century, to no avail.
Even though progress has been slower than researchers envisioned, it is no less impressive. Researchers today are standing on the shoulders of the AI researchers of the 1950s for three core reasons, as specified by Ilya Sutskever, research scientist at Google:

  • Exponentially more data is available today.
  • Far more computational power: neural nets now run up to 30 times faster than before.
  • Better knowledge of how to train these models.

It’s hard to believe the Russian translation misstep when today any grade schooler can Google a flawless translation in about 0.6 seconds.

The World Will Run on Neural Networks…Sooner Than Later

There will be huge demand for AI engineers to build infrastructure around this new generation of computer science, and it’s happening sooner than you might realize. If Tesla CEO Elon Musk is right, computer vision in driverless cars will become so reliable that human driving will actually become illegal. Goldman Sachs analyst Burgstaller highlights a compelling observation about the speed of disruption:

“Google as a tech company is accustomed to product cycles measured in months, while traditional car companies are accustomed to product cycles of seven years,” he says in a recent Goldman Sachs podcast.

Another impressive example is Facebook’s leap in facial recognition. At the Neural Information Processing Systems conference, CEO Mark Zuckerberg announced that his AI team, led by pioneer Yann LeCun, had created the best face recognition technology in just three months. They call it DeepFace.
And, of course, we can’t forget the milestone project that kicked off the AI frenzy in the media: the Google X lab’s brain simulation project. After a network running on 16,000 computer processors, with one billion connections, was exposed to 10 million random YouTube video thumbnails, it learned to recognize cats by itself.
The largest neural nets have about a billion connections, roughly 1,000 times the size they were a few years ago. Objectively, we’ve reached impressive milestones in AI through deep learning (artificial neural networks), but we’re still worlds away from replicating the human brain:
Nonetheless, this progress stems from today’s vast computational power. Engineers can run huge, deep networks with billions of connections and a dozen layers on fast GPUs, feeding them datasets of millions of examples.

“We also have a few more tricks than in the past, such as a regularization method called ‘dropout,’ rectifying non-linearities for the units, different types of spatial pooling, etc.,” says Yann LeCun, deep learning expert and director of the Facebook AI lab.
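
To make those tricks concrete, here is a minimal, self-contained NumPy sketch, not taken from the article, of the three techniques LeCun names: a rectifying (ReLU) non-linearity, inverted dropout regularization, and simple max pooling. The layer sizes, dropout rate, and pooling window are illustrative assumptions.

```python
# Minimal sketch (illustrative only) of ReLU, inverted dropout, and max pooling.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectifying non-linearity: pass positive activations through, zero out negatives.
    return np.maximum(0, x)

def dropout(x, rate=0.5, training=True):
    # Inverted dropout: randomly zero a fraction of activations during training
    # and rescale the rest, so no adjustment is needed at test time.
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def max_pool_1d(x, window=2):
    # Spatial pooling: keep only the strongest response in each window.
    trimmed = x[: len(x) // window * window]
    return trimmed.reshape(-1, window).max(axis=1)

# Toy forward pass through one fully connected layer (sizes are arbitrary).
inputs = rng.standard_normal(8)
weights = rng.standard_normal((8, 8))
hidden = dropout(relu(inputs @ weights), rate=0.5, training=True)
pooled = max_pool_1d(hidden, window=2)
print(pooled)
```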

Best of all, this progress is collaborative. Dr. Hinton told the New York Times in 2012 that the researchers decided early on to, in his words, “sort of spread it to infect everybody.”
In a Reddit AMA, Dr. LeCun, Hinton’s longtime collaborator now at Facebook, mentioned that he uses the same scientific computing framework, Torch7, for many projects, just like Google and its recently acquired subsidiary DeepMind. There are also public versions of these technologies. Likewise, UC Berkeley PhD graduate Yangqing Jia made Caffe, a state-of-the-art deep learning framework for image recognition, open to the public.

“At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult,” says Alan Greene, chief medical officer of Scanadu, a diagnostic startup.

Learning AI Autodidactically Will Be the New Normal

The attitude of “teach me something I can get a job with” is toxic to innovation. Most importantly, universities shouldn’t succumb to educating students on legacy software systems and short-lived tools, such as specific programming languages (like red-hot Java).
“I fear that – as far as I can tell – most undergraduate degrees in computer science these days are basically Java vocational training,” says Alan Kay, one of Apple’s original visionaries.
This systematically filters out brilliant students who could come in and revolutionize legacy software. Right now, the study of AI concepts, unlike computer science fundamentals, is not part of the core, required curriculum at universities. Even those who choose to major in computer-related fields in college will most likely have a hard time getting into an AI course. Such courses are usually electives, since curricula prioritize fundamentals like data structures and algorithms.
Nonetheless, seating rows and columns of students in a classroom and instructing them to memorize facts from PowerPoint presentations is not conducive to learning this rapidly changing discipline. Kay puts it best when he says:
“They don’t question in school. What they do is learn to be quiet. They learn to guess what teachers want. They learn to memorize facts. You take a course like you get vaccinated against dreaded diseases.  If you pass it, you don’t have to take it again.”
Rian Shams, a machine intelligence researcher at Binghamton University, had never taken a CS course in his life when he was drawn to AI. But online courses and resources have been instrumental to Shams’ success: “While formal CS classes may teach fundamentals, and having an instructor available is certainly useful, what is more important is:

  • Deeply understanding the challenge you are facing and
  • Knowing where to get the necessary info that will allow you to tackle this challenge.”

Computation is simply a way of thinking that requires you to systematically approach problems and break them down into smaller pieces. Everything else requires hands-on, self-directed learning. Supplementary online courses, coding challenges, open source projects and side projects are crucial for applying these fundamental, timeless concepts. After all, even students plucked from prestigious artificial intelligence PhD programs were drawn to the field of their own accord, defying convention.
___________________________________________________________________________
 
If you liked this article, please subscribe to HackerRank’s blog to receive Part II of this article.
