What is your data project missing? Here are three must-have elements

The other day I was having a discussion with Inder Sidhu, who was speaking in our Engineering Leadership Professional Program. I wanted to understand his perspective on data and strategy. As you may know, he is a great person to talk to about these things. He joined Cisco and helped it grow from $1 billion to $50 billion; he has literally been written into the Cisco case study, and he is best known for developing the company’s brilliant channel strategy. He is also the author of a leading book on technology strategy titled Doing Both: Capturing Today’s Profit and Driving Tomorrow’s Growth.

Our conversation was in the context of the work we have been doing in Data-X, a program which makes it possible to rapidly learn and implement amazing data and digital projects.  I was mainly interested in his thoughts on the issues that leaders face as they implement data projects.

His initial reaction was really quite insightful. From his perspective, there are three parts to how any company or organization creates value using data, AI, or blockchain (i.e. Data-X).  

  1. The first part is really about knowing “why” and understanding the relevance to the organization.  Typically, this part is also related to developing the entrepreneurial data-story and recruiting stakeholders to a project.  It may additionally include researching the use cases from other people or organizations. This is needed to evaluate what is possible and what the solution might look like within the current organization.
  2. The second part is the “what”, and it is related to understanding the capability of the actual technologies and algorithms that might be used.
  3. The last part is about the “how” and includes the ability to implement the project.  This covers everything from the actual tools to the behaviors and processes that should be used. This part also includes the holistic thinking needed to combine all these elements in a way that leads to an on-time, working implementation.  Note that this is partially a learning process, not only an execution process.


An insightful point here is that when people study, teach, or discuss data, AI, and blockchain applications today, most of the focus is on the “what” in part 2.  It feels like everyone concentrates only on the algorithms and technical capabilities. As a result, the first and last parts are grossly underserved.

This is an important point.  For years, we have been working on the “why” as well as the “how”.  In our terminology, the “why” for organizations has been developed within Berkeley’s Method of Innovation Leadership.  This perspective is the reason that leaders from Google, Apple, Samsung, and so many more global firms attend our innovation leadership programs. On the other hand, the “how” is what we have been developing in our Data-X program.  

Through this conversation, Inder helped reinforce for me how these frameworks are synergistic and really work together.  My experience is that while executives benefit from innovation leadership (i.e. the “why”), they also need to understand some of the “what” and some of the “how” of data frameworks; otherwise they really can’t lead digital transformation projects.

On the other hand, I’ve also seen that while engineering or technical leaders learn powerful tools and deep technical understanding for developing data projects (i.e. the “how” and the “what”), they still need to understand some of the “why” in the form of innovation leadership and business relevance; otherwise they simply cannot find common ground with business executives, and eventually their projects will implode.

This cross-pollination between implementation and leadership has proven to be critical for innovation.  We have noticed that when you learn one, it is important to learn at least a little bit of the other.  And with this focused insight, you can see our perspective on “making the magic happen” in your next innovation project.

The post What is your data project missing? Here are three must-have elements appeared first on UC Berkeley Sutardja Center.

Learning Rapid Implementation of AI and Data

What’s Golf Got to Do With It?

Our ability to deliver on the promises of AI, data, and other digital transformations is key to our national agenda and global security.  Just imagine what would happen if another, less friendly nation educated more people to readily develop these technologies. It would be much like losing the industrial revolution and becoming irrelevant, from both an economic and a national-security point of view.

So how do we currently teach keen people to climb the steep learning curve needed to develop these technologies?  Today, we basically teach the theoretical foundations for each subject and then hope for the best.

If we taught people to play golf the same way that we teach people to master complex topics at universities, it would be a miracle if we ever developed good golfers.  In our hypothetical golf curriculum, students in year one would only be allowed to place the ball on the ground, and they would practice this for about a year.  What about hitting the ball? No, that happens much later. In the second year, we would focus completely on the backswing. How about hitting the ball in year three? No, the third year would be devoted to understanding the materials inside the ball, and maybe to calculating the optimal number of dimples for best flight.  Finally, after another year of golf theory, everyone graduates. Everyone is supposedly an expert at golf. The only problem is that no one in the class has ever played it, or even hit the ball. But of course, that can be learned on the job!

In Our Model, You Start By Hitting the Ball First

This is the gap we are trying to fix for AI, ML, and blockchain in our Rapid AI/Data-X courses.  A student who learns these subjects the traditional way may absorb many theories without ever understanding how to actually implement an AI or data science solution.

So here is an important idea for experiential learning: would it not be better to start by letting students just hit the ball, and then find out what they need to improve?  Some coaching along the way would also have a very positive effect.

This is the method we decided to try for AI, ML, and other data implementations.  We would teach students how to use the fundamental tools at a simple level, and then see what they can do and what they need to learn to do next.  And while coaching is part of the process, a significant part of learning is self-driven with the goal of making a challenging project work by the end of a short period of time.

We tried this rapid type of implementation in our Data-X class at Berkeley.  Even to our surprise, we found that our students learned to create amazing projects in just a few weeks, not two years! We saw projects that identified knee problems from MRI/CAT scans, predicted energy use from past data, and identified fake news. With almost 150 students in the class, we are seeing over 25 projects in a semester.
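To make the “hit the ball first” idea concrete, here is a minimal sketch of the kind of first swing a student might take: fitting a straight-line trend to a short series of past readings and forecasting the next one, in plain Python. The numbers and the energy-use framing below are invented for illustration; they are not from any actual class project.

```python
# A first "swing": fit a straight-line trend to past readings and
# forecast the next one. All data is made up for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b, computed by hand."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

months = [1, 2, 3, 4, 5, 6]
usage = [120.0, 125.0, 123.0, 130.0, 134.0, 133.0]  # e.g. monthly kWh

a, b = fit_line(months, usage)
forecast = a * 7 + b  # extrapolate one step ahead
print(round(forecast, 1))  # → 137.4
```

The point is that the student immediately has something that runs and can be critiqued: Is a straight line the right model? What happens with noisier data? That critique is exactly what motivates the theory that comes next.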

Now don’t get me wrong, this model is not at all against theory.  It is absolutely essential to focus on the theoretical frameworks at every world-class university.  However, in this case, we are working to fill an implementation gap. Theory is still absolutely necessary, but it has to be the right theory, i.e. relevant and connected to the implementation and tools.  In fact, the course we developed includes a different balance of a) relevant theory, b) understanding of tools, and c) an open-ended project that serves as a defining challenge.

Our lesson learned: it seems to work!

Let me share an example of a comment I saw from a student who took the class:

“I think this class is so awesome because it teaches the tools and concepts that are most commonly used in workplace teams that are involved with data science and applied machine learning. The vast majority of teams that I’ve applied to within the past year use the tools taught in this class. When I arrived at my data science internship this summer, I already knew how to use most of my team’s stack.”

I cannot say that this is particularly good or bad as a first result, but it frankly seems promising to me.

How We Put It All Together:

For our rapid, implementation-oriented method to work, we needed to bring innovation behaviors into the approach.  In the first 4-5 weeks of the class, we actually do not ask students to write code for their projects. Instead, we have them develop what we would consider an “insightful story” or “narrative” that describes what they intend to build.  

In real life, every successful project, innovation, and/or venture starts with a story.  Typically, the story is used to test the concepts as well as to develop interest from the project’s other stakeholders.  By week five, students convert their story into a “low-tech demo,” which captures many aspects of the project’s implementation, but without code. It is a super-light prototype that can be easily modified until it’s “right” in multiple ways.

This approach is common to design as well as entrepreneurship, and it is also good practice for students to learn this innovation behavior.

After the low-tech demo, students get the green light to go into an agile development cycle and prepare for a demonstration in eight more weeks.  Similar to the MIT Media Lab, the slogan here is not “Publish or Perish,” instead, it is “Demo or Die.”


There is a solid stream of mathematical theory lectures, notebooks with code samples, and explanations of open-source computer science tools along the way.  When these components (project, tools, and theory) are integrated in the right balance, the result is a very rapid experiential learning curve.

Key concepts: Story first, low tech demo, agile sprint, coaching for innovation behavior, and the three integrated elements of theory, project, and tools.

There is much more nuance to what we have been experimenting with than I can cover here. I can say from experience that there are many ways for engineering projects to go wrong, and we are correcting for many of these issues in different ways. I’m sharing our experience with this approach because, even at this early stage, it does seem promising.


The post Learning Rapid Implementation of AI and Data appeared first on UC Berkeley Sutardja Center.

Our Students Can Make Data, AI, & Blockchain Projects Work in Real Life – and So Can You

A Focus on Implementation and Rapid Impact

A Signal in the Noise 

About two years ago, I had the thought that it was time to offer a new type of course in AI and data, possibly extensible to other digital transformations like blockchain. Creating anything of significant value in this area is no small challenge at UC Berkeley, because it would have to stand amid the gigantic contributions of amazing people who have already done so much in data, AI, and computing.

After all, this is the institution that developed Berkeley UNIX, introduced open source to the world, and created the IEEE floating-point standard, RAID disk storage, relational databases, and many of the most famous machine learning algorithms and tools. The scope has ranged from the most seminal theory (such as the NP-completeness of some of the world’s hardest problems) to the rollout of Spark, which has become the de facto method of managing big data.

Across this wide spectrum of people and capabilities, I just happen to be among those who like to focus on the more applied edge. Over the past 12 years, I’ve spent a great deal of effort bringing a true experiential learning component to subjects that are often considered complex to understand.  I have worked on this because, at Berkeley, we want to balance the theory with practice.  Over the years, thousands of students have taken courses from me and my Center because of this applied perspective.  Leaders everywhere hire our students because they possess all three of these important characteristics:

1) incredible technical depth
2) a holistic understanding of the larger problem
3) the psychological behaviors needed for real-life innovation.

The same is true for my approach to a data science/AI type of course.  My goal was to make it the class that I would want to take. That means a class where you actually learn the current state-of-the-art software tools and gain the ability to create real-life applications.  This is in contrast to solving artificial or toy problems. At Berkeley, we teach the class as “Applied Data Science for Venture Applications”, and informally, we have referred to this very applied, practical framework as Data-X.

An important part of the objective is literally to put more emphasis on implementation.  To start, if you can’t actually create or use the technology, then you have a major problem, whether you are a student, a company, or someone concerned about our national agenda.  Fast-forward two years: we are currently running this course for the third time, and we are seeing amazing results. In only three months, students with only Python programming and some background in probability are able to create applications that predict energy prices, detect knee problems from MRI scans, crawl the web to create new data sources, and even identify fake news.

On the other hand, if you look at technology projects in most organizations across the world, you will quickly discover that they are often challenged to deliver working implementations.  In fact, many things can go wrong. Sometimes people can’t even connect the theory with the practice.  At a deeper level, there are also issues when people understand the theory but do not understand the software toolsets.  It is like trying to build a skyscraper with sticks and mud instead of steel beams. Modern open-source tools (e.g. TensorFlow, pandas, etc.) have become incredibly powerful, but if you don’t know how to use them, you are reduced to improvising instead of using pre-existing, high-quality building blocks.
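The building-block point can be made concrete even with Python’s standard library. The sketch below contrasts improvised dictionary bookkeeping with the pre-existing `collections.Counter` for a word-frequency count, a common first step in text tasks such as fake-news detection. The toy sentence is invented purely for illustration.

```python
from collections import Counter

text = "fake news spreads fast and fake claims spread faster"  # toy input
words = text.split()

# Improvised version: hand-rolled dictionary bookkeeping.
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

# Building-block version: the same result from the standard library.
counts_fast = Counter(words)

assert counts == counts_fast  # identical contents
print(counts_fast.most_common(1))  # → [('fake', 2)]
```

The improvised loop works, but the building block is shorter, already debugged, and comes with extras like `most_common` — and that difference only grows as the blocks get bigger, from `Counter` up to pandas and TensorFlow.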

There are some computer science curricula in the US and around the world that have become almost all theory and very little practice.  Berkeley has never had this narrow view, but in numerous other places, students rarely build things that matter until they reach their first job.  It’s a continual battle over the role of the university. Academics in some disciplines have historically considered hands-on experiential education the role of a trade school, whereas the opposite is true in the medical field.

In this area, we need both theory and practice because the correct elements of theory serve as a map to understanding the practice, and conversely, the practical, experiential view of the subject is necessary to genuinely understand the theory.  You just can’t have one without the other. But the balance and nuance must be right.

Going one level deeper, if you look at the teaching approach in many technical curricula, most of the focus is on the theories needed to create the next generation of powerful tools, and much less on the theories required to effectively use the tools that already exist.  That part is left to the student. Again, we are not wrong to teach fundamental theory; this discussion is only about the approach needed to fill the gap when it comes to the practice of implementation.

Besides the gap between theory and implementation, there are actually still many other things that can go wrong in real life projects, particularly when people and organizations are considered as part of the scope:

  • Sometimes people are organized in silos, which means the holistic solution can get lost between the experts.
  • Sometimes, the problem is from overdesign, i.e. too complex, too expensive, or an approach that just takes too long.
  • Alternatively, it may not even be technically possible.
  • And in yet another form of failure, teams without the right balance of skills simply don’t know what they are doing, and they fall apart mid-project.

Specifically, in the domain of data, AI, and other digital transformations, the last industrial revolution has shown us that those who created the new machines (or at least learned to use them) ended up doing well.  In contrast, those who resisted or simply didn’t adapt were no longer able to stay relevant.  But building successful new technology capabilities requires a different type of technical learning.

And yet, the student teams that we teach don’t suffer from these same problems.  This is because they are actually learning both the technical and behavioral components necessary for innovation inside a framework developed for implementation.  And this is what we have been developing within our Data-X framework for rapid impact. And amazingly enough, it seems to work.

The point here is that we have an approach that helps students make blockchain, AI, and data projects work in real life. It’s not mysterious, and it’s actually quite tangible. So for all the firms and organizations that have been thinking about these types of projects: you can make it work as well.

The post Our Students Can Make Data, AI, & Blockchain Projects Work in Real Life – and So Can You appeared first on UC Berkeley Sutardja Center.