Semiconductor Companies Have Been Key Enablers of the AI Revolution

Five years ago, there was no integrated circuit (IC) product category called “Deep Learning Processors”.  Training and inference of large Artificial Intelligence (AI) models were performed primarily on high-end servers and Graphics Processing Units (GPUs).

What a difference five years can make.

Presently there are no fewer than 20 major fabless/IDM semiconductor companies offering AI chips.  They range from giants such as AMD, Nvidia, Intel, and Qualcomm to smaller pure-play vendors such as Graphcore, Cerebras, Groq, Gyrfalcon, and Hailo, among many others.  Building AI inference and training chips has not been exclusive to chip vendors: large consumers of chips such as Alphabet (Google), Amazon, and Tesla have opted to build their own optimized Systems-on-Chips (SoCs) for internal consumption.

Not surprisingly, AI processors come in a variety of shapes and forms.  Some are destined for datacenters, while others are optimized for edge applications, smart speakers, or even tiny IoT devices.  Some cater to large training jobs, while others shine at inference.  Vendors building chips for dense compute applications (e.g., Cerebras, Nvidia, and Graphcore) sell complete systems optimized for their processors and arguably should no longer be considered pure semiconductor companies.

Semiconductors: Key Enablers of AI

Artificial Intelligence is not a new technology; it has been around for many decades and has gone through periods of ebb and flow.  Its recent reemergence can be directly attributed to the following factors:

  1. Availability of inexpensive and distributed computational resources
  2. Availability of inexpensive memory and data storage
  3. Oodles of data (thanks to digitization), ideal for training large models
  4. Advent of advanced Machine Learning models such as Convolutional Neural Networks (CNNs), Transformers, and Generative Adversarial Networks (GANs)
  5. Large funding both from public and private sectors

Semiconductor technology can be credited with no fewer than three of these factors.  Make no mistake: in the absence of cheap, distributed, and elastic compute and storage, AI would have been “Dead on Arrival”.  Imagine trying to pre-train a large model such as GPT-3 (widely used in Natural Language Processing), with its 175 billion floating-point parameters, on a single server.  Training such a monster took hundreds of well-coordinated GPUs running for weeks.

To be fair, AI has been generous to the semiconductor industry as well.  It is clearly one of the reasons behind the recent “Renaissance” in the semiconductor market.  AI deserves credit for creating new market segments, new companies, and new product categories, leading to a larger “pie” for all to enjoy.

The Great Irony

Despite the pivotal role chip technology has played in the rise of AI, chip companies have yet to adopt AI in a big way to improve their own operational efficiency.  It is not hard to reach this conclusion.  Just pick a major chip vendor, launch a parametric search for a certain device, and ask yourself:

  1. Are you presented with recommendations for similar products, companion products, or products that other customers have considered?
  2. Do you get a list of part numbers for companion discretes offered by partner companies, and an easy way to access them?
  3. Do you get a list of resources (tutorials, videos, papers, blog posts) that can reduce design risk?
  4. Does the site remember your preferences to save you time on your next visit?
  5. Do you get the feeling of receiving “customized special” treatment?
  6. Are you offered a replacement for a competitor’s product, accompanied by a summary of its advantages?
  7. Do you have access to a large knowledgebase via a multilingual intelligent chatbot?
  8. Do you get critical updates on the products of interest to you?

I know, painting a group with a broad brush is suboptimal, and some would disagree with the assertions above, but we can all agree that chip vendors are definitely not at the forefront when it comes to “handholding” their target audience (design engineers).  You will be hard-pressed to reach the landing page of an eCommerce company without hearing the annoying pop of a chatbot.  Admittedly, a great many of them are useless, but some are excellent.

Where Can We Go from Here?

There are literally dozens and dozens of applications that have been touched by Machine Learning, but I find the entries in Figure 1 to be a decent representation of most.  Having held senior management roles at a number of semiconductor companies for nearly two decades, I have been exposed to many opportunities and challenges that are unique to this segment.  While I can’t claim that AI-centric tools could have improved every outcome, I can identify some that would have come in very handy during my tenure.

Allow me to present the following list.  The ability to provide intense technical support to design engineers is at the top of my list.  This is especially true for complex, multifaceted designs involving large, complex parts (where software and drivers are at play).  The ability to generate better demand forecasts holds the second position.  Last but not least, the ability to detect unusual, out-of-the-ordinary events (such as unexplainable variations in the pattern of bookings, delivery schedules, cancellations, push-outs, and pull-ins) can help vendors react quickly to shifting winds.  The latter is particularly important when it comes to detecting unusual behavior by the competition.  Below I have attempted to address ways AI can make a dent in these three areas.

I. Personalization / Tailored Help (Handholding)

Market research firm Gartner deserves credit for the term “Tailored Help”, and it means exactly what it says.  In essence, it is entirely possible (using Machine Learning) to learn, over time, the preferences of each site visitor and adjust the list of products and services presented to them accordingly.  Customers don’t want to waste time and often appreciate being directed to the area of their interest.  The major sticking point in personalization is preserving customers’ privacy while helping them accomplish their business objectives.  Gartner sums up the desires of customers as follows:

  1. Direct me to the right place
  2. Save me time
  3. Teach me something new and relevant
  4. Give me better ideas for using this product
  5. Help me through the purchase process
  6. Help me sort out large quantities of information

While the entries above are generic in nature, most apply to the semiconductor industry as well.  Helping design engineers pick the right parts, and doing whatever it takes to help them complete their projects successfully, can go a long way toward improving the overall customer experience.  The same is true of the sampling and procurement process.  Fortunately, AI has a few things to offer here.  Let us start with Natural Language Processing (NLP).  NLP is the largest and arguably the most complex (in my humble opinion) subsegment of Artificial Intelligence, but it has produced the most impressive outcomes.  NLP can handle a wide variety of tasks such as translation, sentiment analysis, and language generation and understanding.  NLP is the technology behind Amazon’s Alexa and other smart speakers.  I can see a variation of NLP playing a key role in part selection, especially when it comes to language understanding and translation in chatbots.  All major cloud computing vendors offer mature NLP services that can serve as building blocks.
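
To make this concrete, here is a minimal sketch of intent routing for such a chatbot, using a simple TF-IDF text classifier as a stand-in for a production NLP service.  The intent labels and sample queries are hypothetical; a real deployment would train on thousands of labeled support interactions or lean on a pretrained cloud NLP model.

```python
# Minimal sketch: route engineer questions to the right support workflow.
# Intent labels and training queries are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "need a 12-bit ADC with SPI under 10 mW",
    "looking for a low-power op amp for battery designs",
    "where is the errata sheet for this MCU",
    "is there an app note on PCB layout for this PHY",
    "lead time and pricing for 1k units",
    "can I get samples shipped to Germany",
]
intents = [
    "part_selection", "part_selection",
    "documentation", "documentation",
    "procurement", "procurement",
]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(queries, intents)

# Route a new visitor question to the matching workflow.
print(model.predict(["suggest a rail-to-rail op amp for a sensor front end"])[0])
```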

The second tool in our AI toolset is the “Recommender Engine”, which is also widely used by eCommerce sites.  I see no reason why it should not work in the context of recommending a certain chip and/or content (relevant papers, app notes, and articles) to aid engineers.  Last but not least, there is “Personalization”.  This capability too is applicable to semiconductors and is a well-understood, widely used AI technology.  In summary, it is not terribly difficult to build an automated product discovery/recommendation engine able to understand natural language in multiple languages.  This contraption can be created by bolting together three well-understood AI models.  It is by no means trivial and is a serious undertaking, but it is entirely possible and most definitely utilitarian.
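
As an illustration, here is a minimal content-based recommender sketch over short catalog descriptions.  The part numbers and descriptions are hypothetical; a production system would combine this with collaborative filtering over real browsing and purchase histories.

```python
# Minimal content-based recommender: suggest "similar parts" from
# short catalog descriptions. All part data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "ADC1234": "12-bit 1 MSPS SAR ADC, SPI interface, low power",
    "ADC1299": "16-bit 500 kSPS SAR ADC, SPI interface, low noise",
    "AMP5678": "rail-to-rail precision op amp, low offset, low power",
    "REG9012": "3.3 V low-dropout linear regulator, 500 mA",
}
parts = list(catalog)
vectors = TfidfVectorizer().fit_transform(catalog.values())
similarity = cosine_similarity(vectors)

def recommend(part, top_n=2):
    """Return the top_n catalog parts most similar to `part`."""
    i = parts.index(part)
    ranked = similarity[i].argsort()[::-1]
    return [parts[j] for j in ranked if j != i][:top_n]

print(recommend("ADC1234"))  # nearby ADCs rank above the regulator
```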

II. Demand Forecasting

Managing inventory is a critical task for any IC manufacturer.  The duration from wafer starts to finished, tested parts can be four months or more, which complicates inventory management immensely.  Overbuilding inflates a company’s inventories, tying up capital and creating an opportunity loss while adversely impacting the balance sheet.  Not building enough, on the other hand, can be just as disastrous and will invariably lead to missed shipments, and nothing loses customers faster than missed product deliveries.  This should explain the critical importance of having accurate demand forecasts that can accompany and confirm the forecasts coming from the field.

Since parameters such as bookings and billings vary over time, time-series forecasting techniques have long been used to predict product demand.  Classical forecasting models such as the Autoregressive Integrated Moving Average (ARIMA) have been applied to chip demand with mixed results.  In recent years, however, ensembles of AI-based Deep Learning models, such as LSTM (Long Short-Term Memory) combined with ARIMA, have produced remarkable results.
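
To make the hybrid concrete, here is a minimal sketch in which ARIMA captures the linear structure of a synthetic monthly bookings series and an LSTM is trained on the ARIMA residuals to pick up nonlinear patterns.  The series, model orders, and hyperparameters are illustrative assumptions, not a production recipe.

```python
# Hedged sketch of an ARIMA + LSTM hybrid forecast on synthetic bookings.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow import keras

rng = np.random.default_rng(0)
months = np.arange(120)
bookings = 100 + 0.5 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, 120)

# 1) Linear component via ARIMA; residuals hold what ARIMA misses.
arima = ARIMA(bookings, order=(2, 1, 2)).fit()
residuals = bookings - arima.predict(start=0, end=len(bookings) - 1)

# 2) Nonlinear component: LSTM over sliding windows of residuals.
window = 12
X = np.array([residuals[i:i + window] for i in range(len(residuals) - window)])
y = residuals[window:]
model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(16),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=20, verbose=0)

# 3) One-step-ahead forecast = ARIMA forecast + predicted residual.
next_linear = arima.forecast(steps=1)[0]
next_residual = model.predict(residuals[-window:][None, :, None], verbose=0)[0, 0]
print("next month's demand estimate:", next_linear + next_residual)
```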

The challenge at hand is to forecast the individual ingredients that sum to form the final forecast.  As an example, demand for a product can be impacted by the following cycles:

  1. Seasonal Cycle (Pre-Holiday build. See Fig. 4)
  2. Product Lifecycle (product introduction, rise, maturation, and sunset. See Fig. 2)
  3. Economic Cycles (economic contraction vs. expansion. See Fig. 3)
  4. Noise (Loss of a major customer or an unexpected calamity. See Fig. 5)

Different factors drive items 1, 2, and 3; modeling them separately and summing the outcomes will produce far superior forecasts, as sketched below.
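
As an illustration of this component-wise treatment, here is a minimal sketch using STL decomposition from statsmodels on a synthetic monthly demand series.  The series and its components are hypothetical stand-ins for real bookings data.

```python
# Minimal sketch: split demand into seasonal, trend (lifecycle/economic),
# and residual (noise) components, to be forecast separately and re-summed.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

months = pd.date_range("2015-01", periods=96, freq="MS")
rng = np.random.default_rng(1)
demand = pd.Series(
    200                                            # baseline
    + 0.8 * np.arange(96)                          # product lifecycle ramp
    + 25 * np.sin(2 * np.pi * np.arange(96) / 12)  # pre-holiday seasonality
    + rng.normal(0, 5, 96),                        # noise / one-off events
    index=months,
)

result = STL(demand, period=12).fit()
# result.seasonal, result.trend, and result.resid can each be forecast
# with a model suited to that component, then summed into one forecast.
print(result.trend.tail(3))
```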

III. Anomaly Detection & Competitive Monitoring

This is simply a matter of having tools that can detect unusual events in the overall pipeline, which starts with the product build and ends with product delivery.  This can mean monitoring key parameters such as bookings, billings, cancellations, pull-ins, push-outs, and noticeable changes in the number of RFQs, among many others.  The idea is to be alerted to unexplained changes and react quickly to adapt to them.

This is yet another area where AI can help.  Unsupervised Machine Learning models have done a phenomenal job of detecting out-of-the-ordinary events (anomalies).  Financial institutions use anomaly detection widely to combat fraud; you can thank this capability the next time your bank calls to alert you of an unusual transaction.
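
As a hedged illustration, the sketch below applies scikit-learn’s IsolationForest, a common unsupervised anomaly detector, to synthetic weekly order metrics.  The column choices and numbers are hypothetical stand-ins for real ERP data.

```python
# Minimal sketch: flag weeks whose order metrics look out of the ordinary.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Columns: bookings, billings, cancellations, pull-ins, push-outs (per week).
normal_weeks = rng.normal(loc=[500, 480, 20, 5, 8],
                          scale=[40, 40, 5, 2, 3], size=(104, 5))
detector = IsolationForest(contamination=0.05, random_state=0).fit(normal_weeks)

# A week with a spike in cancellations and push-outs should be flagged.
this_week = np.array([[510, 470, 95, 4, 40]])
print(detector.predict(this_week))  # -1 means "anomalous, investigate"
```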

Aside from monitoring a company’s internal metrics, anomaly detection combined with advanced web analytics can help vendors keep close tabs on their competitors and receive alerts when a notable change in behavior is detected.  Out-of-the-ordinary changes in the following parameters can be quite telling (see the sketch after this list):

  1. Press mentions
  2. Web traffic
  3. User engagement, page views, bounce rates
  4. Upstream and downstream sites
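
As a minimal illustration of the alerting idea, the sketch below flags days whose web-traffic estimate deviates sharply from the trailing month.  The series is synthetic, and a simple rolling z-score stands in for more sophisticated detectors.

```python
# Minimal sketch: alert on days whose traffic deviates sharply from
# the trailing 30-day history. The traffic series here is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
traffic = pd.Series(10_000 + rng.normal(0, 300, 90))
traffic.iloc[75] = 14_500  # e.g., a competitor's product-launch press spike

rolling = traffic.rolling(window=30)
zscore = (traffic - rolling.mean()) / rolling.std()
alerts = traffic[zscore.abs() > 3]
print(alerts)  # days whose traffic deviates > 3 sigma from the trailing month
```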

Considerations for Adopting AI

Being skeptical about investing in AI is not unusual.  Admittedly, the amount of hype in the domain is astonishing, and the efficacy of AI is at times overinflated.  Hype aside, there are plenty of success stories that have paid generous dividends for adopters.

If adoption of AI is a consideration in your semiconductor setting, I recommend seeking answers to the following three simple questions:

  1. Is your organization in possession of sizable historical data?
  2. Do you suspect there are repeating trends in this data?
  3. Would the ability to predict key metrics and outliers add value to your organization?

If the answer to all three questions is “Yes”, you can be assured that an initial investment in exploring AI is well justified (and frankly, the amount is not that large).  While developing a full-blown Machine Learning pipeline is neither easy nor inexpensive, building a limited prototype can be inexpensive, quick, and telling.

We are blessed with tools that can build decent models in a few weeks, and the process of adopting AI can be done in steps.  It turns out that all major cloud computing vendors (AWS, GCP, Azure, and IBM) offer proven models and platforms that can be readily used to implement very effective Machine Learning pipelines rapidly and cost-effectively.