Taking Micro Machine Learning to the MAX78000

What you'll learn:

• What Maxim Integrated's MAX78000 and its evaluation kit bring to low-power machine learning.
• How the chip's CNN accelerator handles applications like keyword spotting and face identification.

I tend to do only a few hands-on articles a year, so I look for cutting-edge platforms that developers will want to check out. Maxim Integrated's MAX78000 evaluation kit fits in this bucket. The MAX78000 is essentially an Arm Cortex-M4F microcontroller with a lot of hardware around it, including a convolutional-neural-network (CNN) accelerator designed by Maxim (Fig. 1). This machine-learning (ML) support allows the chip to handle chores like identifying voice keywords or even faces in camera images in real time without busting the power budget.

1. The MAX78000 includes Cortex-M4F and RISC-V cores as well as a CNN accelerator.

The chip also includes a RISC-V core that caught my eye. However, the development tools are so new that the RISC-V support is still in the works; the Cortex-M4F is the main processor. Even the CNN support is just out of the beta stage, but that's what this article concentrates on.

The MAX78000 has the usual microcontroller peripheral complement, including a range of serial ports, timers, and serial audio interfaces like I2S. It even has a parallel camera interface. Among the analog peripherals is an 8-channel, 10-bit sigma-delta ADC. There are four comparators as well.

The chip has a large 512-kB flash memory along with 128 kB of SRAM and a boot ROM that allows more complex boot procedures such as secure-boot support. There's on-chip key storage as well as CRC and AES hardware support. We will get into the CNN support a little later. The GitHub-based documentation covers some of the features I outline here in step-by-step detail.

The development tools are free and based on Eclipse, which is the basis for other platforms like Texas Instruments' Code Composer Studio and Silicon Labs' Simplicity Studio. Maxim doesn't do a lot of customization, but there's enough to facilitate using hardware like the MAX78000 while making it easy to utilize third-party plug-ins and tools, which can be quite handy when dealing with cloud or IoT development environments. The default installation includes examples and tutorials that enable easy testing of the CNN hardware and other peripherals.

The MAX78000 development board features two LCD displays. The larger, 3.5-in TFT touch-enabled display is for the processor, while the second, smaller display provides power-management information. The chip doesn't have a display controller built in, so it uses a serial interface to work with the larger display. The power-tracking support is sophisticated, but I won't delve into that now.

There's a 16-MB QSPI flash chip that can be handy for storing image data. In addition, a USB bridge to the flash chip allows for faster and easier downloads.

The board also adds some useful devices like a digital microphone, a 3D accelerometer, and a 3D gyro. Several buttons and LEDs round out the peripherals.

There are a couple of JTAG headers; the RISC-V core has its own. As noted, I didn't play with the RISC-V core this time around since it's not required for using the CNN support, although it could be. Right now, the Maxim tools generate C code for the Cortex-M4F to set up the CNN hardware. The CNN hardware is designed to handle a single model at a time, but it's possible to swap in new models quickly.

As with most ML hardware, the underlying hardware tends to be hidden from most programmers, providing more of a black-box operation where you set up the box and feed it data, with results coming out the other end. This works well if the models are available; it's then a matter of training them with different information or using already trained models. The challenge comes when developing and training new models, which is something I'll avoid discussing here.
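
To make that flow concrete, here's a minimal sketch in C of the setup-feed-read pattern. The function names and class count below are placeholders of my own, not the actual API emitted by Maxim's model compiler, which takes care of weight loading, clocking, and memory-layout details for you.

```c
/*
 * Minimal sketch of the "black box" CNN flow on the MAX78000.
 * The function names below are illustrative placeholders; the C code that
 * Maxim's model compiler generates uses its own names and handles the
 * weight loading, clocking, and memory layout.
 */
#include <stdint.h>

#define NUM_CLASSES 21  /* e.g., 20 keywords plus an "unknown" class (assumed) */

extern void cnn_setup(void);                             /* load weights/bias, configure layers */
extern void cnn_feed_input(const int8_t *data, int len); /* copy input into CNN data memory */
extern void cnn_start(void);                             /* kick off inference */
extern void cnn_wait_done(void);                         /* poll or sleep until complete */
extern void cnn_read_output(int32_t *out, int len);      /* pull back the per-class scores */

int classify(const int8_t *sample, int sample_len)
{
    int32_t scores[NUM_CLASSES];

    cnn_setup();                    /* typically done once at boot */
    cnn_feed_input(sample, sample_len);
    cnn_start();
    cnn_wait_done();
    cnn_read_output(scores, NUM_CLASSES);

    /* The detected class is simply the highest-scoring output */
    int best = 0;
    for (int i = 1; i < NUM_CLASSES; i++) {
        if (scores[i] > scores[best]) {
            best = i;
        }
    }
    return best;
}
```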

I did try out two of the models provided by Maxim: a Keyword Spotting application and a Face Identification (FaceID) application. The Keyword Spotting app is essentially a speech-recognition system that can be used to listen for a keyword to kick off a cloud-based discussion, which is how most Alexa-style voice systems work since the cloud handles everything after the keyword is recognized.

On the other hand, being able to recognize a number of different keywords makes it possible to build a voice-based command system, such as those used in many car navigation systems. As usual, the Cortex-M4F handles the input and does a bit of munging to provide suitable inputs to the CNN accelerator (Fig. 2). The detected class output specifies which keyword is recognized, if any. The application can then utilize this information.

2. The Cortex-M4F handles the initial audio input stream prior to handing off the information to the CNN accelerator.
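
Here's a rough idea of what that front-end munging can look like in C: collect a fixed window of PCM samples from the digital microphone, scale them to the signed 8-bit range the accelerator's input layer expects, and map the detected class back to a keyword. The window length, scaling, and keyword list are assumptions for illustration; the actual demo's preprocessing and vocabulary differ in the details.

```c
/*
 * Sketch of the audio preprocessing the Cortex-M4F performs before handing
 * data to the CNN accelerator. Window size, scaling, and the keyword list
 * are assumptions, not the values used by Maxim's Keyword Spotting demo.
 */
#include <stdint.h>

#define WINDOW_SAMPLES 16384   /* roughly 1 s of audio at 16 kHz (assumed) */

/* Scale 16-bit PCM samples down to the signed 8-bit CNN input range */
void prepare_cnn_input(const int16_t *pcm, int8_t *cnn_in)
{
    for (int i = 0; i < WINDOW_SAMPLES; i++) {
        /* Drop the low 8 bits: map [-32768, 32767] onto [-128, 127] */
        cnn_in[i] = (int8_t)(pcm[i] >> 8);
    }
}

/* Map the detected class index back to a keyword (list is illustrative) */
static const char *keywords[] = { "up", "down", "left", "right", "stop", "go", "unknown" };

const char *keyword_for_class(int class_idx)
{
    const int n = sizeof(keywords) / sizeof(keywords[0]);
    return (class_idx >= 0 && class_idx < n) ? keywords[class_idx] : "unknown";
}
```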

The FaceID system highlights the camera support of the MAX78000 (Fig. 3). This could be used to recognize a face or identify a particular part moving by on an assembly line. The sample application can operate using canned inputs, as shown in the figure, or from the camera.

3. The FaceID application highlights the CNN's ability to process images in real time.
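
At a high level, a FaceID-style application compares the network's output for a captured image against stored signatures of known subjects. The sketch below shows that comparison step in C; the embedding length, database size, and match threshold are my own assumptions rather than values from Maxim's demo.

```c
/*
 * Sketch of how a FaceID-style application can use the CNN output: the
 * network produces an embedding vector for the input image, and the
 * Cortex-M4F compares it against stored embeddings of known subjects.
 * The sizes and threshold below are assumptions for illustration.
 */
#include <stdint.h>

#define EMBEDDING_LEN   64      /* assumed embedding size */
#define NUM_SUBJECTS    8       /* assumed size of the on-device database */
#define MATCH_THRESHOLD 5000    /* assumed squared-distance cutoff */

/* Database of known-face embeddings, filled in during enrollment */
static int8_t known_faces[NUM_SUBJECTS][EMBEDDING_LEN];

/* Return the index of the closest known face, or -1 if nothing is close enough */
int identify_face(const int8_t *embedding)
{
    int best = -1;
    int32_t best_dist = INT32_MAX;

    for (int s = 0; s < NUM_SUBJECTS; s++) {
        /* Squared Euclidean distance between the new embedding and a stored one */
        int32_t dist = 0;
        for (int i = 0; i < EMBEDDING_LEN; i++) {
            int32_t d = (int32_t)embedding[i] - known_faces[s][i];
            dist += d * d;
        }
        if (dist < best_dist) {
            best_dist = dist;
            best = s;
        }
    }
    return (best_dist <= MATCH_THRESHOLD) ? best : -1;
}
```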

Using the defaults is as easy as compiling and programming the chip. Maxim provides all of the sample code and procedures. These can be modified somewhat, but retraining a model is a more involved exercise, though one that Maxim's documentation does cover. These examples provide an outline of what needs to be done as well as what needs to be changed to customize the solution.

Changing the model and application to something like a motor vibration-monitoring system would be a significant job requiring a new model, but one that the chip should be able to handle. It will require much more machine-learning and CNN expertise, so it's not something that should be taken lightly.
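
To give a feel for what such a project might involve, here's a hypothetical front end that turns a window of accelerometer samples into coarse frequency-band energies a vibration model could consume. None of this is Maxim code; the frame size, bin count, and scaling are assumptions, and a real design would use an optimized FFT and a model trained on data from the actual machinery.

```c
/*
 * Hypothetical preprocessing for a vibration-monitoring model: reduce a
 * window of raw accelerometer samples to a handful of frequency-band
 * magnitudes that could serve as CNN input features.
 */
#include <stdint.h>
#include <math.h>

#define FRAME_LEN 256     /* samples per analysis window (assumed) */
#define NUM_BINS  32      /* coarse frequency bands fed to the model (assumed) */

void vibration_features(const int16_t *accel, int8_t *features)
{
    const float two_pi = 6.2831853f;

    for (int b = 0; b < NUM_BINS; b++) {
        /* Naive DFT magnitude at one frequency per band (fine for a sketch) */
        float re = 0.0f, im = 0.0f;
        float w = two_pi * (float)(b + 1) / (float)FRAME_LEN;
        for (int n = 0; n < FRAME_LEN; n++) {
            re += accel[n] * cosf(w * n);
            im -= accel[n] * sinf(w * n);
        }
        float mag = sqrtf(re * re + im * im) / FRAME_LEN;

        /* Clamp into the signed 8-bit range the accelerator expects */
        if (mag > 127.0f) mag = 127.0f;
        features[b] = (int8_t)mag;
    }
}
```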

The toolset supports models from platforms like TensorFlow and PyTorch (Fig. 4). This is useful because training isn't handled by the chip; it's done on platforms like a PC or cloud servers. Likewise, the models can be refined and tested on higher-end hardware, then pruned to fit on the MAX78000.

4. PyTorch is just one of the frameworks handled by the MAX78000 toolset. Training isn't done on the micro; Maxim's tools convert the models to code that drives the CNN hardware.

At this point, the CNN accelerator documentation is a bit sparse, as is the RISC-V support. Maxim's CNN model compiler kicks out C code that drops nicely into the Eclipse IDE. Debugging the regular application code is on par with other cross-development systems where remote debugging via JTAG is the norm.

Maxim also provides the MAX78000FTHR, the little brother of the evaluation kit (Fig. 5). This board doesn't have the display or other peripheral hardware, but most I/O is exposed. The board alone is only $25, and the chip is priced around $15 in small quantities. The GitHub-based documentation provides more details.

5. The evaluation kit has a little brother, the MAX78000FTHR.

The MAX78000 was fun to work with. It's a great platform for supporting ML applications on the edge. However, be aware that while it's a very-low-power solution, it's not in the same class as even a low-end Nvidia Jetson Nano. It will be interesting to check out the power-tracking support, since power utilization and requirements will likely be key factors in many MAX78000 applications, especially battery-based solutions.
