Hands-on with Microsoft ELL
When Microsoft presented its new project ELL, the Embedded Learning Library, to the public at the end of June 2017, the media echo failed to acknowledge its groundbreaking potential. Only a few tech-focused news sites referred to the press release and the related GitHub project. This rather underwhelming public attention is not the result of a misguided project decision at Microsoft, but quite the opposite.
In fact, it revealed how few people really understand how machine learning and the Internet of Things (IoT) need to merge with one another in order to make IoT a successful undertaking in the future. Current machine learning frameworks and workflows are not useful for deploying ML algorithms to embedded devices, because they cannot create stand-alone models for different hardware architectures. That is why Microsoft's approach to this problem is definitely worth looking into.
What ELL currently is
In order to understand what makes ELL so fantastic, we first need to take a closer look at the pipeline pictured above. The starting point in this pipeline, which we put together, is the Microsoft Cognitive Toolkit (CNTK). We do not want to go into much detail about CNTK at this point, but you could roughly say that CNTK is Microsoft's answer to Google's well-known TensorFlow. CNTK – like TensorFlow – enables you to construct and train a model as well as to export it into a file (see step 1 in the flow chart above) which captures the architecture and the model parameters. This file can then be processed by the ELL library, which converts it to its internal model architecture. This is necessary in order to export the model as a file in the ELL format (see step 2 in the flow chart).
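Step 1 boils down to serializing the network's architecture and its learned parameters into a single file, which a converter can later read back and rebuild in its own internal representation. CNTK's real export writes a binary format, so the sketch below is only an illustration of that idea in plain Python – every name and value in it is made up, not part of CNTK's or ELL's API:

```python
import json
import os
import tempfile

# Illustration of step 1: a "trained model" is just an architecture
# description plus learned parameters, serialized into one file.
# (CNTK's actual export format is binary; all names here are made up.)
model = {
    "architecture": [
        {"type": "dense", "inputs": 4, "outputs": 3, "activation": "relu"},
        {"type": "dense", "inputs": 3, "outputs": 1, "activation": "sigmoid"},
    ],
    "parameters": {
        "dense_0/W": [[0.1] * 3] * 4,   # toy weights, stand-ins for trained values
        "dense_0/b": [0.0] * 3,
        "dense_1/W": [[0.5]] * 3,
        "dense_1/b": [0.0],
    },
}

path = os.path.join(tempfile.gettempdir(), "model.json")
with open(path, "w") as f:
    json.dump(model, f)

# A converter such as ELL would now read this file and rebuild the graph
# in its own internal model architecture (step 2 in the flow chart).
with open(path) as f:
    restored = json.load(f)
```

The essential point is that the file alone fully describes the model, with no dependency on the framework that trained it.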
If you think this sounds pretty complicated and redundant, you are correct. Even though ELL is meant to be a library, right now it acts more like a model compiler that translates a CNTK model into an ELL model. There are probably three reasons for this:
1. The format of CNTK models does not seem to be suitable for the ELL compiler in the next step of the pipeline. This is most likely because the format was never designed with translation to embedded targets in mind.
2. Judging from the GitHub project, the step of creating models in CNTK should not be necessary in the future; the ELL library itself will provide the means to construct and train models. Right now these capabilities are quite limited, which makes sense from an engineering perspective, as they add convenience but no new functionality to the pipeline.
3. The CNTK framework is pretty sophisticated and still asserting its position in the vast ecosystem of machine learning frameworks. It seems as if Microsoft does not want to force users who have just managed to master CNTK to familiarize themselves with yet another framework. From this perspective it makes sense to use ELL merely as a model translator.
As ELL is currently more or less "just" a model translator for CNTK, we can use the translated model to create a file for an embedded IoT project. This is where ELL could really become a game changer: unlike other libraries, ELL comes with an inherent cross compiler based on the LLVM compiler framework. For those who have never heard of LLVM: briefly put, it is a construction kit for compilers. You only have to define how your input is translated to the LLVM intermediate representation (IR); from there on, LLVM handles the translation to other architectures.
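To make the construction-kit idea concrete: a frontend only has to emit LLVM IR, a small typed assembly-like language, and LLVM's backends then lower that IR to x86, ARM, and so on. The toy emitter below generates valid IR text for a scalar multiply-accumulate, the kind of primitive a neural-network compiler produces in bulk (the function name and structure are our own illustration, not ELL's output):

```python
def emit_fma_ir(name="fma"):
    """Emit LLVM IR text for f(a, b, c) = a * b + c on 32-bit floats.

    A model compiler such as ELL emits many such snippets, roughly one
    per graph operation; LLVM's backends then translate the IR to
    machine code for whichever target triple is requested.
    """
    return "\n".join([
        f"define float @{name}(float %a, float %b, float %c) {{",
        "entry:",
        "  %prod = fmul float %a, %b",   # a * b
        "  %sum = fadd float %prod, %c",  # (a * b) + c
        "  ret float %sum",
        "}",
    ])

ir = emit_fma_ir()
print(ir)
```

Feeding this text to LLVM's `llc` tool with, say, `-mtriple=armv7-linux-gnueabihf` would produce ARM assembly without the frontend knowing anything about ARM – which is exactly the leverage ELL gets for free.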
This is the missing link in current approaches to shipping machine learning models to IoT devices! Since machine learning has so far been more or less a desktop and server-side phenomenon, nobody has really cared about translating the models to the various embedded architectures. With this approach, the ELL compiler accepts the ELL model as input (see step 3 in the flow chart) and outputs an object binary or assembler code for the specified target architecture.
This in turn is just what every embedded developer needs in order to integrate the model into an IoT project, which is typically a C/C++ project. Even if some modern embedded systems are able to run an operating system and a Python interpreter (e.g. the Raspberry Pi), truly efficient real-time IoT devices will still host bare-metal applications written in C or C++, the dominant programming languages in this field.
Finally, the whole project is cross-compiled for the target architecture and deployed to the hardware (see step 4 in the flow chart).
So in the end you could say that ELL is currently more or less a translator/compiler for CNTK models, with the potential to leave the CNTK part out in the future. Right now, ELL can only handle a subset of the CNTK node types and mainly focuses on convolutional neural networks. But a closer look at the code makes it clear that Microsoft does not intend to stop here. In fact, they seem to have only begun.
What ELL could become
Setting up the described toolchain currently takes several days, depending on the preferred operating system and the number of bugs to get rid of. This sounds really complicated, and at this very moment it actually is. But keep in mind that right now this is also the most consistent way to deploy machine learning models to embedded devices, and that this is only the first release of ELL. Every other approach so far basically consists of the following four steps:
1. Use your preferred machine learning library to train a model.
2. Extract the model parameters.
3. Code the prediction method of the model yourself, using the extracted parameters.
4. Integrate it in the main project.
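Step 3 is where this manual workflow hurts most: the prediction routine has to be re-implemented by hand from the extracted parameters. A minimal sketch for a logistic-regression model makes the fragility obvious (the weights and bias below are made-up stand-ins for values extracted in step 2):

```python
import math

# Parameters "extracted" from a trained model in step 2 -- these toy
# values are stand-ins, not real trained weights.
WEIGHTS = [0.8, -1.2, 0.3]
BIAS = 0.1

def predict(features):
    """Hand-coded logistic-regression forward pass (step 3).

    Every change to the model architecture -- an extra layer, a new
    activation -- means rewriting and re-debugging this function by
    hand, which is exactly the cost ELL's automated pipeline removes.
    """
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

p = predict([1.0, 0.5, 2.0])
```

For a three-weight model this is trivial; for a convolutional network with millions of parameters, hand-porting the forward pass becomes a project of its own.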
This really slows down the development process and makes decisions about changing the model architecture potentially harmful to the success of the whole project, as the model has to be rewritten and debugged from the ground up. ELL, on the other hand, promises a consistent and automated way to deploy trained machine learning models to arbitrary hardware architectures with just a few clicks.
So far the ELL approach is really appealing, and with the power of a big company like Microsoft behind it, it seems possible that development will proceed quite rapidly. Microsoft has announced breaking changes for the coming months, and one can only hope that the integration of the whole pipeline will become much tighter and easier to use. Nevertheless, it remains unclear whether Microsoft will keep its promise to maintain ELL as a true open source project. Right now, they seem a little hesitant to really engage with the community: the patch we submitted back in August has not been reviewed yet. We will keep you posted.