Effects of the Alice Preemption Test on Machine Learning Algorithms

According to the approach embraced by McRO and BASCOM, while machine learning algorithms that bring a slight improvement can pass the eligibility test, algorithms that pave the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives.

In the past decade or so, humanity has gone through drastic changes as artificial intelligence (AI) technologies such as recommendation systems and voice assistants have seeped into every facet of our lives. While the number of patent applications for AI inventions has skyrocketed, almost a third of these applications are rejected by the U.S. Patent and Trademark Office (USPTO), and the majority of these rejections are due to the claimed invention being ineligible subject matter.

The inventive concept may be attributed to different components of a machine learning technology, such as a new algorithm, more training data, or a new hardware component. However, this article focuses exclusively on inventions achieved by machine learning (M.L.) algorithms and on the effect of the preemption test adopted by U.S. courts on the patent eligibility of such algorithms.

Since the Alice decision, the U.S. courts have adopted different views on the role of the preemption test in the eligibility analysis. While some courts have ruled that the absence of complete preemption of an abstract idea does not make an invention patent eligible [Ariosa Diagnostics, Inc. v. Sequenom, Inc.], others have not referred to preemption at all in their eligibility analysis [Enfish, LLC v. Microsoft Corp., 822 F.3d 1327].

Contrary to those examples, recent cases from the Federal Circuit have used the preemption test as the primary guidance in deciding patent eligibility.

In McRO, the Federal Circuit ruled that the claimed algorithms did not preempt all processes for achieving automated lip-synchronization of 3-D characters. The court based this conclusion on evidence that alternative sets of rules existed for achieving the automation beyond the patented method. It held that the patent was directed to a specific structure for automating the synchronization and did not preempt all rules-based methods, given that different sets of rules achieving the same automated synchronization could be implemented by others.

Similarly, the court in BASCOM ruled that the claims were patent eligible because they recited a specific, discrete implementation of the abstract idea of filtering content and did not preempt all possible ways to implement content-filtering technology.

The analysis of the McRO and BASCOM cases reveals two important principles for the preemption analysis: first, claims do not preempt a field when alternative means of achieving the same result remain available to others; and second, claims directed to a specific, discrete implementation of an abstract idea, rather than to the idea itself, do not raise preemption concerns.

Machine learning can be defined as a mechanism that searches for patterns and feeds intelligence into a machine so that it can learn from its own experience without explicit programming. Although the common belief is that data is the most important component of machine learning technologies, machine learning algorithms are equally important to the proper functioning of these technologies, and their importance cannot be overstated.
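To make this definition concrete, here is a minimal Python sketch, with invented toy data and parameters, in which a machine "learns" an unknown rule from examples by gradient descent rather than being explicitly programmed with it:

```python
import numpy as np

# Toy "experience": noisy samples of a hidden rule y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + rng.normal(0, 0.05, size=100)

# The machine starts with no knowledge of the rule...
w, b = 0.0, 0.0

# ...and improves from experience via gradient descent on squared error.
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # close to the hidden y = 2x + 1
```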

Therefore, inventive concepts enabled by new algorithms can be vital to the effective functioning of machine learning systems: enabling new capabilities and making systems faster or more energy efficient are examples. These inventions are likely to be the subject of patent applications. However, the preemption test adopted by the courts in the above-mentioned cases may lead to certain types of machine learning algorithms being held ineligible subject matter. Below are some possible scenarios.

The first situation relates to new capabilities enabled by M.L. algorithms. When a new machine learning algorithm adds a new capability or enables the implementation of a process, such as image recognition, for the first time, preemption concerns are likely to arise. If the patented algorithm is indispensable to the implementation of that technology, it may be held ineligible under the reasoning of McRO: there are no alternative means of using the technology, so others would be prevented from using this basic tool for further development.

For example, an M.L. algorithm which enabled the lane-detection capability in driverless cars may become a standard, must-use algorithm in the implementation of driverless cars, which a court may deem patent ineligible for having preemptive effects. Such an algorithm clearly equips computer-vision technology with a new capability, namely the ability to detect the boundaries of road lanes. Implementing this new feature in driverless cars would not pass the Alice test, because a car is a generic tool, like a computer, and even limiting the claims to a specific application may not be sufficient if they preempt all uses in that field.

Should the guidance of McRO and BASCOM be followed, algorithms that add new capabilities and features may be excluded from patent protection simply because there are no available alternatives for implementing those capabilities. The use of such algorithms may be so indispensable to the technology that they are deemed to create preemptive effects.

Secondly, M.L. algorithms which are revolutionary may also face eligibility challenges.

The history of deep neural networks illustrates how highly innovative algorithms may be stripped of patent protection by the preemption test embraced by McRO and subsequent case law.

Deep Belief Networks (DBNs) are a type of Artificial Neural Network (ANN). ANNs were traditionally trained with a back-propagation algorithm, which adjusts weights by propagating the output error backwards through the network. However, the problem with ANNs was that as depth increased with additional layers, the back-propagated error vanished towards zero, which severely affected overall performance and resulted in lower accuracy.
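The vanishing-error problem can be illustrated numerically. The Python sketch below (layer sizes and weight scales are invented for illustration) back-propagates an error through a deep stack of sigmoid layers and prints its norm, which shrinks geometrically with depth because each sigmoid derivative is at most 0.25:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
depth, width = 20, 50

# Forward pass through a deep stack of sigmoid layers.
a = rng.random(width)
weights, activations = [], []
for _ in range(depth):
    W = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
    a = sigmoid(W @ a)
    weights.append(W)
    activations.append(a)

# Back-propagate a unit error; its magnitude collapses layer by layer.
grad = np.ones(width)
for layer in range(depth - 1, -1, -1):
    a = activations[layer]
    grad = weights[layer].T @ (grad * a * (1 - a))  # chain rule through sigmoid
    if layer % 5 == 0:
        print(f"layer {layer:2d}: gradient norm = {np.linalg.norm(grad):.3e}")
```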

From the early 2000s, there has been a resurgence in the field of ANNs owing to two major developments: increased processing power and more efficient training algorithms, which made training deep architectures feasible. The ground-breaking algorithm which enabled the further development of ANNs in general, and DBNs in particular, was Hinton's greedy layer-wise training algorithm.
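In rough outline, the greedy approach trains a deep network one layer at a time, treating each trained layer's output as the input "data" for the next. The sketch below is a deliberately simplified illustration of that idea (single-step contrastive divergence, no bias terms, invented toy data), not a reproduction of Hinton's published algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=10, lr=0.1):
    """Train one RBM with single-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
    for _ in range(epochs):
        # Positive phase: infer hidden units from the data.
        h_prob = sigmoid(data @ W)
        h_state = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one reconstruction step.
        v_recon = sigmoid(h_state @ W.T)
        h_recon = sigmoid(v_recon @ W)
        # Move towards data statistics, away from reconstruction statistics.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    return W

# Greedy layer-wise pretraining: train one layer at a time, then feed its
# hidden representation to the next layer as if it were data.
data = (rng.random((200, 64)) < 0.3).astype(float)  # toy binary "images"
weights, layer_input = [], data
for n_hidden in (32, 16, 8):
    W = train_rbm(layer_input, n_hidden)
    weights.append(W)
    layer_input = sigmoid(layer_input @ W)  # representation for next layer

print([W.shape for W in weights])  # [(64, 32), (32, 16), (16, 8)]
```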

Thanks to this new algorithm, DBNs could be applied to a variety of problems that had previously been roadblocks to the use of new technologies, such as image processing, natural language processing, automatic speech recognition, and feature extraction and reduction.

As can be seen, Hinton's fast learning algorithm revolutionized the field of machine learning because it made learning in deep architectures practical and, as a result, technologies such as image processing and speech recognition have gone mainstream.

If patented and challenged in court, Hinton's algorithm would likely be invalidated under the existing case law. In McRO, the court reasoned that the algorithm at issue should not be invalidated because the use of its particular set of rules was not a must, and other methods could be developed and used. Hinton's algorithm, by contrast, would inevitably preempt some AI developers from further developing DBN technologies, because it is a foundational algorithm that made DBNs feasible to implement, and it may therefore be considered a must. Hinton's algorithm enabled the implementation of image-recognition technologies, and some may argue, based on McRO and Enfish, that a patent on Hinton's algorithm would be preemptive because it is impossible to implement image-recognition technologies without it.

Even if an algorithm is a must-use for a technology, that is no reason to exclude it from patent protection. Patent law inevitably forecloses certain areas from further development by granting exclusive rights. All patents foreclose competitors to some extent as a natural consequence of those rights.

As stated in the Mayo judgment, exclusive rights provided by patents "can impede the flow of information that might permit, indeed spur, invention, by, for example, raising the price of using the patented ideas once created, requiring potential users to conduct costly and time-consuming searches of existing patents and pending patent applications, and requiring the negotiation of complex licensing arrangements."

The exclusive right granted by a patent is only one side of the implicit agreement between society and the inventor. In exchange for the benefit of exclusivity, inventors are required to disclose their inventions to the public, so that this knowledge becomes available for use in further research and for making new inventions that build upon the previous ones.

If inventors turn to trade secrets to protect their inventions due to patent law's hostile approach to algorithmic inventions, the knowledge base in this field will narrow, making it harder to build upon previous technology. This may lead to a slowdown, and even the possible death, of innovation in this industry.

The fact that an algorithm is a must-use should not lead to the conclusion that it cannot be patented. Patent rights may even be granted for processes whose primary, and even sole, utility lies in research. A microscope, for instance, is a basic tool for scientific work, but surely no one would assert that a new type of microscope lies beyond the scope of the patent system. Even if such a microscope is widely used and indispensable, it can still be given patent protection.

According to the approach embraced by McRO and BASCOM, while M.L. algorithms bringing a slight improvement, such as higher accuracy or speed, can pass the eligibility test, algorithms paving the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives for implementing that revolutionary technology.

Considering that the goal of most AI inventions is to equip computers with new capabilities, or to bring qualitative improvements to abilities such as seeing, hearing, or even making informed judgments without being fed complete information, most AI inventions would have a high likelihood of being held patent ineligible. Applying this preemption test to M.L. algorithms would put many of them outside of patent protection.

Thus, an M.L. algorithm which increases accuracy by 1% may be eligible, while a ground-breaking M.L. algorithm may be excluded from patent protection as a must-use that covers all uses in its field. This would reward slight improvements with a patent while disregarding highly innovative and ground-breaking M.L. algorithms. Such a consequence is undesirable for the patent system.

This may also deter the AI industry from pursuing innovation in fundamental areas. As an undesired consequence, innovation efforts may shift to small improvements instead of solutions to more complex problems.
