Eric and Wendy Schmidt back Cambridge University effort to equip researchers with A.I. skills – CNBC

Google Executive Chairman Eric Schmidt (Photo: Win McNamee | Getty Images)

Schmidt Futures, the philanthropic foundation set up by billionaires Eric and Wendy Schmidt, is funding a new program at the University of Cambridge that's designed to equip young researchers with machine learning and artificial intelligence skills that have the potential to accelerate their research.

The initiative, known as the Accelerate Program for Scientific Discovery, will initially be aimed at researchers in science, technology, engineering, mathematics and medicine. However, it will eventually be open to those studying the arts, humanities and social sciences.

Some 32 PhD students will receive machine-learning training through the program in the first year, the university said, adding that the number will rise to 160 over five years. The aim is to build a network of machine-learning experts across the university.

"Machine learning and AI are increasingly part of our day-to-day lives, but they aren't being used as effectively as they could be, due in part to major gaps of understanding between different research disciplines," Professor Neil Lawrence, a former Amazon director who will lead the program, said in a statement.

"This program will help us to close these gaps by training physicists, biologists, chemists and other scientists in the latest machine learning techniques, giving them the skills they need."

The scheme will be run by four new early-career specialists, who are in the process of being recruited.

The Schmidt Futures donation will be used partly to pay the salaries of this team, which will work with the university's Department of Computer Science and Technology and external companies.

Guest lectures will be provided by research scientists at DeepMind, the London-headquartered AI research lab that was acquired by Google.

The size of the donation from Schmidt Futures has not been disclosed.

"We are delighted to support this far-reaching program at Cambridge," said Stuart Feldman, chief scientist at Schmidt Futures, in a statement. "We expect it to accelerate the use of new techniques across the broad range of research as well as enhance the AI knowledge of a large number of early-stage researchers at this superb university."

Machine Learning as a Service (MLaaS) Market Overview, Cost Structure Analysis, Growth Opportunities and Forecast to 2026 – 3rd Watch News

The Machine Learning as a Service (MLaaS) Market (2020) report provides an in-depth summary of MLaaS market status as well as product specification, technology development, and key manufacturers. The report gives a detailed analysis of market concerns such as MLaaS market share, CAGR status, market demand and up-to-date market trends with key market segments.

Analysis tools such as SWOT analysis and Porter's five forces model have been used to present in-depth knowledge of the Machine Learning as a Service (MLaaS) market. Tables and charts are included to help readers gain an accurate understanding of the market, which has also been analyzed in terms of value chain analysis and regulatory analysis.

Key players in the global Machine Learning as a Service (MLaaS) market include: H2O.ai, Google Inc., Predictron Labs Ltd, IBM Corporation, Ersatz Labs Inc., Microsoft Corporation, Yottamine Analytics, Amazon Web Services Inc., FICO, and BigML Inc.

Geographical Analysis:

The study details country-level aspects based on each segment and gives estimates of market size. It discusses the key regional trends contributing to the growth of the Machine Learning as a Service (MLaaS) market and analyzes the market potential of each country.

If AI is going to help us in a crisis, we need a new kind of ethics – MIT Technology Review

What opportunities have we missed by not having these procedures in place?

It's easy to overhype what's possible, and AI was probably never going to play a huge role in this crisis. Machine-learning systems are not mature enough.

But there are a handful of cases in which AI is being tested for medical diagnosis or for resource allocation across hospitals. We might have been able to use those sorts of systems more widely, reducing some of the load on health care, had they been designed from the start with ethics in mind.

With resource allocation in particular, you are deciding which patients are highest priority. You need an ethical framework built in before you use AI to help with those kinds of decisions.

So is ethics for urgency simply a call to make existing AI ethics better?

That's part of it. The fact that we don't have robust, practical processes for AI ethics makes things more difficult in a crisis scenario. But in times like this you also have a greater need for transparency. People talk a lot about the lack of transparency with machine-learning systems as black boxes. But there is another kind of transparency, concerning how the systems are used.

This is especially important in a crisis, when governments and organizations are making urgent decisions that involve trade-offs. Whose health do you prioritize? How do you save lives without destroying the economy? If an AI is being used in public decision-making, transparency is more important than ever.

What needs to change?

We need to think about ethics differently. It shouldn't be something that happens on the side or afterwards, something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design.

I sometimes feel "ethics" is the wrong word. What we're saying is that machine-learning researchers and engineers need to be trained to think through the implications of what they're building, whether they're doing fundamental research like designing a new reinforcement-learning algorithm or something more practical like developing a health-care application. If their work finds its way into real-world products and services, what might that look like? What kinds of issues might it raise?

Some of this has started already. We are working with some early-career AI researchers, talking to them about how to bring this way of thinking to their work. It's a bit of an experiment, to see what happens. But even NeurIPS [a leading AI conference] now asks researchers to include a statement at the end of their papers outlining potential societal impacts of their work.

You've said that we need people with technical expertise at all levels of AI design and use. Why is that?

I'm not saying that technical expertise is the be-all and end-all of ethics, but it's a perspective that needs to be represented. And I don't want to sound like I'm saying all the responsibility is on researchers, because a lot of the important decisions about how AI gets used are made further up the chain, by industry or by governments.

But I worry that the people who are making those decisions don't always fully understand the ways it might go wrong. So you need to involve people with technical expertise. Our intuitions about what AI can and can't do are not very reliable.

What you need at all levels of AI development are people who really understand the details of machine learning working with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by privacy may be very different from what a lawyer means by privacy, and you can end up with people talking past each other. That's why it's important for these different groups to get used to working together.

You're pushing for a pretty big institutional and cultural overhaul. What makes you think people will want to do this rather than set up ethics boards or oversight committees, which always make me sigh a bit because they tend to be toothless?

Yeah, I also sigh. But I think this crisis is forcing people to see the importance of practical solutions. Maybe instead of saying, "Oh, let's have this oversight board and that oversight board," people will be saying, "We need to get this done, and we need to get it done properly."

Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts. – DocWire…

Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts.

PLoS Med. 2020 Jun;17(6):e1003149

Authors: Atabaki-Pasdar N, Ohlsson M, Viñuela A, Frau F, Pomares-Millan H, Haid M, Jones AG, Thomas EL, Koivula RW, Kurbasic A, Mutie PM, Fitipaldi H, Fernandez J, Dawed AY, Giordano GN, Forgie IM, McDonald TJ, Rutters F, Cederberg H, Chabanova E, Dale M, De Masi F, Thomas CE, Allin KH, Hansen TH, Heggie A, Hong MG, Elders PJM, Kennedy G, Kokkola T, Pedersen HK, Mahajan A, McEvoy D, Pattou F, Raverdy V, Häussler RS, Sharma S, Thomsen HS, Vangipurapu J, Vestergaard H, 't Hart LM, Adamski J, Musholt PB, Brage S, Brunak S, Dermitzakis E, Frost G, Hansen T, Laakso M, Pedersen O, Ridderstråle M, Ruetten H, Hattersley AT, Walker M, Beulens JWJ, Mari A, Schwenk JM, Gupta R, McCarthy MI, Pearson ER, Bell JD, Pavo I, Franks PW

BACKGROUND: Non-alcoholic fatty liver disease (NAFLD) is highly prevalent and causes serious health complications in individuals with and without type 2 diabetes (T2D). Early diagnosis of NAFLD is important, as this can help prevent irreversible damage to the liver and, ultimately, hepatocellular carcinomas. We sought to expand etiological understanding and develop a diagnostic tool for NAFLD using machine learning.

METHODS AND FINDINGS: We utilized the baseline data from IMI DIRECT, a multicenter prospective cohort study of 3,029 European-ancestry adults recently diagnosed with T2D (n = 795) or at high risk of developing the disease (n = 2,234). Multi-omics (genetic, transcriptomic, proteomic, and metabolomic) and clinical (liver enzymes and other serological biomarkers, anthropometry, measures of beta-cell function, insulin sensitivity, and lifestyle) data comprised the key input variables. The models were trained on MRI-image-derived liver fat content (<5% or ≥5%) available for 1,514 participants. We applied LASSO (least absolute shrinkage and selection operator) to select features from the different layers of omics data and random forest analysis to develop the models. The prediction models included clinical and omics variables separately or in combination. A model including all omics and clinical variables yielded a cross-validated receiver operating characteristic area under the curve (ROCAUC) of 0.84 (95% CI 0.82, 0.86; p < 0.001), compared with a ROCAUC of 0.82 (95% CI 0.81, 0.83; p < 0.001) for a model including 9 clinically accessible variables. The IMI DIRECT prediction models outperformed existing noninvasive NAFLD prediction tools. One limitation is that these analyses were performed in adults of European ancestry residing in northern Europe, and it is unknown how well these findings will translate to people of other ancestries exposed to environmental risk factors that differ from those of the present cohort. Another key limitation of this study is that the prediction was done on a binary outcome of liver fat quantity (<5% or ≥5%) rather than a continuous one.

CONCLUSIONS: In this study, we developed several models with different combinations of clinical and omics data and identified biological features that appear to be associated with liver fat accumulation. In general, the clinical variables showed better prediction ability than the complex omics variables. However, the combination of omics and clinical variables yielded the highest accuracy. We have incorporated the developed clinical models into a web interface (see: https://www.predictliverfat.org/) and made it available to the community.

TRIAL REGISTRATION: ClinicalTrials.gov NCT03814915.
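The modeling recipe summarized above (LASSO feature selection followed by a random forest, scored by cross-validated ROC AUC) maps onto a short scikit-learn pipeline. The sketch below is illustrative only and is not the IMI DIRECT code: the input files and column choices are invented, and an L1-penalised logistic regression stands in for LASSO on the binary liver-fat outcome.

```python
# Illustrative sketch of the abstract's approach: LASSO-style feature
# selection, then a random forest, evaluated by cross-validated ROC AUC.
# File names and preprocessing are placeholders, not the study pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X = np.load("clinical_and_omics_features.npy")  # placeholder input matrix
y = np.load("liver_fat_at_least_5pct.npy")      # binary outcome (<5% vs >=5%)

pipe = Pipeline([
    # L1-penalised logistic regression plays the role of LASSO selection.
    ("select", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    ("forest", RandomForestClassifier(n_estimators=500, random_state=0)),
])

aucs = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated ROC AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```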

PMID: 32559194 [PubMed as supplied by publisher]

Trending News Machine Learning in Finance Market Key Drivers, Key Countries, Regional Landscape and Share Analysis by 2025 | Ignite Ltd, Yodlee, Trill…

The global Machine Learning in Finance Market is carefully researched in the report, with a focus on top players and their business tactics, geographical expansion, market segments, competitive landscape, manufacturing, and pricing and cost structures. Each section of the research study is specially prepared to explore key aspects of the global Machine Learning in Finance Market. For instance, the market dynamics section digs deep into the drivers, restraints, trends, and opportunities of the market. With qualitative and quantitative analysis, we help you with thorough and comprehensive research on the global Machine Learning in Finance Market. We have also focused on SWOT, PESTLE, and Porter's Five Forces analyses of the market.

Leading players of the global Machine Learning in Finance Market are analyzed taking into account their market share, recent developments, new product launches, partnerships, mergers or acquisitions, and markets served. We also provide an exhaustive analysis of their product portfolios to explore the products and applications they concentrate on when operating in the global Machine Learning in Finance Market. Furthermore, the report offers two separate market forecasts: one for the production side and another for the consumption side. It also provides useful recommendations for new as well as established players in the market.

The final Machine Learning in Finance report will include an analysis of the impact of COVID-19 on this market.

Machine Learning in Finance market competition by top manufacturers/key players profiled:

Ignite Ltd, Yodlee, Trill A.I., MindTitan, Accenture, ZestFinance

With the slowdown in world economic growth, the Machine Learning in Finance industry has also suffered a certain impact, but it has still maintained relatively optimistic growth. Over the past four years the Machine Learning in Finance market has grown at an average annual rate of 15%, from $XXX million in 2014 to $XXX million in 2019. This report's analysts believe that in the next few years the market will expand further, reaching $XXX million by 2024.

Segmentation by Product:

Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning

Segmentation by Application:

Banks, Securities Companies

Competitive Analysis:

The global Machine Learning in Finance Market is highly fragmented, and the major players have used various strategies such as new product launches, expansions, agreements, joint ventures, partnerships, acquisitions, and others to increase their footprint in this market. The report includes market shares of the Machine Learning in Finance Market for Global, Europe, North America, Asia-Pacific, South America and Middle East & Africa.

Scope of the Report: The all-encompassing research weighs various aspects including, but not limited to, important industry definitions, product applications, and product types. The proactive approach to analysis of investment feasibility, significant return on investment, supply chain management, import and export status, consumption volume and end use offers more value to the overall statistics on the Machine Learning in Finance Market. All factors that help business owners identify the next leg of growth are presented through self-explanatory resources such as charts, tables, and graphic images.

Our industry professionals are working relentlessly to understand, assemble and deliver timely assessments of the impact of the COVID-19 disaster on many corporations and their clients, to help them make sound business decisions. We acknowledge everyone who is doing their part in this financial and healthcare crisis.

Table of Contents

Report Overview: It includes the major players of the global Machine Learning in Finance Market covered in the research study, research scope, market segments by type, market segments by application, years considered for the research study, and objectives of the report.

Global Growth Trends: This section focuses on industry trends, where market drivers and top market trends are highlighted. It also provides growth rates of key producers operating in the global Machine Learning in Finance Market. Furthermore, it offers production and capacity analysis, where marketing pricing trends, capacity, production, and production value of the market are discussed.

Market Share by Manufacturers: Here, the report provides details about revenue by manufacturers, production and capacity by manufacturers, price by manufacturers, expansion plans, mergers and acquisitions, products, market entry dates, distribution, and market areas of key manufacturers.

Market Size by Type: This section concentrates on product type segments, where production value market share, price, and production market share by product type are discussed.

Market Size by Application: Besides an overview of the global Machine Learning in Finance Market by application, it gives a study of consumption in the market by application.

Production by Region: Here, the production value growth rate, production growth rate, import and export, and key players of each regional market are provided.

Consumption by Region: This section provides information on the consumption in each regional market studied in the report. Consumption is discussed on the basis of country, application, and product type.

Company Profiles: Almost all leading players of the global Machine Learning in Finance Market are profiled in this section. The analysts have provided information about their recent developments in the market, products, revenue, production, business, and company.

Market Forecast by Production: The production and production value forecasts included in this section are for the global Machine Learning in Finance Market as well as for key regional markets.

Market Forecast by Consumption: The consumption and consumption value forecasts included in this section are for the global Machine Learning in Finance Market as well as for key regional markets.

Value Chain and Sales Analysis: It deeply analyzes customers, distributors, sales channels, and the value chain of the global Machine Learning in Finance Market.

Key Findings: This section gives a quick look at the important findings of the research study.

About Us: Report Hive Research delivers strategic market research reports, statistical surveys, industry analysis and forecast data on products and services, markets and companies. Our clientele ranges from global business leaders to government organizations, SMEs, individuals and start-ups, top management consulting firms, and universities. Our library of 700,000+ reports targets high-growth emerging markets in the USA, Europe, Middle East, Africa and Asia Pacific, covering industries like IT, Telecom, Semiconductor, Chemical, Healthcare, Pharmaceutical, Energy and Power, Manufacturing, Automotive and Transportation, and Food and Beverages. This large collection of insightful reports assists clients in staying ahead of the competition. We help in business decision-making on aspects such as market entry strategies, market sizing, market share analysis, sales and revenue, technology trends, competitive analysis, product portfolio, and application analysis.

Contact Us:

Report Hive Research

500, North Michigan Avenue,

Suite 6014,

Chicago, IL 60611,

United States

Website: https://www.reporthive.com

Email: [emailprotected]

Phone: +1 312-604-7084

5 Reasons Artificial Intelligence Will Improve Greenhouse Production – Greenhouse Grower

Artificial intelligence (AI) involves using computers to do things that traditionally require human intelligence. This means creating algorithms to classify, analyze, and draw predictions from data. It also involves acting on data, learning from new data, and improving over time.

That's the definition of AI, at least. But what does it actually mean for greenhouse growers?

According to Gursel Karacor, Senior Data Scientist at Grodan, a supplier of sustainable stone wool growing media solutions for the horticulture market, greenhouses will, to a large extent, be autonomous in the near future.

"My mission is the realization of autonomous greenhouses through the use of all this data with state-of-the-art machine learning methodologies," Karacor says. "I want to realize this goal step-by-step in five years."

Gursel Karacor is a Senior Data Scientist with Grodan.

This Startup Is Trying to Foster an AI Art Scene in Korea – Adweek

A South Korean startup is holding a competition to fill one of the world's first galleries for machine learning-generated art in a bid to foster a nascent artificial intelligence creativity scene in the country.

The company, Pulse9, which makes AI-powered graphics tools, is soliciting art pieces that make use of machine learning tech in some way, whether to produce an image out of whole cloth or to restyle or supplement an artist's work, through the end of September.

The project is a notable addition to a burgeoning global community of technologists, new media artists and other creatives who are exploring the bounds of machine creativity through art, spurred by recent research advances that have made AI-generated content more realistic and elaborate than ever.

The medium had perhaps its biggest mainstream breakthrough in 2018, when Christie's Auction House sold its first piece of AI-generated art for nearly half a million dollars: a classical-style painting of a fictional character named Edmond de Belamy. That was also the moment that inspired the team at Pulse9, which had launched an AI tool earlier that year to help draw and color a Korean style of digital comic called webtoons.

"We asked ourselves, 'Could we also sell paintings?' and we started looking for art platform companies to work with," Pulse9 spokesperson Yeongeun Park said.

The company teamed with an art platform called Art Together on a series of crowdfunded AI pieces that proved to be more popular than expected (one hit its goal a full week ahead of schedule), and the team began considering parlaying the effort into a bigger project.

"With great attention from the public and the good funding results, we gained confidence in pioneering the Korean AI art market," Park said. "So, we eventually decided to open our own AI art gallery."

The company acknowledges that questions of authorship and originality still hang over the concept of AI art, but stresses that the gallery is about collaboration between humans and technology rather than AI simply replacing artists. Even pieces generated entirely by machines require a host of human touches, whether it's curating a collection of visuals for training or adjusting training regimens to achieve a desired result.

"The theme of this competition is 'Can AI art enhance human artistic creativity?'" Park said. "We hope that this competition will also be an opportunity to discover creative, competent and new artists who would like to engage AI tools as a new artistic medium in their artwork."

The goal is to establish AIA Gallery as a well-recognized institution in the art world and educate people on the potential for AI-powered creativity. The organizers hope the process will also inspire other efforts and create an AI creativity hub in the country.

"Groups or communities of AI artists have formed and are gradually growing, especially overseas," Park said. "In the case of Korea, the AI art market has not been well-recognized yet, but we've been continuing to play our role with our own initiative."

The AIA Gallery recently partnered with one of the leading startups in the new space, Playform, which is led by Rutgers University Art and AI Lab director Ahmed Elgammal (after learning about the company from an Adweek article).

Progress in generative AI creativity isn't confined to the art world, either. Agencies have started to experiment with various AI-generated graphics in campaigns, and brands have filed a slew of patent applications around the central technology powering the revolution: a neural net structure called a generative adversarial network.
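For readers unfamiliar with the term, a generative adversarial network pits a generator against a discriminator that tries to tell its output from real data. The PyTorch sketch below is purely illustrative: the toy data, layer sizes and hyperparameters are invented for the example, and it is not the technology of any company mentioned in the article.

```python
# Minimal, illustrative GAN training loop: a generator learns to produce
# samples the discriminator cannot tell apart from "real" ones.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch():
    # Stand-in for a curated training collection (e.g. artwork images).
    return torch.randn(batch, data_dim) * 0.5 + 2.0

for step in range(2000):
    # 1) Discriminator step: label real samples 1, generated samples 0.
    real = real_batch()
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(D(real), torch.ones(batch, 1)) +
              loss_fn(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make the discriminator output 1 on fakes.
    fake = G(torch.randn(batch, latent_dim))
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```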

Effects of the Alice Preemption Test on Machine Learning Algorithms – IPWatchdog.com

According to the approach embraced by McRO and BASCOM, while machine learning algorithms bringing a slight improvement can pass the eligibility test, algorithms paving the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives.

In the past decade or so, humanity has gone through drastic changes as artificial intelligence (AI) technologies such as recommendation systems and voice assistants have seeped into every facet of our lives. While the number of patent applications for AI inventions has skyrocketed, almost a third of these applications are rejected by the U.S. Patent and Trademark Office (USPTO), and the majority of these rejections are due to the claimed invention being ineligible subject matter.

The inventive concept may be attributed to different components of machine learning technologies, such as using a new algorithm, feeding more data, or using a new hardware component. However, this article will exclusively focus on the inventions achieved by Machine Learning (M.L.) algorithms and the effect of the preemption test adopted by U.S. courts on the patent-eligibility of such algorithms.

Since the Alice decision, the U.S. courts have adopted different views related to the role of the preemption test in eligibility analysis. While some courts have ruled that lack of preemption of abstract ideas does not make an invention patent-eligible [Ariosa Diagnostics Inc. v. Sequenom Inc.], others have not referred to it at all in their patent eligibility analysis. [Enfish LLC v. Microsoft Corp., 822 F.3d 1327]

Contrary to those examples, recent cases from Federal Courts have used the preemption test as the primary guidance to decide patent eligibility.

In McRO, the Federal Circuit ruled that the algorithms in the patent application prevented preemption of all processes for achieving automated lip-synchronization of 3-D characters. The court based this conclusion on evidence of the availability of an alternative set of rules for achieving the automation other than the patented method. It held that the patent was directed to a specific structure to automate the synchronization and did not preempt the use of all of the rules for this method, given that different sets of rules achieving the same automated synchronization could be implemented by others.

Similarly, the court in BASCOM ruled that the claims were patent-eligible because they recited a specific, discrete implementation of the abstract idea of filtering content and did not preempt all possible ways to implement the image-filtering technology.

The analysis of the McRO and BASCOM cases reveals two important principles for the preemption analysis:

Machine learning can be defined as a mechanism that searches for patterns and feeds intelligence into a machine so that it can learn from its own experience without explicit programming. Although the common belief is that data is the most important component of machine learning technologies, machine learning algorithms are equally important to the proper functioning of these technologies, and their importance cannot be overstated.

Therefore, inventive concepts enabled by new algorithms can be vital to the effective functioning of machine learning systems; enabling new capabilities and making systems faster or more energy efficient are examples of this. These inventions are likely to be the subject of patent applications. However, the preemption test adopted by the courts in the above-mentioned cases may lead to certain types of machine learning algorithms being held ineligible subject matter. Below are some possible scenarios.

The first situation relates to new capabilities enabled by M.L. algorithms. When a new machine learning algorithm adds a new capability or enables the implementation of a process, such as image recognition, for the first time, preemption concerns will likely arise. If the patented algorithm is indispensable for the implementation of that technology, it may be held ineligible based on the McRO case. This is because there are no other alternative means to use this technology and others would be prevented from using this basic tool for further development.

For example, a M.L. algorithm which enabled the lane detection capability in driverless cars may be a standard/must-use algorithm in the implementation of driverless cars that the court may deem patent ineligible for having preemptive effects. This algorithm clearly equips the computer vision technology with a new capability, namely, the capability to detect boundaries of road lanes. Implementation of this new feature on driverless cars would not pass the Alice test because a car is a generic tool, like a computer, and even limiting it to a specific application may not be sufficient because it will preempt all uses in this field.

Should the guidance of McRO and BASCOM be followed, algorithms that add new capabilities and features may be excluded from patent protection simply because there are no other available alternatives for implementing those capabilities. These algorithms' use may be so indispensable for the implementation of the technology that they are deemed to create preemptive effects.

Secondly, M.L. algorithms which are revolutionary may also face eligibility challenges.

The history of how deep neural networks have developed will be explained to demonstrate how highly-innovative algorithms may be stripped of patent protection because of the preemption test embraced by McRO and subsequent case law.

Deep Belief Networks (DBNs) are a type of Artificial Neural Network (ANN). ANNs were trained with a back-propagation algorithm, which adjusts weights by propagating the output error backwards through the network. However, the problem with ANNs was that as depth increased with the addition of more layers, the error vanished to zero, and this severely affected overall performance, resulting in less accuracy.
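This vanishing-error problem is easy to reproduce numerically. The NumPy sketch below is an illustration added here, not from the original article: it propagates a unit error signal backwards through a deep chain of sigmoid layers and shows its norm collapsing.

```python
# Illustrative demo of vanishing gradients in a deep sigmoid network.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth, width = 30, 64
weights = [rng.normal(0.0, 1.0, (width, width)) / np.sqrt(width)
           for _ in range(depth)]

# Forward pass through a deep chain of sigmoid layers.
activations = [rng.normal(size=width)]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass: start from a unit error signal at the output and
# apply the chain rule layer by layer (sigmoid'(z) = a * (1 - a)).
grad = np.ones(width)
norms = []
for W, a in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * a * (1.0 - a))
    norms.append(np.linalg.norm(grad))

print(f"gradient norm just below the output: {norms[0]:.3e}")
print(f"gradient norm at the input layer:    {norms[-1]:.3e}")
# The norm shrinks by orders of magnitude on the way back, which is
# the vanishing-error problem described above.
```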

From the early 2000s, there has been a resurgence in the field of ANNs owing to two major developments: increased processing power and more efficient training algorithms that made training deep architectures feasible. The ground-breaking algorithm that enabled the further development of ANNs in general and DBNs in particular was Hinton's greedy training algorithm.

Thanks to this new algorithm, DBNs became applicable to a variety of problems that had been roadblocks to the use of new technologies, such as image processing, natural language processing, automatic speech recognition, and feature extraction and reduction.

As can be seen, Hinton's fast learning algorithm revolutionized the field of machine learning because it made learning easier; as a result, technologies such as image processing and speech recognition have gone mainstream.

If patented and challenged in court, Hinton's algorithm would likely be invalidated considering previous case law. In McRO, the court reasoned that the algorithm at issue should not be invalidated because the use of a set of rules within the algorithm is not a must and other methods can be developed and used. Hinton's algorithm, however, would inevitably preempt some AI developers from engaging in further development of DBN technologies, because it is a base algorithm that made DBNs feasible to implement, so it may be considered a must. Hinton's algorithm enabled the implementation of image recognition technologies, and some may argue, based on McRO and Enfish, that a patent on Hinton's algorithm would be preempting because it is impossible to implement image recognition technologies without it.

Even if an algorithm is a must-use for a technology, there is no reason to exclude it from patent protection. Patent law inevitably forecloses certain areas from further development by granting exclusive rights through patents. All patents foreclose competitors to some extent as a natural consequence of exclusive rights.

As stated in the Mayo judgment, exclusive rights provided by patents "can impede the flow of information that might permit, indeed spur, invention, by, for example, raising the price of using the patented ideas once created, requiring potential users to conduct costly and time-consuming searches of existing patents and pending patent applications, and requiring the negotiation of complex licensing arrangements."

The exclusive right granted by a patent is only one side of the implicit agreement between society and the inventor. In exchange for the benefit of exclusivity, inventors are required to disclose their invention to the public, so that this knowledge becomes public, available for use in further research and for making new inventions that build upon the previous one.

If inventors turn to trade secrets to protect their inventions due to the hostile approach of patent law to algorithmic inventions, the knowledge base in this field will narrow, making it harder to build upon previous technology. This may lead to a slow-down, and even the possible death, of innovation in this industry.

The fact that an algorithm is a must-use should not lead to the conclusion that it cannot be patented. Patent rights may be granted even for processes that have primary, and even sole, utility in research. Literally, a microscope is a basic tool for scientific work, but surely no one would assert that a new type of microscope lies beyond the scope of the patent system. Even if such a microscope is widely used and indispensable, it can still be given patent protection.

According to the approach embraced by McRO and BASCOM, while M.L. algorithms bringing a slight improvement, such as higher accuracy or higher speed, can pass the eligibility test, algorithms paving the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives for implementing that revolutionary technology.

Considering that the goal of most AI inventions is to equip computers with new capabilities, or to bring qualitative improvements to abilities such as seeing, hearing, or even making informed judgments without being fed complete information, most AI inventions would have a high likelihood of being held patent ineligible. Applying this preemption test to M.L. algorithms would put such algorithms outside of patent protection.

Thus, a M.L. algorithm which increases accuracy by 1% may be eligible, while a ground-breaking M.L. algorithm which is a must-use because it covers all uses in that field may be excluded from patent protection. This would result in rewarding slight improvements with a patent but disregarding highly innovative and ground-breaking M.L. algorithms. Such a consequence is undesirable for the patent system.

This may also result in deterring the AI industry from innovating in fundamental areas. As an undesired consequence, innovation efforts may shift to small improvements instead of innovations solving more complex problems.

Deploying Machine Learning Has Never Been This Easy – Analytics India Magazine

According to PwC, AI's potential global economic impact will reach USD 15.7 trillion by 2030. However, enterprises that look to deploy AI are often hampered by a lack of time, trust and talent. Especially in highly regulated sectors such as healthcare and finance, convincing customers to adopt AI methodologies is an uphill task.

Of late, the AI community has seen a marked shift in AI adoption with the advent of AutoML tools and the introduction of customised hardware to cater to the needs of the algorithms. One of the most widely used AutoML tools in the industry is H2O Driverless AI. And when it comes to hardware, Intel has been consistently updating its tool stack to meet the high computational demands of AI workflows.

Now H2O.ai and Intel, two companies that have been spearheading the democratisation of AI, are joining hands to develop solutions that leverage their software and hardware capabilities respectively.

AI and machine-learning workflows are complex, and enterprises need more confidence in the validity of their AI models than a typical black-box environment can provide. The inexplicability and the complexity of feature engineering can be daunting to non-experts. So far, AutoML has proven to be the one-stop solution to these problems: such tools have alleviated the challenges by providing automated workflows, deployment-ready models and much more.

H2O.ai, especially, has pioneered the AutoML segment. The company has developed an open-source, distributed in-memory machine learning platform with linear scalability that includes a module called H2OAutoML, which can be used to automate the machine learning workflow, including automatic training and tuning of many models within a user-specified time limit.
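H2OAutoML is documented in H2O's open-source Python API; a run might look like the sketch below, where the file name and column names are placeholders, not part of any example from the article.

```python
# pip install h2o  (requires a local Java runtime)
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Placeholder dataset and column names; substitute your own.
frame = h2o.import_file("train.csv")
target = "label"
frame[target] = frame[target].asfactor()   # treat as classification
features = [c for c in frame.columns if c != target]

# Train and tune many models within a user-specified time limit.
aml = H2OAutoML(max_models=20, max_runtime_secs=600, seed=1)
aml.train(x=features, y=target, training_frame=frame)

print(aml.leaderboard.head())   # models ranked by cross-validated metric
best = aml.leader               # best model, ready for predict()/export
```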

H2O.ai's flagship product, Driverless AI, goes further: it can be used to fully automate some of the most challenging and productive tasks in applied data science, such as feature engineering, model tuning, model ensembling and model deployment.

But for these AI-based tools to work seamlessly, they need the backing of hardware dedicated to handling the computational intensity of machine learning operations.

Intel has been at the forefront of the digital revolution for over half a century. Today, Intel offers a wide range of technologies, including its Xeon Scalable processors, Optane Solid State Drives and optimized Intel software libraries, that bring a much-needed mix of enhanced performance, AI inference, network functions, persistent memory bandwidth, and security.

Integrating H2O.ai's software portfolio with hardware and software technologies from Intel has resulted in solutions that can handle almost all the woes of an AI enterprise, from automated workflows to explainability to production-ready code that can be deployed anywhere.

For example, H2O Driverless AI, an automatic machine-learning platform, enables data science experts and beginners alike to complete within minutes AI tasks that usually take months. Today, more than 18,000 companies use open-source H2O in mission-critical use cases for finance, insurance, healthcare, retail, telco, sales, and marketing.

The software capabilities of H2O.ai, combined with Intel's hardware infrastructure, which includes 2nd Generation Xeon Scalable processors, Optane Solid State Drives and Ethernet Network Adapters, can empower enterprises to optimize performance and accelerate deployment.

Enterprises looking to increase productivity and business value while enjoying the competitive advantages of AI innovation no longer have to wait, thanks to hardware-backed AutoML solutions.

Googles latest experiment is Keen, an automated, machine-learning based version of Pinterest – TechCrunch

A new project called Keen is launching today from Google's in-house incubator for new ideas, Area 120, to help users track their interests. The app is like a modern rethinking of the Google Alerts service, which allows users to monitor the web for specific content. Except instead of sending emails about new Google Search results, Keen leverages a combination of machine learning techniques and human collaboration to help users curate content around a topic.

Each individual area of interest is called a "keen," a word often used to describe someone with intellectual quickness.

The idea for the project came about after co-founder C.J. Adams realized he was spending too much time on his phone mindlessly browsing feeds and images to fill his downtime. He realized that time could be better spent learning more about a topic he was interested in, perhaps something he had always wanted to research more or a skill he wanted to learn.

To explore this idea, he and four colleagues at Google worked in collaboration with the company's People and AI Research (PAIR) team, which focuses on human-centered machine learning, to create what has now become Keen.

To use Keen, which is available both on the web and on Android, you first sign in with your Google account and enter a topic you want to research. This could be something like "learning to bake bread," "bird watching" or "learning about typography," suggests Adams in an announcement about the new project.

Keen may suggest additional topics related to your interest. For example, type in "dog training" and Keen could suggest "dog training classes," "dog training books," "dog training tricks," "dog training videos" and so on. Click on the suggestions you want to track, and your keen is created.

When you return to the keen, you'll find a pinboard of images linking to web content that matches your interests. In the dog training example, Keen found articles and YouTube videos, blog posts featuring curated lists of resources, an Amazon link to dog training treats and more.

For every collection, the service uses Google Search and machine learning to help discover more content related to the given interest. The more you add to a keen and organize it, the better these recommendations become.
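Keen's internals are not public, so purely as an illustration of this kind of content-based loop, the sketch below scores candidate pages against the items already saved in a keen using TF-IDF cosine similarity; all of the strings are invented.

```python
# Illustrative content-based recommender: items already in a keen form a
# "taste" profile, and candidate pages are ranked by similarity to it.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

saved_items = [
    "positive reinforcement dog training basics",
    "clicker training tutorial for new puppies",
]
candidates = [
    "how to teach a dog to sit using treats",
    "best hiking trails near the city",
    "dog obedience classes: what to expect",
]

vec = TfidfVectorizer()
matrix = vec.fit_transform(saved_items + candidates)

profile = np.asarray(matrix[:len(saved_items)].mean(axis=0))  # keen profile
scores = cosine_similarity(profile, matrix[len(saved_items):])[0]

for text, score in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {text}")
```

Adding more items sharpens the profile and hence the ranking, which mirrors the observation above that recommendations improve as a collection grows and is organized.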

It's like an automated version of Pinterest, in fact.

Once a keen is created, you can optionally add to the collection, remove items you don't want and share the keen with others to allow them to also add content. The resulting collection can be either public or private. Keen can also email you alerts when new content is available.

Google, to some extent, already uses similar techniques to power its news feed in the Google app. The feed, in that case, uses a combination of items from your Google Search history and topics you explicitly follow to find news and information it can deliver to you directly on the Google app's home screen. Keen, however, isn't tapping into your search history. It's only pulling content based on interests you directly input.

And unlike the news feed, a keen isn't necessarily focused only on recent items. Any sort of informative, helpful information about the topic can be returned. This can include relevant websites, events, videos and even products.

But as a Google project, and one that asks you to authenticate with your Google login, the data it collects is shared with Google. Keen, like anything else at Google, is governed by the company's privacy policy.

Though Keen today is a small project inside a big company, it represents another step toward the continued personalization of the web. Tech companies long ago realized that connecting users with more of the content that interests them increases their engagement, session length, retention and positive sentiment for the service in question.

But personalization, unchecked, limits users' exposure to new information or dissenting opinions. It narrows a person's worldview. It creates filter bubbles and echo chambers. Algorithm-based recommendations can send users searching for fringe content further down dangerous rabbit holes, even radicalizing them over time. And in extreme cases, radicalized individuals become terrorists.

Keen would be a better idea if it paired machine learning with topical experts. But it doesn't add a layer of human expertise on top of its tech, beyond the friends and family you specifically invite to collaborate, if you choose to invite anyone at all. That leaves the system wanting better human editorial curation, and perhaps in need of a narrower focus to start.
