How Bruce K. Shibuya Is Changing The Game Of Business Intelligence In The Auto Industry – Yahoo Finance

SANTA CLARA, CA / ACCESSWIRE / March 23, 2020 / Bruce K. Shibuya has an impressive resume, but one aspect stands out above the rest: he's an expert in business intelligence, and he's proving it day after day. Bruce Shibuya is a mover and shaker in the automobile industry and has transformed its approach to artificial intelligence and machine learning.

Using predictive analytics and quality applications, Bruce K. Shibuya is working to change how business is done in auto manufacturing plants. Bruce K. Shibuya uses the data collected to identify unusual trends. This allows Shibuya and his team to find areas of manufacturing that aren't working well and change them to create better business outcomes.

Instead of taking the standard approach of looking at what's happening currently in manufacturing, Bruce K. Shibuya works to use historical data to understand what created the current manufacturing situation within an industry. Armed with this information, he's able to make decisions that positively affect the manufacturing plant moving forward.

In order to move forward with the technology currently available, Bruce K. Shibuya believes it's time to focus on using data to inform machine learning. Machine learning is a relatively new field within artificial intelligence, and few people are leading the charge the way Shibuya is.

Using the approach of analyzing historical data, Bruce K. Shibuya is able to solve manufacturing, design, and supply chain issues. Problems within manufacturing that typically take months to solve are able to be remedied in days.

As the Senior Director of Quality Engineering at Jabil, Bruce K. Shibuya's unique approach to business intelligence, artificial intelligence, and machine learning is being used to make widespread changes in the auto industry.

This is nothing new for Bruce K. Shibuya. In 2004, Hyundai outranked Toyota in the J.D. Power & Associates rankings for the first time ever, while Bruce K. Shibuya was serving as the vice president of quality at Hyundai. Shibuya's business intelligence program was built in partnership with Microsoft and continues to inform business decisions in the auto industry today.

After serving as an executive engineer for Toyota, Bruce K. Shibuya was awarded the Toyota Executive Management Award for attention to detail.

The auto industry is changing quickly, in no small part due to contributions from Bruce K. Shibuya. As artificial intelligence and machine learning continue to play large roles in the auto development process, it's expected that the contributions from Bruce K. Shibuya will continue to prove invaluable.

CONTACT:

Caroline Hunter, Web Presence, LLC, +1 786-551-9491

SOURCE: Web Presence, LLC

View source version on accesswire.com: https://www.accesswire.com/582175/How-Bruce-K-Shibuya-Is-Changing-The-Game-Of-Business-Intelligence-In-The-Auto-Industry

See the original post here:
How Bruce K. Shibuya Is Changing The Game Of Business Intelligence In The Auto Industry - Yahoo Finance

Google Teaches AI To Play The Game Of Chip Design – The Next Platform

As if it weren't bad enough that Moore's Law improvements in the density and cost of transistors are slowing, the cost of designing chips and of the factories that are used to etch them is also on the rise. Any savings on any of these fronts will be most welcome to keep IT innovation leaping ahead.

One of the promising frontiers of research right now in chip design is using machine learning techniques to actually help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. (You can see the full agenda and register to attend at this link; we hope to see you there.) The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscaler's key technologies, talked about in his keynote address at this week's 2020 International Solid State Circuits Conference in San Francisco.

Google, as it turns out, has more than a passing interest in compute engines, being one of the largest consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender, particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips, or decides to do custom Arm chips for its phones and other consumer devices.

With a certain amount of serendipity, some of the work that Google has been doing to run machine learning models across large numbers of different types of compute engines is feeding back into the work that it is doing to automate some of the placement and routing of IP blocks on an ASIC. (It is wonderful when an idea is fractal like that. . . .)

The pod of TPUv3 systems that Google showed off back in May 2018 can mesh together 1,024 of the tensor processors (which had twice as many cores and about a 15 percent clock speed boost as far as we can tell) to deliver 106 petaflops of aggregate 16-bit half precision multiplication performance (with 32-bit accumulation) using Google's own and very clever bfloat16 data format. Those TPUv3 chips are all cross-coupled using a 32x32 toroidal mesh so they can share data, and each TPUv3 core has its own bank of HBM2 memory. This TPUv3 pod is a huge aggregation of compute, which can do either machine learning training or inference, but it is not necessarily as large as Google needs to build. (We will be talking about Dean's comments on the future of AI hardware and models in a separate story.)

Suffice it to say, Google is hedging with hybrid architectures that mix CPUs and GPUs and perhaps someday other accelerators for reinforcement learning workloads, and hence the research that Dean and his peers at Google have been involved in is also being brought to bear on ASIC design.

"One of the trends is that models are getting bigger," explains Dean. "So the entire model doesn't necessarily fit on a single chip. If you have essentially large models, then model parallelism, dividing the model up across multiple chips, is important, and getting good performance by giving it a bunch of compute devices is non-trivial and it is not obvious how to do that effectively."
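To make the model parallelism Dean describes concrete, here is a minimal sketch in PyTorch (not Google's stack; the device names and layer sizes are invented) of splitting a network's layers across two accelerators so that a model too large for one device can still run:

```python
# Minimal model-parallel sketch: the first half of the network lives on one
# device, the second half on another, and activations hop between them.
# Assumes two CUDA devices are available; sizes are illustrative only.
import torch
import torch.nn as nn

class TwoDeviceModel(nn.Module):
    def __init__(self, dev0="cuda:0", dev1="cuda:1"):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.stage1 = nn.Sequential(nn.Linear(4096, 8192), nn.ReLU()).to(dev0)
        self.stage2 = nn.Sequential(nn.Linear(8192, 1024), nn.ReLU()).to(dev1)

    def forward(self, x):
        x = self.stage1(x.to(self.dev0))
        # The activation tensor is copied across devices at the split point.
        return self.stage2(x.to(self.dev1))

model = TwoDeviceModel()
out = model(torch.randn(32, 4096))
```

Getting good performance out of such a split, as Dean notes, is the hard part: naive splitting leaves devices idle while they wait for each other's activations.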

It is not as simple as taking the Message Passing Interface (MPI) that is used to dispatch work on massively parallel supercomputers and hacking it onto a machine learning framework like TensorFlow because of the heterogeneous nature of AI iron. But that might have been an interesting way to spread machine learning training workloads over a lot of compute elements, and some have done this. Google, like other hyperscalers, tends to build its own frameworks and protocols and datastores, informed by other technologies, of course.

Device placement, meaning putting the right neural network (or the portion of the code that embodies it) on the right device at the right time for maximum throughput in the overall application, is particularly important as neural network models get bigger than the memory space and the compute oomph of a single CPU, GPU, or TPU. And the problem is getting worse faster than the frameworks and hardware can keep up. Take a look:

The number of parameters just keeps growing and the number of devices being used in parallel also keeps growing. In fact, getting 128 GPUs or 128 TPUv3 processors (which is how you get the 512 cores in the chart above) to work in concert is quite an accomplishment, and is on par with the best that supercomputers could do back in the era before loosely coupled, massively parallel supercomputers using MPI took over and federated NUMA servers with actual shared memory were the norm in HPC more than two decades ago. As more and more devices are going to be lashed together in some fashion to handle these models, Google has been experimenting with using reinforcement learning (RL), a special subset of machine learning, to figure out where to best run neural network models at any given time as model ensembles are running on a collection of CPUs and GPUs. In this case, an initial policy is set for dispatching neural network models for processing, and the results are then fed back into the model for further adaptation, moving it toward more and more efficient running of those models.
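As a rough illustration of the reinforcement learning loop described above (a toy sketch, not Google's system: the cost model, sizes, and update rule are all invented for clarity), a policy can assign each operation to a device, observe a runtime, and nudge itself toward faster placements:

```python
# Toy REINFORCE-style device placement: sample a placement from a per-op
# softmax policy, score it with a crude runtime proxy, and update the policy
# toward placements that beat a running baseline.
import numpy as np

rng = np.random.default_rng(0)
num_ops, num_devices = 12, 4
op_cost = rng.uniform(1.0, 5.0, size=num_ops)   # per-operation compute cost
logits = np.zeros((num_ops, num_devices))        # learnable policy parameters

def simulated_runtime(placement):
    # Runtime proxy: the load on the busiest device dominates.
    loads = np.zeros(num_devices)
    for op, dev in enumerate(placement):
        loads[dev] += op_cost[op]
    return loads.max()

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

lr, baseline = 0.1, None
for step in range(500):
    probs = softmax(logits)
    placement = [rng.choice(num_devices, p=probs[op]) for op in range(num_ops)]
    runtime = simulated_runtime(placement)
    baseline = runtime if baseline is None else 0.9 * baseline + 0.1 * runtime
    advantage = baseline - runtime               # faster than usual => positive
    for op, dev in enumerate(placement):
        grad = -probs[op]                        # d(log prob)/d(logits)
        grad[dev] += 1.0
        logits[op] += lr * advantage * grad

print("learned placement:", softmax(logits).argmax(axis=1))
```

A production system would replace the runtime proxy with measurements from real hardware and the tabular policy with a neural network that generalizes across graphs, but the feedback loop has the same shape.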

In 2017, Google trained an RL model to do this work (you can see the paper here), and here is what the resulting placement looked like for the encoder and decoder; the RL model that placed the work on the two CPUs and four GPUs in the system under test ended up with 19.3 percent lower runtime for the training runs compared to the manually placed neural networks done by a human expert. Dean added that this RL-based placement of neural network work on the compute engines does kind of non-intuitive things to achieve that result, which is what seems to be the case with a lot of machine learning applications that, nonetheless, work as well as or better than humans doing the same tasks. The issue is that it can't take a lot of RL compute oomph to place the work on the devices to run the neural networks that are being trained themselves. In 2018, Google did research to show how to scale computational graphs to over 80,000 operations (nodes), and last year, Google created what it calls a generalized device placement scheme for dataflow graphs with over 50,000 operations (nodes).

"Then we started to think about using this, instead of using it to place software computation on different computational devices, we started to think about whether we could use this to do placement and routing in ASIC chip design, because the problems, if you squint at them, sort of look similar," says Dean. "Reinforcement learning works really well for hard problems with clear rules like chess or Go, and essentially we started asking ourselves: Can we get a reinforcement learning model to successfully play the game of ASIC chip layout?"

There are a couple of challenges to doing this, according to Dean. For one thing, chess and Go both have a single objective, which is to win the game and not lose the game. (They are two sides of the same coin.) With the placement of IP blocks on an ASIC and the routing between them, there is not a simple win or lose and there are many objectives that you care about, such as area, timing, congestion, design rules, and so on. Even more daunting is the fact that the number of potential states that have to be managed by the neural network model for IP block placement is enormous, as this chart below shows:

Finally, the true reward function that drives the placement of IP blocks, which runs in EDA tools, takes many hours to run.
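Because the true reward is too slow to query inside a training loop, cheap proxy metrics get blended into a single scalar. Here is a hedged sketch of what such a multi-objective proxy reward could look like; the metrics, weights, and toy placement below are invented for illustration and are not the proxy functions Google actually uses:

```python
# Multi-objective proxy reward for IP-block placement: several cheap metrics
# (here wirelength and congestion) are combined into one scalar so an RL agent
# can iterate quickly instead of waiting hours for the full EDA flow.
import itertools
import math

def proxy_wirelength(blocks, nets):
    # Half-perimeter wirelength over each net's bounding box, a common proxy.
    total = 0.0
    for net in nets:
        xs = [blocks[b][0] for b in net]
        ys = [blocks[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def proxy_congestion(blocks, min_spacing=1.0):
    # Crude congestion stand-in: count block pairs packed too close together.
    crowded = 0
    for (x1, y1), (x2, y2) in itertools.combinations(blocks.values(), 2):
        if math.hypot(x1 - x2, y1 - y2) < min_spacing:
            crowded += 1
    return float(crowded)

def placement_reward(blocks, nets, w_wire=1.0, w_cong=0.5):
    # Lower is better for every metric, so the reward is a negated weighted sum.
    return -(w_wire * proxy_wirelength(blocks, nets)
             + w_cong * proxy_congestion(blocks))

# Toy placement: three blocks at (x, y) positions, two nets connecting them.
blocks = {"sram": (0.0, 0.0), "mac_array": (2.0, 1.0), "io": (5.0, 0.5)}
nets = [("sram", "mac_array"), ("mac_array", "io")]
print(placement_reward(blocks, nets))
```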

"And so we have an architecture, I'm not going to go into a lot of detail, but essentially it tries to take a bunch of things that make up a chip design and then try to place them on the wafer," explains Dean, and he showed off some results of placing IP blocks on a low-powered machine learning accelerator chip (we presume this is the edge TPU that Google has created for its smartphones), with some areas intentionally blurred to keep us from learning the details of that chip. "We have had a team of human experts place this IP block and they had a couple of proxy reward functions that are very cheap for us to evaluate; we evaluated them in two seconds instead of hours, which is really important because reinforcement learning is one where you iterate many times. So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far."

Note: I am not sure we want to call AI algorithms superhuman. At least not if you don't want to have them banned.

Anyway, here is how that low-powered machine learning accelerator turned out, with the RL network versus people doing the IP block placement:

And here is a table that shows the difference between doing the placing and routing by hand and automating it with machine learning:

And finally, here is how the IP block on the TPU chip was handled by the RL network compared to the humans:

Look at how organic these AI-created IP blocks look compared to the Cartesian ones designed by humans. Fascinating.

Now having done this, Google then asked this question: Can we train a general agent that is quickly effective at placing a new design that it has never seen before? Which is precisely the point when you are making a new chip. So Google tested this generalized model against four different IP blocks from the TPU architecture and then also on the Ariane RISC-V processor architecture. This data pits people working with commercial tools against various levels of tuning on the model:

And here is some more data on the placement and routing done on the Ariane RISC-V chips:

"You can see that experience on other designs actually improves the results significantly, so essentially in twelve hours you can get the darkest blue bar," Dean says, referring to the first chart above, and then continues with the second chart above. "And this graph shows the wireline costs, where we see that if you train from scratch, it actually takes the system a little while before it sort of makes some breakthrough insight and is able to significantly drop the wiring cost, whereas the pretrained policy has some general intuitions about chip design from seeing other designs and gets to that level very quickly."

Just like we do ensembles of simulations to do better weather forecasting, Dean says that this kind of AI-juiced placement and routing of IP blocks in chip design could be used to quickly generate many different layouts, with different tradeoffs. And in the event that some feature needs to be added, the AI-juiced chip design game could re-do a layout quickly, not taking months to do it.

And most importantly, this automated design assistance could radically drop the cost of creating new chips. These costs are going up exponentially: according to data we have seen (thanks to IT industry luminary and Arista Networks chairman and chief technology officer Andy Bechtolsheim), an advanced chip design using 16 nanometer processes cost an average of $106.3 million, shifting to 10 nanometers pushed that up to $174.4 million, and the move to 7 nanometers costs $297.8 million, with projections for 5 nanometer chips to be on the order of $542.2 million. Nearly half of that cost has been, and continues to be, for software. So we know where to target some of those costs, and machine learning can help.

The question is whether the chip design software makers will embed AI and foster an explosion in chip designs that can truly be called Cambrian, and then make it up in volume like the rest of us have to do in our work. It will be interesting to see what happens here, and how research like that being done by Google will help.

Read the rest here:
Google Teaches AI To Play The Game Of Chip Design - The Next Platform

Machine learning finds a novel antibiotic able to kill superbugs – STAT – STAT

For decades, discovering novel antibiotics meant digging through the same patch of dirt. Biologists spent countless hours screening soil-dwelling microbes for properties known to kill harmful bacteria. But as superbugs resistant to existing antibiotics have spread widely, breakthroughs were becoming as rare as new places to dig.

Now, artificial intelligence is giving scientists a reason to dramatically expand their search into databases of molecules that look nothing like existing antibiotics.

A study published Thursday in the journal Cell describes how researchers at the Massachusetts Institute of Technology used machine learning to identify a molecule that appears capable of countering some of the worlds most formidable pathogens.


When tested in mice, the molecule, dubbed halicin, effectively treated the gastrointestinal bug Clostridium difficile (C. diff), a common killer of hospitalized patients, and another type of drug-resistant bacteria that often causes infections in the blood, urinary tract, and lungs.

The most surprising feature of the molecule? It is structurally distinct from existing antibiotics, the researchers said. It was found in a drug-repurposing database where it was initially identified as a possible treatment for diabetes, a feat that showcases the power of machine learning to support discovery efforts.

"Now we're finding leads among chemical structures that in the past we wouldn't have even hallucinated could be an antibiotic," said Nigam Shah, professor of biomedical informatics at Stanford University. "It greatly expands the search space into dimensions we never knew existed."

Shah, who was not involved in the research, said that the generation of a promising molecule is just the first step in a long and uncertain process of testing its safety and effectiveness in humans.

But the research demonstrates how machine learning, when paired with expert biologists, can speed up time-consuming preclinical work and give researchers greater confidence that the molecule they're examining is worth pursuing through more costly phases of drug discovery.

That is an especially pressing challenge in the development of new antibiotics, because a lack of economic incentives has caused pharmaceutical companies to pull back from the search for badly needed treatments. Each year in the U.S., drug-resistant bacteria and fungi cause more than 2.8 million infections and 35,000 deaths, with more than a third of fatalities attributable to C. diff, according to the Centers for Disease Control and Prevention.

The damage is far greater in countries with fewer health care resources.

Without the development of novel antibiotics, the World Health Organization estimates that the global death toll from drug-resistant infections will rise to 10 million a year by 2050, up from about 700,000 a year currently.

In addition to finding halicin, the researchers at MIT reported that their machine learning model identified eight other antibacterial compounds whose structures differ significantly from known antibiotics.

"I do think this platform will very directly reduce the cost involved in the discovery phase of antibiotic development," said James Collins, a co-author of the study who is a professor of bioengineering at MIT. "With these models, one can now get after novel chemistries in a shorter period of time involving less investment."

The machine learning platform was developed by Regina Barzilay, a professor of computer science and artificial intelligence who works with Collins as co-lead of the Jameel Clinic for Machine Learning in Health at MIT. It relies on a deep neural network, a type of AI architecture that uses multiple processing layers to analyze different aspects of data to deliver an output.

Prior types of machine learning systems required close supervision from humans to analyze molecular properties in drug discovery and produced spotty results. But Barzilay's model is part of a new generation of machine learning systems that can automatically learn chemical properties connected to a specific function, such as an ability to kill bacteria.

Barzilay worked with Collins and other biologists at MIT to train the system on more than 2,500 chemical structures, including those that looked nothing like antibiotics. The effect was to counteract bias that typically trips up most human scientists who are trained to look for molecular structures that look a lot like other antibiotics.

The neural net was able to isolate molecules that were predicted to have antibacterial qualities but didn't look like existing antibiotics, resulting in the identification of halicin.
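The study's model is a deep network trained directly on molecular structures; as a much simpler stand-in, the screening idea can be sketched with an off-the-shelf classifier over molecular fingerprints. Everything below (the synthetic fingerprints, labels, and sizes) is a placeholder, not the MIT team's data or code:

```python
# Sketch of model-guided antibiotic screening: train a classifier on molecules
# labeled for growth inhibition, then rank an unrelated chemical library (such
# as a drug-repurposing collection) by predicted antibacterial probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_train, n_library, n_bits = 2500, 6000, 2048

X = rng.integers(0, 2, size=(n_train, n_bits))   # stand-in fingerprint bits
y = rng.integers(0, 2, size=n_train)             # 1 = inhibited bacterial growth

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

# Score the screening library and surface top-ranked, structurally unfamiliar
# candidates for follow-up testing in the lab.
library = rng.integers(0, 2, size=(n_library, n_bits))
scores = model.predict_proba(library)[:, 1]
print("top candidate indices:", np.argsort(scores)[::-1][:10])
```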

"To use a crude analogy, it's like you show an AI all the different means of transportation, but you've not shown it an electric scooter," said Shah, the bioinformatics professor at Stanford. "And then it independently looks at an electric scooter and says, yeah, this could be useful for transportation."

In follow-up testing in the lab, Collins said, halicin displayed a remarkable ability to fight a wide range of multidrug-resistant pathogens. Tested against 36 such pathogens, it displayed potency against 35 of them. Collins said testing in mice showed excellent activity against C. diff, tuberculosis, and other bacteria.

The ability to identify molecules with specific antibiotic properties could aid in the development of drugs to treat so-called orphan conditions that affect a small percentage of the population but are not targeted by drug companies because of the lack of financial rewards.

Collins noted that commercializing halicin would take many months of study to evaluate its toxicity in humans, followed by multiple phases of clinical trials to establish safety and efficacy.

Read the original post:
Machine learning finds a novel antibiotic able to kill superbugs - STAT - STAT

The Connection Between Astrology And Your Tesla AutoDrive – Forbes

Preamble: Intermittently, I will be introducing columns that explore some seemingly outlandish concepts. The purpose is a bit of humor, but also to provoke some thought. Enjoy.

Zodiac signs inside of horoscope circle.

Historically, astrology has been a major component of the cultural life in many major civilizations. Significant events such as marriage, moving into a new home, or even travel were planned with astrology in mind. Even in modern times, astrological internet sites enjoy great success and the gurus of the art publish in major newspapers.

Of course, with the advent of scientific methods and formal education, astrology has rapidly lost favor in intellectual society. After all, what could possibly be the causal relationship between the movement of planets and whether someone will get a job promotion? As some have pointed out, even if there were a relationship, the configuration of the stars changes, so how could the predictions of the past possibly be valid?

Pure poppycock. Right? Perhaps. Let's take a deeper look.

Let's consider the central technology at the apex of current intellectual achievement: machine learning. Machine learning is the engine underlying important technologies such as autonomous vehicles, including Tesla's AutoDrive. What is machine learning at its core? One looks at massive amounts of data and trains a computational engine (ML engine). This ML engine is then used to make future predictions. Sometimes the training is done in a constrained manner where one looks at particular items, and other times the training is left unconstrained. Machine learning and the associated field of Artificial Intelligence (AI) is at the forefront of computer science research. Indeed, as we have discussed in past articles, AI is considered to be the next big economic mega-driver in a vast number of markets. After looking at machine learning, an interesting thought comes to mind.

Was astrology really just machine learning done by humans?

Could the thought leaders from great civilizations have looked at large amounts of human behavioral data and used something very reasonable (planetary movements) to train the astrology engine? After all, what really is the difference between machine learning and astrology?

Marketing Chart Comparing Astrology and Machine Learning

Both astrology and machine learning seem to have a concept of training. In astrology, the astrological signs are used as points of interest, and seemingly arbitrary connections are made to individual human circumstances. Even without the understanding of causality, the correlations can be somewhat true. In machine learning, data correlations are discovered, and there is no requirement of causation. This thought process is central to the machine learning paradigm, and gives it much of its power. In fact, as the chart above shows, there are uncomfortable levels of parallels between astrology and machine learning.

What does this mean? Should we take machine learning a little less seriously? Certainly, some caution is warranted, but it appears to be clear that machine learning can provide utility.

So, what about astrology? Perhaps we should take it a bit more seriously.

If you enjoyed this article, you may also enjoy A Better Transportation Option Than A Tesla.

Read this article:
The Connection Between Astrology And Your Tesla AutoDrive - Forbes

Fiddler Labs, SRI and Berkeley experts open up the black box of machine learning at TC Sessions: Robotics+AI – TechCrunch

As AI permeates the home, work, and public life, it's increasingly important to be able to understand why and how it makes its decisions. Explainable AI isn't just a matter of hitting a switch, though; experts from UC Berkeley, SRI, and Fiddler Labs will discuss how we should go about it on stage at TC Sessions: Robotics+AI on March 3.

What does explainability really mean? Do we need to start from scratch? How do we avoid exposing proprietary data and methods? Will there be a performance hit? Whose responsibility will it be, and who will ensure it is done properly?

On our panel addressing these questions and more will be two experts, one each from academia and private industry.

Trevor Darrell is a professor in Berkeley's Computer Science department who helps lead many of the university's AI-related labs and projects, especially those concerned with the next generation of smart transportation. His research group focuses on perception and human-AI interaction, and he previously led a computer vision group at MIT.

Krishna Gade has passed in his time through Facebook, Pinterest, Twitter and Microsoft, and has seen firsthand how AI is developed privately and how biases and flawed processes can lead to troubling results. He co-founded Fiddler as an effort to address problems of fairness and transparency by providing an explainable AI framework for enterprise.

Moderating and taking part in the discussion will be SRI International's Karen Myers, director of the research outfit's Artificial Intelligence Center and an AI developer herself, focused on collaboration, automation, and multi-agent systems.

Save $50 on tickets when you book today. Ticket prices go up at the door and are selling fast. We have two (yes, two) Startup Demo Packages left; book your package now and get your startup in front of 1,000+ of today's leading industry minds. Packages come with four tickets; book here.

View post:
Fiddler Labs, SRI and Berkeley experts open up the black box of machine learning at TC Sessions: Robotics+AI - TechCrunch

iMerit Leads off 2020 With New AI Innovation Initiatives and Funding – GlobeNewswire

LOS GATOS, Calif., Feb. 18, 2020 (GLOBE NEWSWIRE) -- via NetworkWire -- iMerit, a leading data annotation and enrichment company, is headed into 2020 with expansion plans, new innovation and new funding for its human-in-the-loop AI technology platform. The company has attracted $20 million in Series B funding led by CDC Group, the UK's leading publicly-owned impact investor. This investment, which also includes participation from existing investors, will be used to continue innovation for the company's proprietary AI platform that delivers 100% quality control and over 98% accuracy.

The funding will also be used to expand its advanced workforce from 3,000 employees across the US, India and Bhutan to 10,000 global employees by 2023. It is the latest sign that iMerit's high-quality datasets for artificial intelligence (AI) and machine learning are leading the industry and achieving the highest security certification. The company's data annotation and enrichment specialists work across nine secure centers globally. They provide solutions across multiple markets including automotive, healthcare, e-commerce, finance, media and entertainment, and government. iMerit has been growing at over 100% for 3 years, has been cash positive for the last 2 years, and is continuing to differentiate from the rest of the market.

"This investment validates our belief that the growth in artificial intelligence and machine learning is best serviced by a full-time, specialist workforce that continuously learns and grows with the technology," says iMerit CEO and founder Radha R. Basu, "and CDC Group shares this belief. This new funding will enable iMerit to continue to provide enterprise-scale and quality to a large client base in a fast-growing and evolving market." "Our investment in iMerit underlines our commitment to back companies that are creating skilled jobs, particularly for women, in countries where they are most needed," says Nick O'Donohoe, CEO, CDC Group. "Advances in AI technology are normally seen as a threat to jobs. iMerit has demonstrated that the opposite is true. The technology sector has an incredibly important role to play in supporting the UN's Sustainable Development Goals and in that regard iMerit is a true pioneer."

iMerit's contributions to global AI initiatives in 2020 will include:

"CDC's mission and iMerit's journey align very well," says DD Ganguly, President of iMerit USA. "Working with an organization, like CDC, that prioritizes an advanced, inclusive and gender-balanced workforce is perfect for iMerit. The collaboration will enable iMerit to continue to build a specialized, profitable, high growth business, with a customizable and agile technology platform, that will foster strong customer loyalty in a cutting-edge sector."

About iMerit: iMerit's Artificial Intelligence and Machine Learning platform powers advanced algorithms in Machine Learning, Computer Vision, Natural Language Understanding, e-Commerce, Augmented Reality and Data Analytics. It works on data for transformative technologies such as advancing cancer cell research, optimizing crop yields and training driverless cars to understand their environment. The company drives social and economic change by tapping into an under-resourced talent pool and creating digital inclusion. The team consists of 3,000 full-time staff, with more than 50% being women. The company's initial investors are Omidyar Network, Michael and Susan Dell Foundation, and Khosla Impact. For more information, visit: www.imerit.net.

About CDC Group: CDC Group is the world's first impact investor with over 70 years of experience of successfully supporting the sustainable, long-term growth of businesses in Africa and South Asia. CDC is a key advocate for the adoption of renewable energy in Africa and South Africa in the fight against climate change and a UK champion of the UN's Sustainable Development Goals, the global blueprint to achieve a better and more sustainable future for us all. The company has investments in over 1,200 businesses in emerging economies and a total portfolio value of 5.8bn. This year CDC will invest over $1.5bn in companies in Africa and Asia with a focus on fighting climate change, empowering women and creating new jobs and opportunities for millions of people. CDC is funded by the UK government and all proceeds from its investments are reinvested to improve the lives of millions of people in Africa and South Asia. CDC's expertise makes it the perfect partner for private investors looking to devote capital to making a measurable environmental and social impact in countries most in need of investment. CDC provides flexible capital in all its forms, including equity, debt, mezzanine and guarantees, to meet businesses' needs. It can invest across all sectors, but prioritizes those that help further development, such as infrastructure, financial institutions, manufacturing, and construction. Find out more at www.cdcgroup.com.

Media Contact: Andrea Heuer at Consort Partners, San Francisco, andreah@consortpartners.com

For further information please contact: Andrew Murray-Watson, 123 Victoria Street, London, SW1E 6DE, M. +44 (0) 7515 695232, amurray-watson@cdcgroup.com

Read the original post:
iMerit Leads off 2020 With New AI Innovation Initiatives and Funding - GlobeNewswire

How Machine Learning Will Reshape The Future Of Investment Management – Forbes India

The 2020 outlook for Asset Management re-affirms the impact of globalization and the outperformance of private equity. While the developed world's economy has sent mixed signals, all eyes are now on Asia, and especially India, to drive the next phase of growth. The goal is to provide Investment Solutions for its mix of young as well as senior population. Its diversity, cultural, economic, regional and regulatory, will pose the next challenge.

The application of Data Science & Machine Learning has delivered value for portfolio managers through quick and uniform decision-making. Strategic Beta Funds, which have consistently generated added value, rely heavily on the robustness of their portfolio creation models, which are excruciatingly data driven. Deploying Machine Learning algorithms helps assess the creditworthiness of firms and individuals for lending and borrowing. Data Science and Machine Learning solutions eliminate human bias and calculation errors while evaluating investments in an optimum period.

Investment management is justified as an industry only to the extent that it can demonstrate a capacity to add value through the design of dedicated investor-centric investment solutions, as opposed to one-size-fits-all manager-centric investment products. After several decades of relative inertia, the much needed move towards investment solutions has been greatly facilitated by a true industrial revolution taking place in investment management, triggered by profound paradigm changes with the emergence of novel approaches such as factor investing, liability-driven and goal-based investing, as well as sustainable investing. Data science is expected to play an increasing role in these transformations.

This trend poses a critical challenge to global academic institutions: educating a new breed of young professionals and equipping them with the right skills to address the situation, and who could seize the fast-developing new job opportunities in this field. Continuous education gives the opportunity to meet with new challenges of this ever-changing world, especially in the investment industry.

As recently emphasized by our colleague Vijay Vaidyanathan, CEO of Optimal Asset Management, former EDHEC Business School PhD student, and online course instructor at EDHEC Business School, our financial well-being is second only to our physical well-being, and one of the key challenges we face is to enhance financial expertise. To achieve this, we cannot limit ourselves to the relatively small subset of the population who can afford to invest the significant time and expense of attending a formal, full-time degree programme on a university campus. Therefore, we must find ways to elevate the quality of professional financial education to ensure that all asset managers and asset owners are fully equipped to make intelligent and well-informed investment decisions.

Data science applied to asset management, and education in the field, is expected to affect not only investment professionals but also individuals. On this topic, we would like to share insights from Professor John Mulvey of Princeton University, who is also one of EDHEC's online course instructors. John believes that machine learning applied to investment management is a real opportunity to assist individuals with their financial affairs in an integrated manner. Most people are faced with long-term critical decisions about saving, spending, and investing to achieve a wide variety of goals.

These decisions are often made without much professional guidance (except for wealthier clients), and without much technical training. Current personalized advisors are reasonable initial steps. Much more can be done in this area with modern data science and decision-making tools. Plus, younger people are more willing to trust fully automated computational systems. This domain is one of the most relevant and significant areas of development for future investment management.

By Nilesh Gaikwad, EDHEC Business School country manager in India, and Professor Lionel Martellini, EDHEC-Risk Institute Director.

Original post:
How Machine Learning Will Reshape The Future Of Investment Management - Forbes India

Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens – TechCrunch

The worlds of artificial intelligence and cybersecurity have become deeply entwined in recent years, as organizations work to keep up with, and ideally block, increasingly sophisticated malicious hackers. Today, a startup that's built a deep learning solution that it claims can both identify and stop even viruses that have yet to be identified has raised a large round of funding from some big strategic partners.

Deep Instinct, which uses deep learning both to learn how to identify and stop known viruses and other hacking techniques, as well as to be able to identify completely new approaches that have not been identified before, has raised $43 million in a Series C.

The funding is being led by Millennium New Horizons, with Unbound (a London-based investment firm founded by Shravin Mittal), LG and Nvidia all participating. The investment brings the total raised by Deep Instinct to $100 million, with HP and Samsung among its previous backers. The tech companies are all strategics, in that (as in the case of HP) they bundle and resell Deep Instinct's solutions, or use them directly in their own services.

The Israeli-based company is not disclosing valuation, but notably, it is already profitable.

Targeting as-yet unknown viruses is becoming a more important priority as cybercrime grows. CEO and founder Guy Caspi notes that currently more than 350,000 new pieces of machine-generated malware are created every day with increasingly sophisticated evasion techniques, such as zero-days and APTs (Advanced Persistent Threats). "Nearly two-thirds of enterprises have been compromised in the past year by new and unknown malware attacks originating at endpoints, representing a 20% increase from the previous year," he added. And zero-day attacks are now four times more likely to compromise organizations. Most cyber solutions on the market can't protect against these new types of attacks and have therefore shifted to a detect-response approach, he said, which by design means that they assume a breach will happen.

While there is already a large profusion of AI-based cybersecurity tools on the market today, Caspi notes that Deep Instinct takes a critically different approach because of its use of deep neural network algorithms, which essentially are set up to mimic how a human brain thinks.

"Deep Instinct is the first and currently the only company to apply end-to-end deep learning to cybersecurity," he said in an interview. In his view, this provides a more advanced form of threat protection than the common traditional machine learning solutions available in the market, which rely on feature extractions determined by humans, which means they are limited by the knowledge and experience of the security expert and can only analyze a very small part of the available data (less than 2%, he says). Therefore, traditional machine learning-based solutions and other forms of AI have low detection rates of new, unseen malware and generate high false-positive rates. There has been a growing body of research that supports this idea, although we've not seen many deep learning cybersecurity solutions emerge as a result (not yet, anyway).

He adds that deep learning is the only AI-based autonomous system that can learn from any raw data, as it's not limited by an expert's technological knowledge. In other words, it's not based just on what a human inputs into the algorithm, but is based on huge swathes of big data, sourced from servers, mobile devices and other endpoints, that are input in and automatically read by the system.
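The contrast Caspi is drawing can be sketched in a few lines: instead of a security expert choosing features, an end-to-end model ingests raw bytes and learns its own representations. This toy Keras network is illustrative only; it is not Deep Instinct's architecture, and the random byte sequences stand in for labeled benign and malicious files:

```python
# End-to-end learning on raw bytes: embed each byte value, convolve over the
# sequence, and predict P(malicious) with no hand-crafted features.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

MAX_BYTES = 4096   # truncate or pad each file to a fixed length for the sketch

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_BYTES,), dtype="int32"),
    layers.Embedding(input_dim=256, output_dim=8),
    layers.Conv1D(64, kernel_size=16, strides=8, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data standing in for labeled files.
x = np.random.randint(0, 256, size=(512, MAX_BYTES))
y = np.random.randint(0, 2, size=(512,))
model.fit(x, y, epochs=1, batch_size=64)
```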

This also means that the system can be used in turn across a number of different end points. Many machine learning-based cybersecurity solutions, he notes, are geared at Windows environments. That is somewhat logical, given that Windows and Android account for the vast majority of attacks these days, but cross-OS attacks are now on the rise.

While Deep Instinct specializes in preventing first-seen, unknown cyberattacks like APTs and zero-day attacks, Caspi notes that in the past year there has been a rise in both the amount and the impact of cyberattacks covering other areas. In 2019, Deep Instinct saw an increase in spyware and ransomware, on top of an increase in the level of sophistication of the attacks being used, specifically with more file-less attacks using scripts and PowerShell, "living off the land" attacks and the use of weaponized documents like Microsoft Office files and PDFs. These sit alongside big malware attacks like Emotet, Trickbot, New ServeHelper and Legion Loader.

Today the company sells services both directly and via partners (like HP), and it's mainly focused on enterprise users. But since there is very little in the way of technical implementation ("Our solution is mostly autonomous and all processes are automated [and the] deep learning brain is handling most of the security," Caspi said), the longer-term plan is to build a version of the product that consumers could adopt, too.

With a large part of antivirus software often proving futile in protecting users against attacks these days, that could come as a welcome addition to the market, despite how crowded it already is.

"There is no shortage of cybersecurity software providers, yet no company aside from Deep Instinct has figured out how to apply deep learning to automate malware analysis," said Ray Cheng, partner at Millennium New Horizons, in a statement. "What excites us most about Deep Instinct is its proven ability to use its proprietary neural network to effectively detect viruses and malware no other software can catch. That genuine protection in an age of escalating threats, without the need of exorbitantly expensive or complicated systems, is a paradigm change."

Visit link:
Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens - TechCrunch

Machine Learning Market 2020 Booming by Size, Revenue, Trend and Top Companies 2026 – Instant Tech News

New Jersey, United States: The report titled "Machine Learning Market Size and Forecast 2026" from Verified Market Research offers comprehensive analysis of a range of subjects like competition, segmentation, regional expansion, and market dynamics. The report sheds light on future trends, key opportunities, top regions, leading segments, the competitive landscape, and several other aspects of the Machine Learning market, and gives access to crucial market information. Market players can use the report to peer into the future of the global Machine Learning market and make important changes to their operating style and marketing tactics to achieve sustained growth.

Global Machine Learning Market was valued at USD 2.03 Billion in 2018 and is projected to reach USD 37.43 Billion by 2026, growing at a CAGR of 43.9% from 2019 to 2026.
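As a quick sanity check on those figures, assuming the growth rate compounds annually over the eight years from the 2018 base to 2026, the arithmetic roughly holds:

```python
# Rough check of the quoted market projection (annual compounding assumed).
base_2018 = 2.03            # USD billion
cagr = 0.439
years = 2026 - 2018
projected_2026 = base_2018 * (1 + cagr) ** years
print(round(projected_2026, 2))   # about 37.3, close to the quoted 37.43
```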

Get | Download Sample Copy @https://www.verifiedmarketresearch.com/download-sample/?rid=6487&utm_source=ITN&utm_medium=002

Top 10 Companies in the Global Machine Learning Market Research Report:

Global Machine Learning Market: Competitive Landscape

The competitive landscape of a market explains the strategies incorporated by key players of the market. Key developments and shifts in management in recent years by players have been explained through company profiling. This helps readers to understand the trends that will accelerate the growth of the market. It also includes investment strategies, marketing strategies, and product development plans adopted by major players of the market. The market forecast will help readers make better investments.

Global Machine Learning Market: Drivers and Restraints

This section of the report discusses the various drivers and restraints that have shaped the global market. The detailed study of numerous drivers of the market enables readers to get a clear perspective of the market, which includes the market environment, government policies, product innovations, breakthroughs, and market risks.

The research report also points out the myriad opportunities, challenges, and market barriers present in the Global Machine Learning Market. The comprehensive nature of the information will help the reader determine and plan strategies to benefit from them. Restraints, challenges, and market barriers also help the reader to understand how the company can prevent itself from facing a downfall.

Global Machine Learning Market: Segment Analysis

This section of the report includes segmentation such as application, product type, and end user. These segmentations aid in determining the parts of the market that will progress more than others. The segmentation analysis provides information about the key elements that are thriving in specific segments better than others. It helps readers to understand strategies to make sound investments. The Global Machine Learning Market is segmented on the basis of product type, applications, and end users.

Global Machine Learning Market: Regional Analysis

This part of the report includes detailed information on the market in different regions. Each region offers a different scope to the market, as each region has different government policies and other factors. The regions included in the report are North America, South America, Europe, Asia Pacific, and the Middle East. Information about the different regions helps the reader to understand the global market better.

Ask for Discount @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=6487&utm_source=ITN&utm_medium=002

Table of Content

1 Introduction of Machine Learning Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology of Verified Market Research

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Machine Learning Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Force Model
4.4 Value Chain Analysis

5 Machine Learning Market, By Deployment Model

5.1 Overview

6 Machine Learning Market, By Solution

6.1 Overview

7 Machine Learning Market, By Vertical

7.1 Overview

8 Machine Learning Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Machine Learning Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Request Customization of Report Complete Report is Available @ https://www.verifiedmarketresearch.com/product/global-machine-learning-market-size-and-forecast-to-2026/?utm_source=ITN&utm_medium=002

Highlights of Report

About Us:

Verified Market Research partners with clients to provide insight into strategic and growth analytics: data that helps achieve business goals and targets. Our core values include trust, integrity, and authenticity for our clients.

Analysts with high expertise in data gathering and governance utilize industry techniques to collate and examine data at all stages. Our analysts are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research reports.

Contact Us:

Mr. Edwyne Fernandes Call: +1 (650) 781 4080 Email: [emailprotected]


Read more from the original source:
Machine Learning Market 2020 Booming by Size, Revenue, Trend and Top Companies 2026 - Instant Tech News

Top Machine Learning Services in the Cloud – Datamation

Machine Learning services in the cloud are a critical area of the modern computing landscape, providing a way for organizations to better analyze data and derive new insights. Accessing these services via the cloud tends to be efficient in terms of cost and staff hours.

Machine Learning (often abbreviated as ML) is a subset of Artificial Intelligence (AI) and attempts to 'learn' from data sets in several different ways, including both supervised and unsupervised learning. There are many different technologies that can be used for machine learning, with a variety of commercial tools as well as open source frameworks.

While organizations can choose to deploy machine learning frameworks on premises, it is typically a complex and resource-intensive exercise. Machine Learning benefits from specialized hardware, including inference chips and optimized GPUs. Machine Learning frameworks can also often be challenging to deploy and configure properly. This complexity has led to the rise of Machine Learning services in the cloud, which provide the right hardware and optimally configured software that enable organizations to easily get started with Machine Learning.

There are several key features that are part of most machine learning cloud services.

AutoML - The automated Machine Learning feature automatically helps to build the right model.
Machine Learning Studio - The studio concept is all about providing a developer environment where machine learning models and data modelling scenarios can be built.
Open source framework support - The ability to support an existing framework such as TensorFlow, MXNet and Caffe is important as it helps to enable model portability.
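On the portability point, the practical payoff is that a model trained locally in an open source framework can be exported in a standard format and served by whichever cloud provider supports it. A minimal sketch with TensorFlow (the tiny model and paths here are illustrative):

```python
# Train (or just define) a model locally, export it in SavedModel format, and
# reload it anywhere TensorFlow runs; many managed ML services accept this
# artifact for hosting.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

tf.saved_model.save(model, "exported_model/1")      # portable artifact
reloaded = tf.saved_model.load("exported_model/1")  # works wherever TF runs
```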

When evaluating the different options for machine learning services in the cloud, consider the following criteria:

In this Datamation top companies list, we spotlight the vendors that offer the top machine learning services in the cloud.

Value proposition for potential buyers: Alibaba is a great option for users that have machine learning needs where data sets reside around the world and especially in Asia, where Alibaba is a leading cloud service.

Value proposition for potential buyers: Amazon Web Services has the broadest array of machine learning services in the cloud today, leading with its SageMaker portfolio that includes capabilities for building, training and deploying models in the cloud.

Value proposition for potential buyers: Google's set of Machine Learning services are also expansive and growing, with both generic as well as purpose built services for specific use-cases.

Value proposition for potential buyers: IBM Watson Machine Learning enables users to run models on any cloud, or just on the IBM Cloud.

Value proposition for potential buyers: For organizations that have already bought into the Microsoft Azure cloud, Azure Machine Learning is a good fit, providing a cloud environment to train, deploy and manage machine learning models.

Value proposition for potential buyers: Oracle Machine Learning is a useful tool for organizations already using Oracle Cloud applications to help build data mining notebooks.

Value proposition for potential buyers: Salesforce Einstein is a purpose built machine learning platform that is tightly integrated with the Salesforce platform.

Read the original here:
Top Machine Learning Services in the Cloud - Datamation

In Coronavirus Response, AI is Becoming a Useful Tool in a Global Outbreak – Machine Learning Times – machine learning & data science news – The…

By: Casey Ross, National Technology Correspondent, StatNews.com

Surveillance data collected by healthmap.org show confirmed cases of the new coronavirus in China.

Artificial intelligence is not going to stop the new coronavirus or replace the role of expert epidemiologists. But for the first time in a global outbreak, it is becoming a useful tool in efforts to monitor and respond to the crisis, according to health data specialists.

In prior outbreaks, AI offered limited value, because of a shortage of data needed to provide updates quickly. But in recent days, millions of posts about coronavirus on social media and news sites are allowing algorithms to generate near-real-time information for public health officials tracking its spread.

"The field has evolved dramatically," said John Brownstein, a computational epidemiologist at Boston Children's Hospital who operates a public health surveillance site called healthmap.org that uses AI to analyze data from government reports, social media, news sites, and other sources.

"During SARS, there was not a huge amount of information coming out of China," he said, referring to a 2003 outbreak of an earlier coronavirus that emerged from China, infecting more than 8,000 people and killing nearly 800. "Now, we're constantly mining news and social media."

Brownstein stressed that his AI is not meant to replace the information-gathering work of public health leaders, but to supplement their efforts by compiling and filtering information to help them make decisions in rapidly changing situations.

"We use machine learning to scrape all the information, classify it, tag it, and filter it, and then that information gets pushed to our colleagues at WHO that are looking at this information all day and making assessments," Brownstein said. "There is still the challenge of parsing whether some of that information is meaningful or not."
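A minimal sketch of the scrape-classify-tag-filter idea Brownstein describes, using an off-the-shelf text classifier; the handful of labeled posts below are invented placeholders, and a real system would train on far more data:

```python
# Flag which scraped posts look outbreak-relevant so analysts review only the
# promising ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Hospital reports three new confirmed coronavirus cases in the province",
    "Officials confirm cluster of pneumonia cases linked to the market",
    "My favorite noodle shop finally reopened this weekend",
    "Great discounts on face cream this week",
]
train_labels = [1, 1, 0, 0]   # 1 = outbreak-relevant, 0 = unrelated chatter

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_posts, train_labels)

new_posts = ["Two more suspected cases reported at the regional clinic"]
print(classifier.predict_proba(new_posts)[:, 1])   # relevance score for review
```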

These AI surveillance tools have been available in public health for more than a decade, but the recent advances in machine learning, combined with greater data availability, are making them much more powerful. They are also enabling uses that stretch beyond baseline surveillance, to help officials more accurately predict how far and how fast outbreaks will spread, and which types of people are most likely to be affected.

"Machine learning is very good at identifying patterns in the data, such as risk factors that might identify zip codes or cohorts of people that are connected to the virus," said Don Woodlock, a vice president at InterSystems, a global vendor of electronic health records that is helping providers in China analyze data on coronavirus patients.

To continue reading this article click here.

Read the rest here:
In Coronavirus Response, AI is Becoming a Useful Tool in a Global Outbreak - Machine Learning Times - machine learning & data science news - The...

Optimising Utilisation Forecasting with AI and Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

What IT team wouldn't like to have a crystal ball that could predict the IT future, letting them fix application and infrastructure performance problems before they arise? Well, the current shortage of crystal balls makes the union of artificial intelligence (AI), machine learning (ML), and utilisation forecasting the next best thing for anticipating and avoiding issues that threaten the overall health and performance of all IT infrastructure components. The significance of AI has not been lost on organisations in the United Kingdom, with 43 per cent of them believing that AI will play a big role in their operations.

Utilisation forecasting is a technique that applies machine learning algorithms to produce daily usage forecasts for all utilisation volumes across CPUs, physical and virtual servers, disks, storage, bandwidth, and other network elements, enabling networking teams to manage resources proactively. This technique helps IT engineers and network admins prevent downtime caused by over-utilisation.

The AI/ML-driven forecasting solution produces intelligent and reliable reports by taking advantage of the current availability of ample historic records and high-performance computing algorithms. Without AI/ML, utilisation forecasting relies on reactive monitoring. You set predefined thresholds for given metrics such as uptime, resource utilisation, network bandwidth, and hardware metrics like fan speed and device temperature. When a threshold is exceeded, an alert is issued. However, that reactive approach will not detect the anomalies that happen below the threshold and create other, indirect issues. Moreover, it will not tell you when you will need to upgrade your infrastructure based on current trends.

To forecast utilisation proactively, you need accurate algorithms that can analyze usage patterns and detect anomalies (without false positives) in daily usage trends. That's how you predict usage in the future. Let us take a look at a simple use case.

With proactive, AI/ML-driven utilisation forecasting, you can find a minor increase in your office bandwidth usage during the World Series, the FIFA World Cup, and other sporting events. That anomalous usage can be detected even if you have a huge amount of unused internet bandwidth. Similarly, proactive utilisation forecasting lets you know when to upgrade your infrastructure based on new recruitment and attrition rates.

A closer look at the predictive technologies reveals the fundamental difference between proactive and reactive forecasting. Without AI and ML, utilisation forecasting uses linear regression models to extrapolate and provide prediction based on existing data. This method involves no consideration of newly allocated memory or anomalies in utilisation patterns. Also, pattern recognition is a foreign concept. Although useful, linear regression models do not give IT admins complete visibility.
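
A minimal sketch of that linear-regression approach, assuming NumPy and synthetic daily utilisation figures, shows how a straight-line fit is simply extrapolated forward with no notion of seasonality, newly allocated capacity, or anomalies.

```python
# Minimal sketch of linear-regression forecasting: fit a straight line to
# historical daily utilisation and extrapolate forward. Data is synthetic.
import numpy as np

history = np.array([52, 54, 53, 57, 58, 60, 61, 63, 64, 66], dtype=float)  # % utilisation per day
days = np.arange(len(history))

slope, intercept = np.polyfit(days, history, deg=1)   # ordinary least squares, degree-1 fit
future_days = np.arange(len(history), len(history) + 7)
forecast = slope * future_days + intercept

print(np.round(forecast, 1))  # straight-line projection; ignores seasonality and pattern changes
```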

AI/ML-driven utilisation forecasting, on the other hand, uses the Seasonal and Trend decomposition using Loess (STL) method. STL lets you study the propagation and degradation of memory as well as analyze pattern matching whereby periodic changes in the metric configuration will be automatically adjusted. Bottom line, STL dramatically improves accuracy thanks to those dynamic, automated adjustments. And if any new memory is allocated, or if memory size is increased or decreased for the device, the prediction will change accordingly. This option was not possible with linear regression.
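
For comparison, here is a minimal sketch of STL decomposition using the statsmodels implementation on synthetic hourly data; this is an assumed, commonly available library, not necessarily the engine behind any specific forecasting product.

```python
# Minimal sketch of STL (Seasonal and Trend decomposition using Loess) with statsmodels.
# The hourly series below is synthetic and purely illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

hours = pd.date_range("2020-01-01", periods=24 * 28, freq="H")
values = (
    50
    + 10 * np.sin(2 * np.pi * np.arange(len(hours)) / 24)  # daily seasonality
    + 0.01 * np.arange(len(hours))                          # slow growth in usage
    + np.random.normal(0, 1, len(hours))                    # noise
)
series = pd.Series(values, index=hours)

result = STL(series, period=24).fit()
trend, seasonal, resid = result.trend, result.seasonal, result.resid

# The trend component is what you would extrapolate for capacity planning;
# large residuals are candidates for anomaly alerts.
print(trend.tail(3))
```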

Beyond forecasting, ML can be used to improve anomaly detection. Here, adaptive thresholds for different metrics are established using ML, and analysis of historic data reveals any anomalies and triggers appropriate alerts. Other application and infrastructure monitoring functions will also be improved when enhanced with AI and ML technologies. Sometime in the not-too-distant future, AI/ML-driven forecasting and monitoring will rival the predictive powers of the fabled crystal ball.
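
One simple way to illustrate adaptive thresholds is a rolling baseline: flag any observation that strays more than a few standard deviations from the recent mean. The window size and multiplier below are assumptions for the sketch, not tuned values.

```python
# Minimal sketch of adaptive thresholds: flag points that deviate from a rolling
# baseline by more than k standard deviations. Window and k are assumed values.
import numpy as np
import pandas as pd

def adaptive_anomalies(series, window=24, k=3.0):
    """Return a boolean Series marking observations outside mean +/- k*std of a rolling window."""
    baseline = series.rolling(window, min_periods=window).mean()
    spread = series.rolling(window, min_periods=window).std()
    upper = baseline + k * spread
    lower = baseline - k * spread
    return (series > upper) | (series < lower)

usage = pd.Series(np.random.normal(60, 2, 200))  # synthetic utilisation hovering around 60%
usage.iloc[150] = 95                              # inject a spike well below any fixed 100% red line
print(usage[adaptive_anomalies(usage)])           # the spike is flagged relative to its own baseline
```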

by Rebecca D'Souza, Product Consultant, ManageEngine

The rest is here:
Optimising Utilisation Forecasting with AI and Machine Learning - Gigabit Magazine - Technology News, Magazine and Website

Raley's Drive To Be Different Gets an Assist From Machine Learning – Winsight Grocery Business

Raley's has brought artificial intelligence to pricing not necessarily to go toe-to-toe with competitors, but to differentiate from them, President and CEO Keith Knopf said.

Speaking in a presentation at the National Retail Federation show in New York, Knopf described how the West Sacramento, Calif.-based food retailer is using machine learning algorithms from partner Eversight to help manage its price perception amid larger, and often cheaper, competitors, while optimizing revenue by driving unit share growth and margin dollars. That benefit is going toward what he described as a differentiated positioning behind health and wellness.

"This is not just about pricing for the sake of pricing. This is pricing within a business strategy to differentiate, and afford the investment in price in a way that is both financially sustainable and also relevant to the customer," Knopf said.

Raley's has been working with Eversight for about four years, and has since invested in the Palo Alto, Calif.-based provider of AI-led pricing and promotion management. Knopf described using insights and recommendations derived from Eversight's data crunching to support its merchants, helping to strategically manage the Rubik's Cube of pricing and promoting 40,000 items, each with varying elasticity, in stores with differing customer bases, price zones and competitive characteristics.

Raley's, Knopf said, is high-priced relative to its competitors, a reflection of its size and its ambitions. "We're a $3 billion to $4 billion retailer competing against companies much larger than us, with much greater purchasing power, and so for us, [AI pricing] is about optimization within our brand framework. We aspire to be a differentiated operator with a differentiated customer experience and a differentiated product assortment, which is guided more toward health and wellness. We have a strong position in fresh that is evolving through innovation. But we also understand that we are a high-priced, high-cost retailer."

David Moran, Eversight's co-founder, was careful to put his company's influence in perspective. Algorithms don't replace merchants or set a strategy, he said, but can support them by bringing new computing power that exceeds the work a merchant could do alone and has allowed for experimentation with pricing strategies across categories. In an example he shared, a mix of price changes, some going up, others down, helped to drive overall unit growth and profits in the olive oil category.

"The merchants still own the art: They are still the connection between the brand positioning, the price value perception, and they also own the execution," Knopf said. "This technology gets us down that road much faster and with greater confidence."

Knopf said he believes that pricing science, in combination with customer relationship management, will eventually trigger big changes in the nature of promotional spending by vendors, with a shift toward so-called "below the line" programs, such as everyday pricing and personalized pricing, and fewer "above the line" mass promotions, which he believes are ultimately ineffective at driving long-term growth.

"Every time we promote above the line, and everybody sees what everybody else does, no more units are sold in totality in the marketplace; it's just a matter of who's going to sell this week at what price," Knopf said. "I believe that it's in the manufacturer's best interest, and the retailer's best interest, to make pricing personalized and relevant, and the dollars that are available today will shift from promotions into a more personalized, one-on-one, curated relationship that a vendor, the retailer and the customer will share."

Excerpt from:
Raleys Drive To Be Different Gets an Assist From Machine Learning - Winsight Grocery Business

Five Reasons to Go to Machine Learning Week 2020 – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

When deciding on a machine learning conference, why go to Machine Learning Week 2020? This five-conference event, May 31 to June 4, 2020 at Caesars Palace, Las Vegas, delivers brand-name, cross-industry, vendor-neutral case studies purely on machine learning's commercial deployment, and the hottest topics and techniques. In this video, Predictive Analytics World Founder Eric Siegel spills on the details and lists five reasons this is the most valuable machine learning event to attend this year.

Note: This article is based on the transcript of a special episode of The Dr. Data Show; click here to view.

In this article, I give five reasons that Machine Learning Week, May 31 to June 4, 2020 at Caesars Palace, Las Vegas, is the most valuable machine learning event to attend this year. MLW is the largest annual five-conference blow-out, part of the Predictive Analytics World conference series, of which I am the founder.

First, some background info. Your business needs machine learning to thrive and even just survive. You need it to compete, grow, improve, and optimize. Your team needs it, your boss demands it, and your career loves machine learning.

And so we bring you Predictive Analytics World, the leading cross-vendor conference series covering the commercial deployment of machine learning. By design, PAW is where to meet the who's who and keep up on the latest techniques.

This June in Vegas, Machine Learning Week brings together five different industry-focused events: PAW Business, PAW Financial, PAW Industry 4.0, PAW Healthcare, and Deep Learning World. This is five simultaneous two-day conferences all happening alongside one another at Caesars Palace in Vegas. Plus, a diverse range of full-day training workshops, which take place in the days just before and after.

Machine Learning Week delivers brand-name, cross-industry, vendor-neutral case studies purely on machine learning deployment, and the hottest topics and techniques.

This mega event covers all the bases for both senior-level expert practitioners as well as newcomers, project leaders, and executives. Depending on the topic, sessions and workshops are either demarcated as the "Expert/practitioner" level, or for "All audiences." So, you can bring your team, your supervisor, and even the line-of-business managers you work with on model deployment. About 60-70% of attendees are on the hands-on practitioner side, but, as you know, successful machine learning deployment requires deep collaboration between both sides of the equation.

PAW and Deep Learning World also take place in Germany, and Data Driven Government takes place in Washington, DC, but this article is about Machine Learning Week, so see predictiveanalyticsworld.com for details about the others.

Here are the five reasons to go.

Five Reasons to Go to Machine Learning Week June 2020 in Vegas

1) Brand-name case studies

Number one, you'll access brand-name case studies. At PAW, you'll hear directly from the horse's mouth precisely how Fortune 500 analytics competitors and other companies of interest deploy machine learning and the kind of business results they achieve. More than most events, we pack the agenda as densely as possible with named case studies. Each day features a ton of leading in-house expert practitioners who get things done in the trenches at these enterprises and come to PAW to spill on the inside scoop. In addition, a smaller portion of the program features rock star consultants, who often present on work they've done for one of their notable clients.

2) Cross-industry coverage

Number two, you'll benefit from cross-industry coverage. As I mentioned, Machine Learning Week features these five industry-focused events. This amounts to a total of eight parallel tracks of sessions.

Bringing these all together at once fosters unique cross-industry sharing, and achieves a certain critical mass in expertise about methods that apply across industries. If your work spans industries, Machine Learning Week is one-stop shopping. Not to mention that convening the key industry figures across sectors greatly expands the networking potential.

The first of these, PAW Business, itself covers a great expanse of business application areas across many industries. Marketing and sales applications, of course. And many other applications in retail, telecommunications, e-commerce, non-profits, etc., etc.

The track topics of PAW Business 2020

PAW Business is a three-track event with track topics that include: analytics operationalization & management (i.e., the business side); core machine learning methods and advanced algorithms (i.e., the technical side); innovative business applications covered as case studies; and a lot more.

PAW Financial covers machine learning applications in banking (including credit scoring), insurance applications, fraud detection, algorithmic trading, innovative approaches to risk management, and more.

PAW Industry 4.0 and PAW Healthcare are also entire universes unto themselves. You can check out the details about all four of these PAWs at predictiveanalyticsworld.com.

And the newer sister event Deep Learning World has its own website, deeplearningworld.com. Deep learning is the hottest advanced form of machine learning with astonishing, proven value for large-signal input problems, such as image classification for self-driving cars, medical image processing, and speech recognition. These are fairly distinct domains, so Deep Learning World does well to complement the four Predictive Analytics World events.

3) Pure-play machine learning content

Number three, you'll get pure-play machine learning content. PAW's agenda is not watered down with much coverage of other kinds of big data work. Instead, it's ruthlessly focused specifically on the commercial application of machine learning, also known as predictive analytics. The conference doesn't cover data science as a whole, which is a much broader and less well-defined area that, for example, can include standard business intelligence reporting and such. And we don't cover AI per se. Artificial intelligence is at best a synonym for machine learning that tends to over-hype, or at worst an outright lie that promises mythological capabilities.

4) Hot new machine learning practices

Number four, you'll learn the latest and greatest, the hottest new machine learning practices. Now, we launched PAW over a decade ago, so far delivering value to over 14,000 attendees across more than 60 events. To this day, PAW remains the leading commercial event because we keep up with the most valuable trends.

For example, Deep Learning World, which launched more recently, in 2018, covers deep learning's commercial deployment across industry sectors. This relatively new form of neural networks has blossomed, both in buzz and in actual value. As I mentioned, it scales machine learning to process, for example, complex image data.

And what had been PAW Manufacturing for some years has now changed its name to PAW Industry 4.0. As such, the event now covers a broader area of inter-related work applying machine learning for smart manufacturing, the Internet of Things (IoT), predictive maintenance, logistics, fault prediction, and more.

In general, machine learning continues to widen its adoption and apply in new, innovative ways across sectors: in marketing, financial risk, fraud detection, workforce optimization, and healthcare. PAW keeps up with these trends and covers today's best practices and the latest advanced modeling methods.

5) Vendor-neutral content

And finally, number five, you'll access vendor-neutral content. PAW isn't run by an analytics vendor and the speakers aren't trying to sell you on anything but good ideas. PAW speakers understand that vendor-neutral means those in attendance must be able to implement the practices covered and benefit from the insights delivered without buying any particular analytics product.

During the event, some vendors are permitted to deliver short presentations during a limited minority of demarcated sponsored sessions. These sessions often are also substantive and of great interest. In fact, you can access all the sponsors and tap into their expertise at will in the exhibit hall, where they're set up for just that purpose.

By the way, if you're an analytics vendor yourself, check out PAW's various sponsorship opportunities. Our events bring together a great crowd of practitioners and decision makers.

Summary Five Reasons to Go

1) Brand-name case studies

2) Cross-industry coverage

3) Pure-play machine learning content

4) Hot new machine learning practices

5) Vendor-neutral content

and those are the reasons to come to Machine Learning Week: brand-name, cross-industry, vendor-neutral case studies purely on machine learning's commercial deployment, and the hottest topics and techniques.

Machine Learning Week not only delivers unique knowledge-gaining opportunities, it's also a universal meeting place: the industry's premier networking event. It brings together the who's who of machine learning and predictive analytics, the greatest diversity of expert speakers, perspectives, experience, viewpoints, and case studies.

This all turns the normal conference stuff into a much richer experience, including the keynotes, expert panels, and workshop days, as well as opportunities to network and talk shop during the lunches, coffee breaks, and reception.

I encourage you to check out the detailed agenda to see all the speakers, case studies, and advanced methods covered. Each of the five conferences has its own agenda webpage, or you can also view the entire five-conference, eight-track mega-agenda at once. This view pertains if you're considering registering for the full Machine Learning Week pass, or if you'll be attending along with other team members in order to divide and conquer.

Visit our website to see all these details, register, and sign up for informative event updates by email.

Or to learn more about the field in general, check out our Predictive Analytics Guide, our publication The Machine Learning Times, which includes revealing PAW speaker interviews, and episodes of this show, The Dr. Data Show, which, by the way, is about the field of machine learning in general, rather than about our PAW events.

This article is based on a transcript from The Dr. Data Show.

CLICK HERE TO VIEW THE FULL EPISODE

About the Dr. Data Show. This new web series breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics. Click here to view more episodes and to sign up for future episodes of The Dr. Data Show.

About the Author

Eric Siegel, Ph.D., founder of the Predictive Analytics World and Deep Learning World conference series and executive editor of The Machine Learning Times, makes the how and why of predictive analytics (aka machine learning) understandable and captivating. He is the author of the award-winning book Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, the host of The Dr. Data Show web series, a former Columbia University professor, and a renowned speaker, educator, and leader in the field. Follow him at @predictanalytic.

Read more:
Five Reasons to Go to Machine Learning Week 2020 - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

Don’t want a robot stealing your job? Take a course on AI and machine learning. – Mashable

Just to let you know, if you buy something featured here, Mashable might earn an affiliate commission. There are some 288 lessons included in this online training course.

By StackCommerce, Mashable Shopping, 2020-01-16 19:44:17 UTC

TL;DR: Jump into the world of AI with the Essential AI and Machine Learning Certification Training Bundle for $39.99, a 93% savings.

From facial recognition to self-driving vehicles, machine learning is taking over modern life as we know it. It may not be the flying cars and world-dominating robots we envisioned 2020 would hold, but it's still pretty futuristic and frightening. The good news is if you're one of the pros making these smart systems and machines, you're in good shape. And you can get your foot in the door by learning the basics with this Essential AI and Machine Learning Certification Training Bundle.

This training bundle provides four comprehensive courses introducing you to the world of artificial intelligence and machine learning. And right now, you can get the entire thing for just $39.99.

These courses cover natural language processing, computer vision, data visualization, and artificial intelligence basics, and will ultimately teach you to build machines that learn as they're fed human input. Through hands-on case studies, practice modules, and real-time projects, you'll delve into the world of intelligent systems and machines and get ahead of the robot revolution.

Here's what you can expect from each course:

Access 72 lectures and six hours of content exploring topics like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and other deep architectures using TensorFlow. Ultimately, you'll build a foundation in both artificial intelligence, which is the concept in which machines develop the ability to simulate natural intelligence to carry out tasks, and machine learning, which is an application of AI aiming to learn from data and build on it to maximize performance.

Through seven hours of content, you'll learn how to arrange critical data in a visual format (think graphs, charts, and pictograms). You'll also learn to deploy data visualization through Python using Matplotlib, a library that helps in viewing the data. Finally, you'll tackle actual geographical plotting using the Matplotlib extension called Basemap.
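
For a flavour of the kind of chart-building such a course covers, here is a minimal Matplotlib sketch with made-up data; Basemap, mentioned above, is a separate extension not shown here.

```python
# Minimal Matplotlib sketch: a simple bar chart from made-up data.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
signups = [120, 135, 160, 158, 190, 240]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(months, signups, color="steelblue")
ax.set_title("Monthly course signups (illustrative data)")
ax.set_xlabel("Month")
ax.set_ylabel("Signups")
plt.tight_layout()
plt.show()
```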

In just 5.5 hours, this course gives you a more in-depth look at the role of CNNs, the knowledge of transfer learning, object localization, object detection, and using TensorFlow. You'll also learn the challenges of working with real-world data and how to tackle them head-on.

Natural language processing (NLP) is a field of AI which allows machines to interpret and comprehend human language. Through 5.5 hours of content, you'll understand the processes involved in this field and learn how to build artificial intelligence for automation. The course itself provides an innovative methodology and sample exercises to help you dive deep into NLP.

Originally $656, the Essential AI and Machine Learning Bundle is 93% off right now; you can get a year's worth of access for just $39.99.

Prices subject to change.

Read the original:
Don't want a robot stealing your job? Take a course on AI and machine learning. - Mashable

Going Beyond Machine Learning To Machine Reasoning – Forbes

From Machine Learning to Machine Reasoning

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. However, the history and evolution of AI is more than just a technology story. The story of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technology roadblocks. There seems to be a continuous pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realization of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat seem to be as consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, and rinse-and-repeat is particularly vexing to technologists and investors because it doesn't follow the usual technology adoption lifecycle. Popularized by Geoffrey Moore in his book "Crossing the Chasm", technology adoption usually follows a well-defined path. Technology is developed and finds early interest by innovators, and then early adopters, and if the technology can make the leap across the "chasm", it gets adopted by the early majority market and then it's off to the races with demand by the late majority and finally technology laggards. If the technology can't cross the chasm, then it ends up in the dustbin of history. However, what makes AI distinct is that it doesn't fit the technology adoption lifecycle pattern.

But AI isn't a discrete technology. Rather it's a series of technologies, concepts, and approaches all aligning towards the quest for the intelligent machine. This quest inspires academicians and researchers to come up with theories of how the brain and intelligence works, and their concepts of how to mimic these aspects with technology. AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren't investing in "AI", but rather they're investing in the output of AI research and technologies that can help achieve the goals of AI. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, then new technology implementations are spawned and the cycle of investment renews.

The Need for Understanding

It's clear that intelligence is like an onion (or a parfait): many layers. Once we understand one layer, we find that it only explains a limited amount of what intelligence is about. We discover there's another layer that's not quite understood, and back to our research institutions we go to figure out how it works. In Cognilytica's exploration of the intelligence of voice assistants, the benchmark aims to tease at one of those next layers: understanding. That is, knowing what something is (recognizing an image among a category of trained concepts, converting audio waveforms into words, identifying patterns among a collection of data, or even playing games at advanced levels) is different from actually understanding what those things are. This lack of understanding is why users get hilarious responses from voice assistant questions, and is also why we can't truly get autonomous machine capabilities in a wide range of situations. Without understanding, there's no common sense. Without common sense and understanding, machine learning is just a bunch of learned patterns that can't adapt to the constantly evolving changes of the real world.

One of the visual concepts that's helpful to understand these layers of increasing value is the "DIKUW Pyramid":

DIKUW Pyramid

While the Wikipedia entry above conveniently skips the Understanding step, we believe that understanding is the next logical threshold of AI capability. And like all previous layers of this AI onion, tackling this layer will require new research breakthroughs, dramatic increases in compute capabilities, and volumes of data. What? Don't we have almost limitless data and boundless computing power? Not quite. Read on.

The Quest for Common Sense: Machine Reasoning

Early in the development of artificial intelligence, researchers realized that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how various different things are related to each other. In 1984, the world's longest-lived AI project started. The Cyc project is focused on generating a comprehensive "ontology" and knowledge base of common sense, basic concepts and "rules of thumb" about how the world works. The Cyc ontology uses a knowledge graph to structure how different concepts are related to each other, and an inference engine that allows systems to reason about facts.
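
To make the idea of a knowledge base plus an inference engine concrete, here is a toy sketch in Python: a handful of subject–relation–object facts and a single forward-chaining rule that derives new "causes" links. It is a teaching illustration only, not Cyc's ontology or its actual rule language.

```python
# Toy knowledge base with forward-chaining inference over "causes" relations.
# A teaching sketch, not the Cyc ontology or its rule language.

facts = {
    ("rain", "is_a", "weather"),
    ("rain", "causes", "wet_ground"),
    ("wet_ground", "causes", "slippery_road"),
}

def forward_chain(facts):
    """Repeatedly apply the rule: if (a causes b) and (b causes c), infer (a causes c)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b) in derived:
            for (b2, r2, c) in derived:
                if r1 == r2 == "causes" and b == b2 and (a, "causes", c) not in derived:
                    new.add((a, "causes", c))
        if new:
            derived |= new
            changed = True
    return derived

# The engine "reasons" its way to a fact never stated explicitly.
print(("rain", "causes", "slippery_road") in forward_chain(facts))  # True
```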

The main idea behind Cyc and other understanding-building knowledge encodings is the realization that systems can't be truly intelligent if they don't understand what the underlying things they are recognizing or classifying are. This means we have to dig deeper than machine learning for intelligence. We need to peel this onion one level deeper, scoop out another tasty parfait layer. We need more than machine learning - we need machine reasoning.

Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things that we can train machines to do with machine learning. Machine learning has enabled a wide range of capabilities and functionality and opened up a world of possibility that was not possible without the ability to train machines to identify and recognize patterns in data. However, this power is crippled by the fact that these systems are not really able to functionally use that information for higher ends, or apply learning from one domain to another without human involvement. Even transfer learning is limited in application.

Indeed, we're rapidly facing the reality that we're going to soon hit the wall on the current edge of capabilities with machine learning-focused AI. To get to that next level we need to break through this wall and shift from machine learning-centric AI to machine reasoning-centric AI. However, that's going to require some breakthroughs in research that we haven't realized yet.

The fact that the Cyc project has the distinction of being the longest-lived AI project is a bit of a back-handed compliment. The Cyc project is long-lived because after all these decades the quest for common sense knowledge is proving elusive. Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that a machine knows what you're talking about, but also all the inter-relationships between those entities. There are millions, if not billions, of "things" that a machine needs to know. Some of these things are tangible like "rain" but others are intangible such as "thirst". The work of encoding these relationships is being partially automated, but still requires humans to verify the accuracy of the connections... because after all, if machines could do this we would have solved the machine recognition challenge. It's a bit of a chicken and egg problem this way. You can't solve machine recognition without having some way to codify the relationships between information. But you can't scalably codify all the relationships that machines would need to know without some form of automation.

Are we still limited by data and compute power?

Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have lessened compute load and helped to make data use more efficient. GPUs, TPUs, and emerging FPGAs are helping to provide the raw compute horsepower needed. Yet, despite these advancements, complicated machine learning models with lots of dimensions and parameters still require intense amounts of compute and data. Machine reasoning is easily an order of magnitude or more in complexity beyond machine learning. Accomplishing the task of reasoning out the complicated relationships between things and truly understanding these things might be beyond today's compute and data resources.

The current wave of interest and investment in AI doesn't show any signs of slowing or stopping any time soon, but it's inevitable it will slow at some point for one simple reason: we still don't understand intelligence and how it works. Despite the amazing work of researchers and technologists, we're still guessing in the dark about the mysterious nature of cognition, intelligence, and consciousness. At some point we will be faced with the limitations of our assumptions and implementations and we'll work to peel the onion one more layer and tackle the next set of challenges. Machine reasoning is quickly approaching as the next challenge we must surmount on the quest for artificial intelligence. If we can apply our research and investment talent to tackling this next layer, we can keep the momentum going with AI research and investment. If not, the pattern of AI will repeat itself, and the current wave will crest. It might not be now or even within the next few years, but the ebb and flow of AI is as inevitable as the waves upon the shore.

See the article here:
Going Beyond Machine Learning To Machine Reasoning - Forbes

3 Cheap Machine Learning Stocks That Smart Investors Will Snap Up Now – InvestorPlace

Machine learning stocks represent publicly traded firms specializing in a subfield of artificial intelligence (AI). The terms AI and machine learning have become synonymous, but machine learning is really about making machines imitate intelligent human behavior. Semantics aside, machine learning and AI have come to the forefront in 2023.

Generative AI has boomed this year, and the race is on to identify the next must-buy shares in the sector. The firms identified in this article aren't cheap in an absolute sense. Their price can be quite high. However, they are expected to provide strong returns, making them a bargain for investors currently and cheap in a relative sense.

Let's begin our discussion of machine learning stocks with ServiceNow (NYSE:NOW). The firm offers a cloud computing platform utilizing machine learning to help firms manage their workflows. Enterprise AI is a burgeoning field that will only continue to grow as firms integrate machine learning into their workflows.

As mentioned in the introduction, ServiceNow is not cheap in an absolute sense. At $563 a share, there are a lot of other equities that investors could buy for much cheaper. However, Wall Street expects ServiceNow to move past $600 and perhaps $700. The metrics-oriented website Gurufocus believes ServiceNow's potential returns are even higher and pegs its value at $790.

The firm's Q2 earnings report, released July 26, gives investors a lot of reason to believe that share prices should continue to rise. The firm exceeded revenue growth and profitability guidance during the period, which allowed management the confidence to raise subscription revenue and margin guidance for the year.

Q2 subscription revenue reached $2.075 billion, up 25% year-over-year (YOY). Total revenues reached $2.150 billion in the quarter.

AMD (NASDAQ:AMD) and its stock continued to be overshadowed by its main rival, Nvidia (NASDAQ:NVDA). The former has almost doubled in 2023, while the latter has more than tripled. It's basically become accepted that AMD is far behind its competition in all things AI and machine learning. However, the news is mixed, making AMD particularly interesting as Nvidia shares are continually scrutinized for their price levels.

An article from early 2023 noted that the comparison between AMD and Nvidia isn't unfair. It concluded that Nvidia is better all around. However, that article also touched on the notion that AMD could potentially optimize its cards through software capabilities inherent to the firm.

That was the same conclusion MosaicML came to when testing the two firms head-to-head several months later. AMD isn't very far behind Nvidia, after all, and it has a chance to make up ground via its software prowess. That's exactly why investors should consider AMD currently, given its relatively cheaper price.

CrowdStrike (NASDAQ:CRWD) operates in a combination of growing fields. The stock represents cybersecurity and machine learning directed toward identifying IT threats. It provides endpoint security and was recently awarded its second consecutive annual honor as the best at the SC Awards Europe 2023. The company is well-regarded in its industry and is growing very quickly.

The entity also has strong fundamentals. In Q1, revenues increased by 61% YOY, reaching $487.8 million. CrowdStrike's net loss narrowed from $85 million to $31.5 million during the period YOY. The firm generated $215 million in cash flow, leaving a lot of room to maneuver overall.

Furthermore, CrowdStrike announced it is partnering with Amazon (NASDAQ:AMZN) to work with AWS on generative AI applications to increase security. CrowdStrike is arguably the best endpoint security stock available overall, and its strong inroads into AI and machine learning have set it up for even greater growth moving forward.

On the date of publication, Alex Sirois did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.

View post:

3 Cheap Machine Learning Stocks That Smart Investors Will Snap Up Now - InvestorPlace

Tim Cook says AI, machine learning are part of virtually every product Apple is building – CryptoSlate

Original post:

Tim Cook says AI, machine learning are part of virtually every product Apple is building - CryptoSlate

AI GNNs: Transforming the Landscape of Machine Learning – Fagen wasanni

Unveiling the Power of AI GNNs: Transforming the Landscape of Machine Learning

Artificial Intelligence (AI) continues to redefine the boundaries of what is possible in the realm of technology, and its latest offering, Graph Neural Networks (GNNs), is set to transform the landscape of machine learning. GNNs are a novel and powerful tool that allows AI to understand and interpret data in ways that were previously unimaginable, opening up a world of possibilities for machine learning applications.

GNNs are a type of neural network designed to work specifically with graph data structures, which are mathematical models that represent relationships between objects. Traditional neural networks struggle to handle this type of data, as they are primarily designed to work with grid-like data structures. However, GNNs are uniquely equipped to handle graph data, enabling them to capture complex relationships and patterns that would otherwise go unnoticed.
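
As a rough sketch of what "working with graph data" means in practice, here is one graph-convolution-style message-passing step in NumPy: each node averages its neighbours' features (plus its own) and applies a learned transform. The graph, feature sizes, and weights below are illustrative assumptions, not any library's reference implementation.

```python
# Minimal sketch of one message-passing step in a graph neural network:
# mean-aggregate neighbour features (with self-loops), project, apply ReLU.
import numpy as np

A = np.array([[0, 1, 1, 0],      # adjacency matrix of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)        # one 8-dimensional feature vector per node

A_hat = A + np.eye(4)            # add self-loops so each node keeps its own features
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
W = np.random.randn(8, 4)        # learnable weights (random here; trained in practice)

H = np.maximum(D_inv @ A_hat @ X @ W, 0)   # aggregate neighbours, project, ReLU
print(H.shape)                              # (4, 4): a new 4-dim embedding per node
```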

The transformative power of GNNs lies in their ability to process and interpret complex, non-Euclidean data. This means they can handle data that does not fit neatly into a grid, such as social networks, molecular structures, or transportation networks. This capability opens up a new frontier in machine learning, allowing AI to tackle problems and analyze data in ways that were previously out of reach.

For instance, in the field of social network analysis, GNNs can identify influential individuals within a network, detect communities, and predict future interactions. In the realm of bioinformatics, GNNs can be used to predict the properties of molecules based on their structure, a task that could have significant implications for drug discovery. In transportation, GNNs can optimize routes and schedules, leading to more efficient and sustainable systems.

The application of GNNs extends beyond these examples. In fact, any field that deals with complex, interconnected data can potentially benefit from the power of GNNs. This versatility is one of the reasons why GNNs are being hailed as a game-changer in the world of machine learning.

However, as with any new technology, there are challenges to overcome. Training GNNs requires a significant amount of computational power and can be time-consuming. There are also questions about how to best design and optimize GNNs for specific tasks. Despite these challenges, the potential benefits of GNNs are immense, and researchers are actively working to address these issues.

The introduction of GNNs represents a significant step forward in the field of AI. By enabling machines to understand and interpret complex, interconnected data, GNNs are opening up new possibilities for machine learning applications. As researchers continue to refine and develop this technology, we can expect to see GNNs playing an increasingly important role in a wide range of fields, from social network analysis to bioinformatics, transportation, and beyond.

In conclusion, the advent of AI GNNs is transforming the landscape of machine learning. Their ability to handle complex, non-Euclidean data is unlocking new possibilities and applications, making them a powerful tool in the AI toolkit. As we continue to explore and harness the potential of GNNs, the future of machine learning looks more promising than ever.

Go here to read the rest:

AI GNNs: Transforming the Landscape of Machine Learning - Fagen wasanni

Machine-learning for the prediction of one-year seizure recurrence … – Nature.com

Read more here:

Machine-learning for the prediction of one-year seizure recurrence ... - Nature.com