Two Years into the Government’s National Quantum Initiative – Nextgov

Monday marked two years since the passage of the National Quantum Initiative (NQI) Act, and in that time, federal agencies followed through on its early calls and helped lay the groundwork for new breakthroughs across the U.S. quantum realm.

Now, the sights of those helping implement the law are set on the future.

"I would say in five years, something we'd love to see is ... a better idea of, 'What are the applications for a quantum computer that's buildable in the next five to 10 years, that would be beneficial to society?'" Office of Science and Technology Policy Assistant Director for Quantum Information Science Dr. Charles Tahan told Nextgov in an interview Friday. He also serves as the director of the National Quantum Coordination Office, a cooperation-pushing hub established by the legislation.

Tahan reflected on some foundational moves made over the last 24 months and offered a glimpse into his team's big-ticket priorities for 2021.

Quantum devices and technologies form an ever-evolving field that homes in on phenomena at the atomic scale. Potential applications are coming to light, and are expected to radically reshape science, engineering, computing, networking, sensing, communication and more. They offer promises like an unhackable internet or navigation support in places disconnected from GPS.

Federal agencies have a long history of exploring physical sciences and quantum-related pursuits, but previous efforts were often siloed. Signed by President Donald Trump in 2018, the NQI Act sought to provide for a coordinated federal program to accelerate quantum research and development for the economic and national security of America. It assigned specific jobs for the National Institute of Standards and Technology, Energy Department and National Science Foundation, among others, and mandated new collaborations to boost the nation's quantum workforce talent pipeline and strengthen society's grasp of this relatively fresh area of investment. The functions of the National Quantum Coordination Office, or NQCO, were also set forth in the bill, and it was officially instituted in early 2019. Since then, the group has helped connect an array of relevant stakeholders and facilitate new initiatives proposed by the law.

"Now, everything that's been called out in the act has been established; it's started up," Tahan explained. He noted the three agencies with the weightiest responsibilities spent 2019 planning out their courses of action within their communities and, this year, subsequently launched major new efforts.

One of the latest was unveiled in August by the Energy Department, which awarded $625 million over five years, subject to appropriations, to its Argonne, Brookhaven, Fermi, Oak Ridge and Lawrence Berkeley national laboratories to establish QIS Research Centers. In each, top thinkers will link up to push forward collaborative research spanning many disciplines. Academic and private-sector institutions also pledged to provide $340 million in contributions for the work.

"These are about $25 million each; that's a tremendous amount of students, and postdocs, and researchers," Tahan said. "And those are spread out across the country, focusing on all different areas of quantum: computing, sensing and networking."

NSF this summer also revealed the formation of new Quantum Leap Challenge Institutes to tackle fundamental research hurdles in quantum information science and engineering over the next half-decade. The University of Colorado, University of Illinois-Urbana-Champaign, and University of California, Berkeley are set to head and house the first three institutes, though Tahan confirmed more could be launched next year. The initiative is backed by $75 million in federal funding, and while it will take advantage of existing infrastructures, non-governmental entities involved are also making their own investments and constructing new facilities.

"That's the foundation, you know," Tahan said. "The teams have been formed, the research plans have been written; that's a tremendous amount of work, and now they're off actually working. So now, we start to reap the rewards because all the heavy lifting of getting people organized has been done."

Together with NSF, OSTP also helped set in motion the National Q-12 Education Partnership. It intends to connect public, private and academic sector quantum players and cohesively create and release learning materials to help U.S. educators produce new courses that engage students with quantum fields. The work is ultimately meant to spur K-12 students' interest in the emerging areas earlier in their education, and NSF will award nearly $1 million across QIS education efforts through the work.

And beyond the government's walls and those of academia, the NQI Act also presented new opportunities for industry. Meeting the law's requirements, NIST helped convene a consortium of cross-sector stakeholders to strategically confront existing quantum-related technology, standards and workforce gaps and needs. This year, that group, the Quantum Economic Development Consortium, or QED-C, bloomed in size, established a more formal membership structure and announced the companies that make up its steering committee.

"It took a year or more to get all these companies together and then write partnership agreements. So, that partnership agreement was completed towards the beginning of summer, and the steering committee signed it over the summer, and now there are, I think, 100 companies or so who have signed it," Tahan said. "So, it's up and running. It's a real economic development consortium (that's a technical thing) and that's a big deal. And how big it is, and how fast it's growing, is really, really remarkable."

This fall also brought the launch of quantum.gov, a one-stop website streamlining federal work and policies. The quantum coordination office simultaneously released a comprehensive roadmap pinpointing crucial areas of needed research, dubbed the Quantum Frontiers Report.

That assessment incorporates data collected from many workshops and prior efforts OSTP held to promote the national initiative. It establishes eight frontiers that contain core problems, with fundamental questions confronting QIS today, which must be addressed to push forward research and development breakthroughs in the space. They include expanding opportunities for quantum technologies to benefit society, characterizing and mitigating quantum errors, and more.

"It tries to cut through the hype a little bit," Tahan explained. "It's a field that requires deep technical expertise. So, it's easy to be led in the wrong direction if you don't have all the data. So we try to narrow it down into: here are the important problems, here's what we really don't know, here's what we do know, and go this way, and that will hopefully benefit the whole enterprise."

Quantum-focused strides have also been made by the U.S. on the international front. Tahan pointed to the first quantum cooperation agreement between America and Japan, signed late last year, which laid out basic core values guiding their collaboration.

"We've been using that as a model to engage with other countries. We've had high-level meetings with Australia, industry collaborations with the U.K., and we're engaging with other countries. So, that's progressing," Tahan said. "Many countries are interested in quantum, as you can guess; there's a lot of investment around the world, and many want to work with us on going faster together."

China had also made its own notable quantum investments (some predating the NQI Act), and touted new claims of quantum supremacy, following Google, on the global stage this year.

"I wouldn't frame it as a competition ... We are still very much in the research phase here, and we'll see how those things pan out," Tahan said. "I think we're taking the right steps, collectively. The U.S. ecosystem of companies, nonprofits and governments are, based on our strategy, both technical and policy, going in the right direction and making the right investments."

Vice President-elect Kamala Harris previously put forth legislation to broadly advance quantum research, but at this point, the Biden administration hasn't publicly shared any intentions to prioritize government-steered ongoing or future quantum efforts.

"[One of] the big things we're looking towards in the next year is workforce development. We have a critical shortage of, or need for, talent in this space. It's a very diverse set of skills. With these new centers, just do the math: how many students and postdocs are you going to need to fill up those, to do all that research? It's a very large number," Tahan said. "And so we're working on something to create that pipeline."

In that light, the team will work to continue to develop NSF's ongoing Q-12 partnership. They'll also reflect on what's been built so far through the national initiative to identify any crucial needs that may have been overlooked.

"As you stand something up that's really big, you're always going to make some mistakes. What have you missed?" Tahan noted.

And going forward, the group plans to home in more deeply on balancing the economic and security implications of the burgeoning fields.

"As the technology gets more and more advanced, how do we be first to realize everything but also protect our investments?" Tahan said. "And getting that balance right is going to require careful policy thinking about how to update the way the United States does things."


Encryption, zero trust and the quantum threat security predictions for 2021 – BetaNews

We've already looked at the possible cybercrime landscape for 2021, but what about the other side of the coin? How are businesses going to set about ensuring they are properly protected next year?

Josh Bregman, COO of CyGlass thinks security needs to put people first, "2020 has been incredibly stressful. Organizations should therefore look to put people first in 2021. Cybersecurity teams are especially stressed. They've been tasked with securing a changing environment where more people than ever before are working remotely. They've also faced new threats as cyber criminals have looked to take advantage of the pandemic: whether through phishing attacks or exploiting weaknesses in corporate infrastructure. Being proactive, encouraging good cyber hygiene and executing a well thought out cyber program will go a long way towards promoting a peaceful and productive 2021, not least because it will build resiliency."

Mary Writz, VP of product management at ForgeRock thinks quantum computing will change how we think about secure access, "When quantum becomes an everyday reality, certain types of encryption and thereby authentication (using encrypted tokens) will be invalidated. Public Key Infrastructure (PKI) and digital signatures will no longer be considered secure. Organizations will need to be nimble to modernize identity and access technology."

Gaurav Banga, CEO and founder of Balbix, also has concerns over quantum computing's effect on encryption, "Quantum computing is likely to become practical soon, with the capability to break many encryption algorithms. Organizations should plan to upgrade to TLS 1.3 and quantum-safe cryptographic ciphers soon. Big Tech vendors Google and Microsoft will make updates to web browsers, but the server-side is for your organization to review and change. Kick off a Y2K-like project to identify and fix your organization's encryption before it is too late."
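A starting point for the kind of audit Banga describes can be as simple as checking what each of your servers actually negotiates today. The sketch below uses only the Python standard library; the hostnames are placeholders, and quantum-safe ciphers themselves are beyond what the `ssl` module exposes, so this only flags servers that fall back below TLS 1.3:

```python
# Probe each host in an inventory and flag anything negotiating below TLS 1.3.
import socket
import ssl

HOSTS = ["example.com", "internal-app.example.net"]  # hypothetical inventory

def negotiated_tls_version(host: str, port: int = 443, timeout: float = 5.0) -> str:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3" or "TLSv1.2"

for host in HOSTS:
    try:
        version = negotiated_tls_version(host)
        flag = "" if version == "TLSv1.3" else "  <-- upgrade candidate"
        print(f"{host}: {version}{flag}")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}: probe failed ({exc})")
```

Running a scan like this across an inventory is the "identify" half of the Y2K-style project; the "fix" half is server configuration work that depends on each host's stack.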

Sharon Wagner, CEO of Sixgill predicts greater automation, "We'll see organizations ramp up investment in security tools that automate tasks. The security industry has long been plagued by talent shortages, and companies will look toward automation to even the playing field. While many of these automated tools were previously only accessible to large enterprises, much of this technology is becoming available to businesses of all sizes. With this, security teams will be able to cover more assets, eliminate blindspots at scale, and focus more on the most pressing security issues."

Michael Rezek, VP of cybersecurity strategy at Accedian sees room for a blend of tools and education, "As IT teams build out their 2021 cybersecurity strategy, they should look most critically to network detection & response solutions (NDR), and other complementary solutions like endpoint security platforms that can detect advanced persistent threats (APT) and malware. For smaller companies, managed security services such as managed defense and response are also good options. However, a comprehensive security strategy must also include educating all employees about these threats and what to watch out for. Simple cybersecurity practices like varying and updating passwords and not clicking on suspicious links can go a long way in defending against ransomware. Perhaps most importantly, since no security plan is foolproof, companies should have a plan in the event of a ransomware attack. This is especially important since attackers might perform months of reconnaissance before actually striking. Once they have enough data, they'll typically move laterally inside the network in search of other prized data. Many cybercrime gangs will then install ransomware and use the stolen data as a back-up plan in case the organization refuses to pay. The more rapidly you can detect a breach and identify what information was exploited, the better your chances of mitigating this type of loss. Having a plan and the forensic data to back it up will ensure your organization and its reputation are protected."

Amir Jerbi, CTO at Aqua Security, sees more automation too, "As DevOps moves more broadly to use Infrastructure as Code (IaC) to automate provisioning of cloud native platforms, it is only a matter of time before vulnerabilities in these processes are exploited. The use of many templates leaves an opening for attackers to embed deployment automation of their own components, which when executed may allow them to manipulate the cloud infrastructure of their attack targets."

Marlys Rodgers, chief information security officer and head of technology oversight at CSAA Insurance Group, inaugural member of the AttackIQ Informed Defenders Council says, "Despite the global COVID-19 pandemic, businesses still have to function and deliver on their promises to customers. This means adapting and finding new ways to enable employees to be productive from the safety of their homes. As CISO and Head of Technology Oversight for my company, I am dedicated to structuring and sustaining a security program that enables the business, as opposed to restricting capabilities in the name of minimizing risk. Additionally, I believe in complete transparency regarding the company's security posture across all levels, including the C-suite and board, so that we may work together to understand our risk and prioritize security investments accordingly. These two guiding principles have served me well throughout my career, but in 2020 especially, they allowed my company to innovate to better serve our customers while simultaneously scaling the security program."

Devin Redmond, CEO and co-founder of Theta Lake, believes we'll see more focus on the security of collaboration tools, "Incumbent collaboration tools (Zoom, Teams, Webex) are going to get dragged into conversations about privacy law and big tech, further pressuring them to stay on top of security and compliance capabilities. At least two regulatory agencies will make explicit statements about regulatory obligations to retain and supervise collaboration conversations. Additionally, collaboration tools will replace many call center interactions and force organizations to confront related compliance, privacy, and security risks."

Cybersecurity needs to become 'baked in' according to Charles Eagan, CTO at BlackBerry:

Cybersecurity is, in all too many ways, an after-market add-on. But this kind of model can become a roadblock to comprehensive security -- like plugging the sink while the faucet is already on.

Take, for instance, the connected vehicle market: vehicles continue to make use of data-rich sensors to deliver safety and comfort features to the driver. But if these platforms aren't built with security as a prerequisite, it's easy to open up a new cyberattack vector with each new feature. In many cases, the data that drives Machine Learning and AI is only useful -- and safe -- if it cannot be compromised. Cybersecurity must become a pillar of product and platform development from day one, instead of added on after the architecture is established.

Tony Lauro, Akamai's director of security technology and strategy thinks multi-factor authentication must become the norm, "Over the past 12 months, attacks against remote workers have increased dramatically, and the techniques used to do so have also increased in complexity. In 2021 security-conscious organizations will be compelled to re-evaluate their requirements for using multi-factor authentication (MFA) technology for solutions that incorporate a strong crypto component to defend against man in the middle and phishing-based 2FA bypasses."

Jerry Ray, COO of enterprise data security and encryption company SecureAge, thinks we'll see greater use of encryption, "Throughout most of 2020, VPNs, access controls, and zero trust user authentication became all the rage in the immediate push to allow employees to work from home. As the year ends and 2021 unfolds, though, a greater appreciation for data encryption has been slowly coming to life. As work from home will continue throughout 2021 and the ploys used by hackers to get into the untamed endpoints become more refined and clever, data that can't be used even if stolen or lost will prove the last, best line of defense."

Mike Riemer, global chief technology officer of Ivanti thinks organizations must adopt zero trust, "As employees continue to work from home, enterprises must come to terms with the reality that it may not be just the employee accessing a company device. Other people, such as a child or spouse, may use a laptop, phone, or tablet and inadvertently download ransomware or other types of software malware. Then, when the employee starts using the device to access a corporate network or specific corporate cloud application, it becomes a rogue device. Without having eyes on employees, how do businesses ensure the user and device are trusted? And what about the application, data and infrastructure? All of these components must be verified on a continual basis every few minutes to maintain a superior secure access posture. That is why organizations must adopt a Zero Trust Access solution capable of handling the hyper-converged technology and infrastructure within today's digital workplace by providing a unified, cloud-based service that enables greater accessibility, efficiency, and risk reduction."

Casey Ellis, CTO, founder, and chairman of Bugcrowd thinks more governments around the world will adopt vulnerability disclosure as a default:

Governments are collectively realizing the scale and distributed nature of the threats they face in the cyber domain, as well as the league of good-faith hackers available to help them balance forces. When you're faced with an army of adversaries, an army of allies makes a lot of sense.

Judging by the language used in the policies released in 2020, governments around the world (including the UK) are also leaning in to the benefit of transparency inherent to a well-run VDP to create confidence in their constituents (neighborhood watch for the internet). The added confidence, ease of explanation, and the fact that security research and incidental discovery of security issues happen whether there is an invitation or not is making this an increasingly easy decision for governments to make.



Top 10 AI and machine learning stories of 2020 – Healthcare IT News

Toward the tail end of pre-pandemic 2019, Mayo Clinic Chief Information Officer Cris Ross stood on a stage in California and declared, "This artificial intelligence stuff is real."

Indeed, while some may argue that AI and machine learning might have been harnessed better during the early days of COVID-19, and while the risk of algorithmic bias is very real, there's little question that artificial intelligence is evolving and maturing by the day for an array of use cases across healthcare.

Here are the most-read stories about AI during this most unusual year.

UK to use AI for COVID-19 vaccine side effects. On a day when vaccines, developed in record time, first begin to be administered in the U.S., it's worth remembering AI's crucial role in helping the world get to this hopefully pivotal moment.

AI algorithm IDs abnormal chest X-rays from COVID-19 patients. Machine learning has been a hugely valuable diagnostic tool as well, as illustrated by this story about a tool from cognitive computing vendor behold.ai that promises "instant triage" based on lung scans, offering faster diagnosis of COVID-19 patients and helping with resource allocation.

How AI use cases are evolving in the time of COVID-19. In a HIMSS20 Digital presentation, leaders from Google Cloud, Nuance and Health Data Analytics Institute shared perspective on how AI and automation were being deployed for pandemic response, from the hunt for therapeutics and vaccines to analytics to optimize revenue cycle strategies.

Microsoft launches major $40M AI for Health initiative. The company said the five-year AI for Health (part of its $165 million AI for Good initiative) will help healthcare organizations around the world deploy leading-edge technologies in the service of three key areas: accelerating medical research, improving worldwide understanding to protect against global health crises such as COVID-19, and reducing health inequity.

How AI and machine learning are transforming clinical decision support. "Today's digital tools only scratch the surface," said Mayo Clinic Platform President Dr. John Halamka. "Incorporating newly developed algorithms that take advantage of machine learning, neural networks, and a variety of other types of artificial intelligence can help address many of the shortcomings of human intelligence."

Clinical AI vendor Jvion unveils COVID Community Vulnerability Map. In the very early days of the pandemic, clinical AI company Jvion launched this interactive map, which tracks the social determinants of health, helping identify populations down to the census-block level that are at risk for severe outcomes.

AI bias may worsen COVID-19 health disparities for people of color. An article in the Journal of the American Medical Informatics Association asserts that biased data models could further the disproportionate impact the COVID-19 pandemic is already having on people of color. "If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden," said researchers.

The origins of AI in healthcare, and where it can help the industry now. "The intersection of medicine and AI is really not a new concept," said Dr. Taha Kass-Hout, director of machine learning and chief medical officer at Amazon Web Services. (There were limited chatbots and other clinical applications as far back as the mid-60s.) But over the past few years, it has become ubiquitous across the healthcare ecosystem. "Today, if you're looking at PubMed, it cites over 12,000 publications with deep learning, over 50,000 machine learning," he said.

AI, telehealth could help address hospital workforce challenges. "Labor is the largest single cost for most hospitals, and the workforce is essential to the critical mission of providing life-saving care," noted a January American Hospital Association report on the administrative, financial, operational and clinical uses of artificial intelligence. "Although there are challenges, there also are opportunities to improve care, motivate and re-skill staff, and modernize processes and business models that reflect the shift toward providing the right care, at the right time, in the right setting."

AI is helping reinvent CDS, unlock COVID-19 insights at Mayo Clinic. In a HIMSS20 presentation, John Halamka shared some of the most promising recent clinical decision support advances at the Minnesota health system and described how they're informing treatment decisions for an array of different specialties and helping shape its understanding of COVID-19. "Imagine the power [of] an AI algorithm if you could make available every pathology slide that has ever been created in the history of the Mayo Clinic," he said. "That's something we're certainly working on."

Twitter: @MikeMiliardHITN. Email the writer: mike.miliard@himssmedia.com. Healthcare IT News is a HIMSS publication.


Machine learning and statistical prediction of patient quality-of-life after prostate radiation therapy. – UroToday

Thanks to advancements in diagnosis and treatment, prostate cancer patients have high long-term survival rates. Currently, an important goal is to preserve quality of life during and after treatment. The relationship between the radiation a patient receives and the subsequent side effects he experiences is complex and difficult to model or predict. Here, we use machine learning algorithms and statistical models to explore the connection between radiation treatment and post-treatment gastro-urinary function. Since only a limited number of patient datasets are currently available, we used image flipping and curvature-based interpolation methods to generate more data to leverage transfer learning. Using interpolated and augmented data, we trained a convolutional autoencoder network to obtain near-optimal starting points for the weights. A convolutional neural network then analyzed the relationship between patient-reported quality-of-life and radiation doses to the bladder and rectum. We also used analysis of variance and logistic regression to explore organ sensitivity to radiation and to develop dosage thresholds for each organ region. Our findings show no statistically significant association between the bladder and quality-of-life scores. However, we found a statistically significant association between the radiation applied to posterior and anterior rectal regions and changes in quality of life. Finally, we estimated radiation therapy dose thresholds for each organ. Our analysis connects machine learning methods with organ sensitivity, thus providing a framework for informing cancer patient care using patient reported quality-of-life metrics.
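As a rough illustration of the training strategy the abstract describes (not the authors' code), the following PyTorch sketch pretrains a small convolutional autoencoder on flip-augmented images and then reuses the encoder weights as a near-optimal starting point for a downstream classifier; the shapes, hyperparameters, and random stand-in data are invented:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def augment(batch: torch.Tensor) -> torch.Tensor:
    # Image flipping, one of the augmentations the abstract mentions.
    return torch.cat([batch, torch.flip(batch, dims=[-1])], dim=0)

ae = ConvAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
doses = torch.rand(8, 1, 64, 64)  # stand-in for dose-distribution images

for _ in range(5):  # toy reconstruction-training loop
    x = augment(doses)
    opt.zero_grad()
    loss = loss_fn(ae(x), x)
    loss.backward()
    opt.step()

# Reuse the pretrained encoder as the starting point for a 2-class predictor.
classifier = nn.Sequential(ae.encoder, nn.Flatten(), nn.Linear(32 * 16 * 16, 2))
```

The idea is that reconstruction pretraining on plentiful augmented data gives the scarce labeled quality-of-life data a better-initialized network to fine-tune.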

Computers in biology and medicine. 2020 Nov 28 [Epub ahead of print]

Zhijian Yang, Daniel Olszewski, Chujun He, Giulia Pintea, Jun Lian, Tom Chou, Ronald C Chen, Blerta Shtylla

New York University, New York, NY, 10012, USA; Applied Mathematics and Computational Science Program, University of Pennsylvania, Philadelphia, PA, 19104, USA., Carroll College, Helena, MT, 59625, USA; Computer, Information Science and Engineering Department, University of Florida, Gainesville, FL, 32611, USA., Smith College, Northampton, MA, 01063, USA., Simmons University, Boston, MA, USA; Department of Psychology, Tufts University, Boston, MA, 02111, USA., Department of Radiation Oncology, The University of North Carolina, Chapel Hill, NC, 27599, USA., Depts. of Computational Medicine and Mathematics, UCLA, Los Angeles, CA, 90095-1766, USA., Department of Radiation Oncology, University of Kansas Medical Center, Kansas City, KS, 66160, USA., Department of Mathematics, Pomona College, Claremont, CA, 91711, USA; Early Clinical Development, Pfizer Worldwide Research, Development, and Medical, Pfizer Inc, San Diego, CA, 92121, USA. Electronic address: .

PubMed http://www.ncbi.nlm.nih.gov/pubmed/33333364


Enhancing Machine-Learning Capabilities In Oil And Gas Production – Texas A&M University Today

Machine-learning processes are invaluable at mining data for patterns in oil and gas production, but are generally limited in interpreting the information for decision-making needs.


Both a machine-learning algorithm and an engineer can predict if a bridge is going to collapse when they are given data that shows a failure might happen. Engineers can interpret the data based on their knowledge of physics, stresses and other factors, and state why they think the bridge is going to collapse. Machine-learning algorithms generally can't give an explanation of why a system would fail because they are limited in terms of interpretability based on scientific knowledge.

Since machine-learning algorithms are tremendously useful in many engineering areas, such as complex oil and gas processes, Petroleum Engineering Professor Akhil Datta-Gupta is leading Texas A&M University's participation in a multi-university and national laboratory project to reduce this limitation. The project began Sept. 2 and was initially funded by the U.S. Department of Energy (DOE). He and the other participants will inject science-informed decision-making into machine-learning systems, creating an advanced evaluation system that can assist with the interpretation of reservoir production processes and conditions while they happen.

Hydraulic fracturing operations are complex. Data is continually recorded during production processes so it can be evaluated and modeled to simulate what happens in a reservoir during the injection and recovery processes. However, these simulations are time-consuming to make, meaning they are not available during production and are more of a reference or learning tool for the next operation.

Enhanced by Datta-Gupta's fast marching method, machine-learning systems can quickly compress data so they can render how fluid movements change in a reservoir during actual production processes.


The DOE project will create an advanced system that will quickly sift data produced during hydraulic fracturing operations through physics-enhanced machine-learning algorithms, which will filter the outcomes using past observed experiences, and then render near real-time changes to reservoir conditions during oil recovery operations. These rapid visual evaluations will allow oil and gas operators to see, understand and effectively respond to real-time situations. The time advantage permits maximum production in areas that positively respond to fracturing, and stops unnecessary well drilling in areas that show limited response to fracturing.

It takes considerable effort to determine what changes occur in the reservoir, said Datta-Gupta, a University Distinguished Professor and Texas A&M Engineering Experiment Station researcher. This is why speed becomes critical. We are trying to do a near real-time analysis of the data, so engineering operations can make decisions almost on the fly.

The Texas A&M team's first step will focus on evaluating shale oil and gas field tests sponsored with DOE funding and identifying the machine-learning systems to use as the platform for the project. Next, they will upgrade these systems to merge multiple types of reservoir data, both actual and synthetic, and evaluate each system on how well it visualizes underground conditions compared to known outcomes.

At this point, Datta-Gupta's research related to the fast marching method (FMM) for fluid front tracking will be added to speed up the system's visual calculations. FMM can rapidly sift through, track and compress massive amounts of data in order to transform the 3D aspect of reservoir fluid movements into a one-dimensional form. This reduction in complexity allows for the simpler, and faster, imaging.
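For readers unfamiliar with the technique, the open-source scikit-fmm package implements the same family of fast marching solvers. The sketch below is an illustration, not Datta-Gupta's implementation: it computes front arrival times over a heterogeneous speed field (a stand-in for reservoir properties) and then collapses the multidimensional picture onto a single time axis, which is the essence of the dimensionality reduction described above:

```python
import numpy as np
import skfmm  # pip install scikit-fmm

n = 100
phi = np.ones((n, n))
phi[50, 50] = -1.0  # zero contour marks the injection/producer location

# Heterogeneous propagation speed standing in for spatially varying
# reservoir properties (a 2D toy; the same call works in 3D).
rng = np.random.default_rng(0)
speed = 0.5 + rng.random((n, n))

# Arrival time of the propagating front at every cell.
arrival = skfmm.travel_time(phi, speed, dx=1.0)

# Collapse onto one axis: volume reached as a function of time.
times = np.linspace(0.0, float(arrival.max()), 50)
reached = [int((arrival <= t).sum()) for t in times]
print(list(zip(times[:5], reached[:5])))
```

The `arrival` field plays the role of a one-dimensional "time" coordinate that indexes the reservoir volume, which is what makes near real-time visualization tractable.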

Using known results from recovery processes in actual reservoirs, the researchers will train the system to understand changes the data inputs represent. The system will simulate everyday information, like fluid flow direction and fracture growth and interactions, and show how fast reservoir conditions change during actual production processes.

"We are not the first to use machine-learning in petroleum engineering," Datta-Gupta said. "But we are pioneering this enhancement, which is not like the usual input-output relationship. We want complex answers, ones we can interpret to get insights and predictions without compromising speed or production time. I find this very exciting."


How AWS’s five tenets of innovation lend themselves to machine learning – Information Age

Swami Sivasubramanian, vice-president of machine learning at AWS, spoke about the five tenets of innovation that AWS strives towards while announcing new machine learning tools during AWS re:Invent.


As machine learning disrupts more and more industries, it has demonstrated its potential to reduce time spent by employees on manual tasks. However, training machine learning models can take months, creating excessive costs.

With this in mind, AWS vice-president of machine learning, Swami Sivasubramanian used his keynote speech at AWS re:Invent to announce new tools that aim to speed up operations and save costs. Sivasubramanian went through five tenets for machine learning that AWS observes, which acted as vessels for further explanations of use cases for the new tools.

Firstly, Sivasubramanian explained the importance of providing firm foundations, vital for freedom of creativity. The technology has provided foundations for autonomous vehicles and robotic communication, among other budding spaces. One drawback of machine learning, however, is that a single framework is yet to be established for all practitioners, with Tensorflow, Pytorch and Mxnet being the main three.

AWS SageMaker, the cloud service provider's machine learning service, has been able to speed up training processes. During the keynote, availability of faster distributed training on Amazon SageMaker was announced, which is predicted to complete training up to 40% faster than before and can allow for completion in the space of a few hours.
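For context, enabling that distributed training in the SageMaker Python SDK amounts to one extra argument on the estimator. A minimal sketch, with the role ARN, training script, and S3 path as placeholders:

```python
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",  # your training script (hypothetical)
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder ARN
    framework_version="1.8.1",
    py_version="py36",
    instance_count=2,
    instance_type="ml.p3.16xlarge",  # multi-GPU instances are required
    # The flag that turns on SageMaker's distributed data-parallel library.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit("s3://my-bucket/training-data")  # placeholder S3 path
```

The training script itself still needs the library's data-parallel wrapper around the model, but the cluster orchestration is handled by the service.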


From preparing and optimising data and algorithms to training and deployment, machine learning training can be time-consuming and costly. AWS released SageMaker in 2017 to break down barriers for budding data engineers.

Following its predecessor, SageMaker, Data Wrangler was launched during re:Invent to accelerate data preparation, which commonly takes up most of the time spent on training machine learning algorithms. This tool allows for the preparation of data from multiple sources without the need to write code. With more than 300 data transformations, Data Wrangler can cut the time taken to aggregate and prepare data from weeks to minutes.

To then make it even easier for builders to reach their project goals in the quickest time possible, the Sagemaker Feature Store was launched, which allows features to stay in sync with each other and aggregate data faster.

Sagemaker Pipelines is another new tool which allows developers to leverage end-to-end continuous integration and delivery.

There is also a need to understand and eradicate biases, and in response to this, AWS announced Sagemaker Clarify. This tool works in four steps: it detects bias during analysis with algorithms and delivers a report so steps can be taken; models are checked for unbalanced data; once deployed, a report is given for each prediction input, which helps provide information to customers; and bias detection can be carried out over time, with notifications given if any bias is found.
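The first of those steps, a pre-training bias scan, looks roughly like this in the SageMaker Python SDK; the bucket paths, column names, and the facet being checked are assumptions for illustration:

```python
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",   # placeholder dataset
    s3_output_path="s3://my-bucket/clarify-report",  # placeholder output
    label="approved",
    headers=["age", "gender", "income", "approved"],
    dataset_type="text/csv",
)
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # the favorable outcome
    facet_name="gender",            # the attribute checked for imbalance
)

# Produces a report of pre-training bias metrics for the chosen facet.
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)
```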


John Loughlin, chief technologist in data and analytics at Cloudreach, said: "The Clarify product really caught my eye, because bias is an important problem that we need to address, so that people maintain their trust in these kinds of technology. We don't want adoption to be impeded because models aren't doing what they're supposed to."

Also announced during the keynote was deep profiling for Sagemaker Debugger, which allows builders to monitor performance in order to move the training process along faster.

With the aim of making machine learning accessible to as many builders as possible, SageMaker Autopilot was introduced last year to provide recommendations on the best models for any project. The tool features added visibility, showing users how models are built, and ranking models using a leaderboard, before one is decided on.

Integration of this kind of technology for databases, data warehouses, data lakes and business intelligence (BI) tools was referred to as a set of future frontiers that customers have been demanding, and machine learning tools were announced for Redshift and Neptune during the keynote. While the capabilities for Redshift make it possible to get predictions for data warehouses starting from a SQL query, ML for Neptune can make predictions for connected datasets without the need for prior experience in using the technology.

Brad Campbell, chief technologist in platform development at Cloudreach, said: "What stands out when I look at ML for Redshift is that what you have in Redshift, which you don't get in other data sources, is the true composite of your business's end-to-end value chain in one place.

"Typically when I've worked in Redshift, there was a lot of ETL work to be done, but with ML, this can really unlock value for people who have all this end-to-end value chain data coalesced in a data warehouse."
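For a concrete sense of the flow Campbell is describing, the sketch below submits a Redshift ML CREATE MODEL statement through the boto3 Redshift Data API; the cluster, table, and column names are invented for illustration:

```python
import boto3

# Redshift ML trains a model directly from a SQL query; once trained, the
# generated function can be called in ordinary SELECT statements.
sql = """
CREATE MODEL churn_model
FROM (SELECT age, tenure_months, monthly_charges, churned
      FROM customer_activity)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');
"""

client = boto3.client("redshift-data")
client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster
    Database="dev",
    DbUser="awsuser",
    Sql=sql,
)

# After training completes, predictions come straight from SQL:
#   SELECT customer_id, predict_churn(age, tenure_months, monthly_charges)
#   FROM customer_activity;
```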

Another recently launched tool, Amazon Quicksight ML, provides stories of data dashboards in natural language, cutting the time spent on gaining business intelligence information from days or weeks to seconds. The tool takes into consideration the different terms that various departments within an organisation may use, meaning that the tool can be used by any member of staff, regardless of the department they work in.

Kevin Davis, cloud strategist at Cloudreach, said: There is another push in this area to lower the bar of entry for ML consumption in the business space. There is a broadening of scope for people who can implement these services, and a lot of horizontal integration for ML capabilities, along with some deep vertical implementation capabilities.


Without considering problems that the business needs to solve, no project can be truly successful. According to Sivasubramanian, any good machine learning problem to focus on is rich in data, impacts the business, but can't be solved using traditional methods.

AI-powered tools such as Code Guru, DevOps Guru, Connect and Kendra from AWS allow staff to quickly solve business problems that arise within DevOps, call centres and intelligent search services, which can range from performance issues to customer complaints.

During the keynote, the launch of Amazon Lookout for Metrics was announced, which will allow developers to find anomalies within their machine learning models, with the tool ranking them according to severity. This ensures that models are working as they should be.

"The caveat I have around Lookout for Metrics is that it's clearly directed, and intended to look at the most common business insights," said Davis.

In terms of generally lowering the bar of entry, you can potentially put this in the hands of business analysts that are familiar enough with SQL queries, and allow them to directly pull insights or anomalies from business data stores.

For the healthcare sector, AWS also announced the launch of Amazon Healthlake, which provides an analysis of patient data that would otherwise be difficult to make conclusions on due to its usually unstructured nature.

Commenting on the release of Amazon Healthlake, Samir Luheshi, chief technologist in application modernisation at Cloudreach, said: "Healthlake stands out as very interesting. There are a lot of challenges around managing HIPAA and EU GDPR, and it's not an easy lift, so I'd be interested to see how extra layers can be applied to this to make it suitable for consumption in Europe."


Just as algorithms need to be trained so that tasks can be automated effectively, the final tenet of ML discussed by Sivasubramanian calls for companies that deploy machine learning to encourage their engineers to continuously learn new skills and technologies, if they aren't doing so already.

AWS has been looking to educate the next generation of builders through its own Machine Learning University, which offers solution-based machine learning training and certification, and where budding builders can learn from AWS practitioners. Learners can also develop skills specific to a particular job role, such as a cloud architect or cloud developer.

Furthermore, AWS DeepRacer, the cloud service provider's 3D racing simulator, allows developers of any skill level to learn the essentials of reinforcement learning, and submit models in an aim to win races. The decision making of models can be evaluated with the aid of a 1/18th scale car that's driven by machine learning.


Machine-learning, robotics and biology to deliver drug discovery of tomorrow – pharmaphorum

Biology 2.0: Combining machine-learning, robotics and biology to deliver drug discovery of tomorrow

Intelligent OMICS, Arctoris and Medicines Discovery Catapult test in silico pipeline for identifying new molecules for cancer treatment.

Medicines discovery innovators, Intelligent OMICS, supported by Arctoris and Medicines Discovery Catapult, are applying artificial intelligence to find new disease drivers and candidate drugs for lung cancer. This collaboration, backed by Innovate UK, will de-risk future R&D projects and also demonstrate new cost and time-saving approaches to drug discovery.

Analysing a broad set of existing biological information, previously hidden components of disease biology can be identified which in turn lead to the identification of new drugs for development. This provides the catalyst for an AI-driven acceleration in drug discovery and the team has just won a significant Innovate UK grant in order to prove that it works.

Intelligent OMICS, the company leading the project, use in silico (computer-based) tools to find alternative druggable targets. They have already completed a successful analysis of cellular signalling elsewhere in lung cancer pathways and are now selectively targeting the KRAS signalling pathway.

As Intelligent OMICS technology identifies novel biological mechanisms, Medicines Discovery Catapult will explore the appropriate chemical tools and leads that can be used against these new targets, and Arctoris will use their automated drug discovery platform in Oxford to conduct the biological assays which will validate them experimentally.

Working together, the group will provide druggable chemistry against the entire in silico pipeline, offering new benchmarks of cost and time effectiveness over conventional methods of discovery.

"Much has been written about the wonders of artificial intelligence and its potential in healthcare," says Dr Simon Haworth, CEO of Intelligent OMICS. "Our newsflows are full of details of AI applications in process automation, image analysis and computational chemistry. The DeepMind protein folding breakthrough has also hit the headlines recently as a further AI application. But what does Intelligent OMICS do that is different?

"By analysing transcriptomic and similar molecular data, our neural network algorithms re-model known pathways and identify new, important targets. This enables us to develop and own a broad stream of new drugs. Lung cancer is just the start; we have parallel programs running in many other areas of cancer, in infectious diseases, in auto-immune disease, in Alzheimer's and elsewhere.

"We have to thank Innovate UK for backing this important work. The independent validation of our methodology by the highly respected cheminformatics team at MDC, coupled with the extraordinarily rapid, wet lab validation provided by Arctoris, will finally prove that, in drug discovery, the era of AI has arrived."

Dr Martin-Immanuel Bittner, Chief Executive Officer of Arctoris commented:

"We are thrilled to combine our strengths in robotics-powered drug discovery assay development and execution with the expertise in machine learning that Intelligent OMICS and Medicines Discovery Catapult possess. This unique setup demonstrates the next stage in drug discovery evolution, which is based on high quality datasets and machine intelligence. Together, we will be able to rapidly identify and validate novel targets, leading to promising new drug discovery programmes that will ultimately benefit patients worldwide."

Prof. John P. Overington, Chief Informatics Officer at Medicines Discovery Catapult:

"Computational-based approaches allow us to explore a top-down approach to identifying novel biological mechanisms of disease, which critically can be validated by selecting the most appropriate chemical modulators and assessing their effects in cellular assay technologies.

"Working with Intelligent OMICS and with support from Arctoris, we are delighted to play our part in laying the groundwork for computer-augmented, automated drug discovery. Should these methods indeed prove fruitful, it will be transformative for both our industry and patients alike."

If this validation is successful, the partners will have established a unique pipeline of promising new targets and compounds for a specific pathway in lung cancer. But more than that, they will also have validated an entirely new drug discovery approach which can then be further scaled to other pathways and diseases.


How This CEO is Using Synthetic Data to Reshape Machine Learning for Real-World Applications – Yahoo Finance

Artificial Intelligence (AI) and Machine Learning (ML) are certainly not new industries. As early as the 1950s, the term machine learning was introduced by IBM AI pioneer Arthur Samuel. It has been in recent years wherein AI and ML have seen significant growth. IDC, for one, estimates the market for AI to be valued at $156.5 billion in 2020 with a 12.3 percent growth over 2019. Even amid global economic uncertainties, this market is set to grow to $300 billion by 2024, a compound annual growth of 17.1 percent.

There are challenges to be overcome, however, as AI becomes increasingly interwoven into real-world applications and industries. While AI has seen meaningful use in behavioral analysis and marketing, for instance, it is also seeing growth in many business processes.

"The role of AI Applications in enterprises is rapidly evolving. It is transforming how your customers buy, your suppliers deliver, and your competitors compete. AI applications continue to be at the forefront of digital transformation (DX) initiatives, driving both innovation and improvement to business operations," said Ritu Jyoti, program vice president, Artificial Intelligence Research at IDC.

Even with the increasing utilization of sensors and internet-of-things, there is only so much that machines can learn from real-world environments. The limitations come in the form of cost and replicable scenarios. Here's where synthetic data will play a big part.


"We need to teach algorithms what it is exactly that we want them to look for, and that's where ML comes in. Without getting too technical, algorithms need a training process, where they go through incredible amounts of annotated data, data that has been marked with different identifiers. And this is, finally, where synthetic data comes in," says Dor Herman, Co-Founder and Chief Executive Officer of OneView, a Tel Aviv-based startup that accelerates ML training with the use of synthetic data.


Herman says that real-world data can oftentimes be either inaccessible or too expensive to use for training AI. Thus, synthetic data can be generated with built-in annotations in order to accelerate the training process and make it more efficient. He cites four distinct advantages of using synthetic data over real-world data in ML: cost, scale, customization, and the ability to train AI to make decisions on scenarios that are not likely to occur in real-world scenarios.

"You can create synthetic data for everything, for any use case, which brings us to the most important advantage of synthetic data--its ability to provide training data for even the rarest occurrences that by their nature don't have real coverage."

Herman gives the example of oil spills, weapons launches, infrastructure damage, and other such catastrophic or rare events. "Synthetic data can provide the needed data, data that could have not been obtained in the real world," he says.

Herman cites a case study wherein a client needed AI to detect oil spills. Algorithms, remember, need a massive amount of data in order to learn what an oil spill looks like, and the company didn't have numerous instances of oil spills, nor did it have aerial images of them.

Since the oil company utilized aerial images for ongoing inspection of their pipelines, OneView applied synthetic data instead. "We created, from scratch, aerial-like images of oil spills according to their needs, meaning, in various weather conditions, from different angles and heights, different formations of spills--where everything is customized to the type of airplanes and cameras used."

This would have been an otherwise costly endeavor. "Without synthetic data, they would never be able to put algorithms on the detection mission and will need to continue using folks to go over hours and hours of detection flights every day."

With synthetic data, users can define the parameters for training AI, in order for better decision-making once real-world scenarios occur. The OneView platform can generate data customized to their needs. An example involves training computer vision to detect certain inputs based on sensor or visual data.

"You input your desired sensor, define the environment and conditions like weather, time of day, shooting angles and so on, add any objects-of-interest--and our platform generates your data: fully annotated, ready for machine learning model training datasets," says Herman.

Annotation also has advantages over real-world data, which often requires manual annotation that takes extensive time and cost to process. "The swift and automated process that produces hundreds of thousands of images replaces a manual, prolonged, cumbersome and error-prone process that hinders computer vision ML algorithms from racing forward," he adds.

OneView's synthetic data generation involves a six-layer process wherein 3D models are created using gaming engines and then flattened to create 2D images.

"We start with the layout of the scene so to speak, where the basic elements of the environment are laid out. The next step is the placement of objects-of-interest that are the goal of detection, the objects that the algorithms will be trained to discover. We also put in distractors, objects that are similar so the algorithms can learn how to differentiate the goal object from similar-looking objects. Then the appearance building stage follows, when colors, textures, random erosions, noises, and other detailed visual elements are added to mimic how real images look like, with all their imperfections," Herman shares.

The fourth step involves the application of conditions such as weather and time of day. For the fifth step, sensor parameters (the camera lens type) are implemented: "We adapt the entire image to look like it was taken by a specific remote sensing system, resolution-wise, and other unique technical attributes each system has." Lastly, annotations are added.

Annotations are the marks that are used to define to the algorithm what it is looking at. For example, the algorithm can be trained that this is a car, this is a truck, this is an airplane, and so on. The resulting synthetic datasets are ready for machine learning model training.
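A toy sketch of that layered recipe (purely illustrative; OneView has not published its internals) shows why the annotations come for free: the generator already knows every object, condition, and sensor parameter it placed, so it can emit the labels alongside the scene description:

```python
import json
import random

CLASSES = ["car", "truck", "airplane"]        # targets the model should detect
DISTRACTORS = ["container", "rooftop"]        # similar-looking non-targets

def make_scene(scene_id: int) -> dict:
    objects = []
    for i in range(random.randint(3, 8)):     # layers 1-2: layout and objects
        label = random.choice(CLASSES + DISTRACTORS)
        x, y = random.uniform(0, 1024), random.uniform(0, 1024)
        w, h = random.uniform(10, 60), random.uniform(10, 60)
        objects.append({
            "id": i,
            "label": label,
            "bbox": [x, y, w, h],             # known exactly, no manual labeling
            "is_target": label in CLASSES,
        })
    return {
        "scene_id": scene_id,
        "conditions": {                        # layer 4: weather and time of day
            "weather": random.choice(["clear", "haze", "rain"]),
            "time_of_day": random.choice(["dawn", "noon", "dusk"]),
        },
        "sensor": {"gsd_cm": random.choice([15, 30, 50])},  # layer 5: sensor model
        "annotations": [o for o in objects if o["is_target"]],  # layer 6
        "objects": objects,
    }

dataset = [make_scene(i) for i in range(1000)]
with open("synthetic_annotations.json", "w") as f:
    json.dump(dataset, f)
```

A real pipeline would render the scene through a gaming engine (layer 3, appearance) rather than emit a dictionary, but the principle is the same: the label is a by-product of generation, not a separate manual step.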

For Herman, the biggest contribution of synthetic data is actually paradoxical. By using synthetic data, AI and AI users get a better understanding of the real world and how it works--through machine learning. Image analytics comes with bottlenecks in processing, and computer vision algorithms cannot scale unless this bottleneck is overcome.

"Remote sensing data (imagery captured by satellites, airplanes and drones) provides a unique channel to uncover valuable insights on a very large scale for a wide spectrum of industries. In order to do that, you need computer vision AI as a way to study these vast amounts of data collected and return intelligence," Herman explains.

"Next, this intelligence is transformed to insights that help us better understand this planet we live on, and of course drive decision making, whether by governments or businesses. The massive growth in computing power enabled the flourishing of AI in recent years, but the collection and preparation of data for computer vision machine learning is the fundamental factor that holds back AI."

He circles back to how OneView intends to reshape machine learning: releasing this bottleneck with synthetic data so the full potential of remote sensing imagery analytics can be realized and thus a better understanding of earth emerges.

The main driver behind Artificial Intelligence and Machine Learning is, of course, business and economic value. Countries, enterprises, businesses, and other stakeholders benefit from the advantages that AI offers, in terms of decision-making, process improvement, and innovation.

"The big message OneView brings is that we enable a better understanding of our planet through the empowerment of computer vision," concludes Herman. "Synthetic data is not fake data. Rather, it is purpose-built inputs that enable faster, more efficient, more targeted, and cost-effective machine learning that will be responsive to the needs of real-world decision-making processes."


Unlock Insights From Business Documents With Revv’s Metalens, a Machine Learning Based Document Analyzer – Business Wire

PALO ALTO, Calif.--(BUSINESS WIRE)--Businesses run on documents, as documents help build connections. They cement relationships and enable trust and transparency between stakeholders. Documents bring certainty, continuity, and clarity. When it comes to reviewing documents, most intelligence platforms perceive documents only for their language content. A business document is not just written text; it's a record of information and data, from simple entities such as names or addresses to more nuanced ones such as notice periods or renewal dates, and this information is required to optimize workflows and processes. Revv recently added Metalens, an intelligent document analyzer that breaks this barrier and applies artificial intelligence to extract data and intent from business documents to scale up business processes.

Metalens allows users to extract relevant information and identify potential discussion points from any document (pdf or Docx) within Revv. This extracted data can be reused to set up workflows, feed downstream business apps with relevant information, and optimize business processes. Think itinerary processing, financial compliance, auditing, renewal follow-up, invoice processing, and so on, all identified and automated. The feature improves process automation, which is otherwise riddled with copy-pasting errors and other manual data entry bottlenecks.
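Revv has not published how Metalens works internally, but the flavor of the extraction step can be sketched with plain regular expressions over document text; the fields and patterns below are invented for illustration:

```python
import re

DOC = """This agreement renews on 2021-03-01. Either party may terminate
with a notice period of 30 days. Total fee: $12,500.00."""

# Hypothetical fields of the kind a document analyzer might pull out.
PATTERNS = {
    "renewal_date": r"renews on (\d{4}-\d{2}-\d{2})",
    "notice_period": r"notice period of (\d+ days)",
    "amount": r"\$[\d,]+(?:\.\d{2})?",
}

extracted = {}
for field, pattern in PATTERNS.items():
    match = re.search(pattern, DOC)
    if match:
        extracted[field] = match.group(1) if match.groups() else match.group(0)

print(extracted)
# {'renewal_date': '2021-03-01', 'notice_period': '30 days', 'amount': '$12,500.00'}
```

A production system would use trained models rather than hand-written patterns, but the output is the same in spirit: structured fields that downstream workflow tools can consume.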

Rishi Kulkarni, the co-founder, adds, "Revv's Metalens feature is fast, efficient, and a powerful element that sifts through the content and turns your documents into datasets. This unlocks new insights that allow our users to empower themselves and align their businesses for growth."

Metalens is another aspect of Revv's intelligence layer, used to understand document structure and to compare and review contracts against current industry standards. Businesses can identify their risk profile and footprint in half the time, with half the resources. It helps them get a grip on the intent of business documents and ensure business objectives are met.

With Metalens, users can extract structured data, flag potential discussion points, and feed that information into downstream workflows directly from their documents.

Excited about the new feature, Sameer Goel, co-founder, adds, "The impact of this intelligent layer is clear and immediate, as it is able to process complex documents with legalese and endless text that's easy to miss. It can process unstructured and structured document data even when dataset formats and locations change over time. This machine learning approach provides users with an alternative solution that allows them to circumvent their dependence on intimately knowing the document to extract information from it."

Revv's new Metalens feature gives its users the speed and flexibility to generate meaningful insights and accelerate business outcomes by putting machine learning front and center. It quickens the review process and makes negotiation smoother. It brings transparency that helps reduce errors and lets users save time and effort.

Metalens is part of Revv's larger offering designed to simplify business paperwork. Revv is an all-in-one document platform that brings together the power of eSignature, an exhaustive template library, a drag-n-drop editor, payments and Gsheet integrations, and API connections. Specially designed for owner-operators, consultants, agencies, and service providers who want a simple no-code tool to manage their business paperwork, Revv gives them the ability to draft, edit, share online, eSign, collect payments, and centrally store documents with one tool.

About Revv:

Backed by Lightspeed, Matrix Partners, and Arka Ventures, Revv was founded by Freshworks alumni Rishi Kulkarni and Sameer Goel in 2018. With operations in Silicon Valley and Bangalore, India, Revv is designed as a document management system for entrepreneurs. More than 3,000 businesses already trust the platform, and Revv is poised for even greater growth with features like attaching supporting media/doc files, multi-language support, bulk creation of documents, and even user groups.

Continued here:
Unlock Insights From Business Documents With Revv's Metalens, a Machine Learning Based Document Analyzer - Business Wire

Embedded AI and Machine Learning Adding New Advancements In Tech Space – Analytics Insight

In recent years, as sensor and MCU costs have plunged and shipped volumes have soared, a growing number of organizations have tried to capitalize by adding sensor-driven embedded AI to their products.

Automotive is driving the trend: the average non-autonomous vehicle now has around 100 sensors, sending information to 30-50 microcontrollers that run about 1 million lines of code and generate 1TB of data per vehicle every day. Luxury vehicles may have twice as many sensors, and autonomous vehicles increase the sensor count far more dramatically.

Yet it's not simply an automotive trend. Industrial equipment is becoming progressively smarter as makers of rotating, reciprocating and other types of machinery rush to add functionality for condition monitoring and predictive maintenance, and a huge number of new consumer products, from toothbrushes to vacuum cleaners to fitness monitors, add instrumentation and smarts.

More and more smart devices are introduced each month. We are now at a point where artificial intelligence and machine learning, in their most basic form, have found their way into the core of embedded devices. Consider smart home lighting systems that automatically turn on and off depending on whether anybody is present in the room. At first glance, the system doesn't look especially sophisticated. But on reflection, you realize the system is actually making decisions on its own: based on the input from the sensor, the microcontroller/SoC decides whether or not to turn on the light.

Doing all of this simultaneously, overcoming variation to achieve difficult detections in real time, at the edge, within tight constraints, is not at all simple. But with current tools that integrate new options for machine learning on signals (like Reality AI), it is getting simpler.

These tools can often achieve detections that escape traditional engineering models, by making far more efficient and effective use of data to overcome variation. Where traditional engineering approaches are typically built on a physical model, using data to estimate parameters, machine learning approaches can learn independently of those models. They learn to recognize signatures directly from the raw data and use the mechanics of machine learning (mathematics) to separate targets from non-targets without depending on physics.
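
The following sketch illustrates that data-driven approach under stated assumptions: synthetic vibration windows stand in for real sensor data, FFT magnitudes of the raw signal serve as features, and a stock classifier learns the fault signature with no physical model of the machine.

```python
# Synthetic vibration data stands in for a real sensor; the classifier
# learns the fault signature from raw windows, with no physical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window(fault, n=256):
    t = np.arange(n) / n
    base = np.sin(2 * np.pi * 10 * t)             # normal rotation
    if fault:
        base += 0.5 * np.sin(2 * np.pi * 47 * t)  # bearing-fault harmonic
    return base + rng.normal(scale=0.3, size=n)   # measurement noise

# Features: FFT magnitudes of each raw window. Labels: fault or not.
X = np.array([np.abs(np.fft.rfft(window(fault=i % 2))) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```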

There are many other areas where the convergence of machine learning and embedded systems will create great opportunities. Healthcare, for example, is already reaping the rewards of investing in AI technology. The Internet of Things (IoT) will likewise profit enormously from the introduction of artificial intelligence. We will have smart automation solutions that deliver energy savings and cost efficiency as well as the elimination of human error.

Forecasting is at the center of many ML/AI conversations as organizations hope to use neural networks and deep learning to forecast time-series data. The value is the ability to ingest data and quickly gain insight into how it changes the long-term outlook. Much of the picture also depends on the global supply chain, which makes improvements significantly harder to project precisely.

Some of the most dangerous jobs on production lines are already being handled by machines. Thanks to advances in embedded electronics and industrial automation, powerful microcontrollers now run entire production systems in manufacturing plants. Most of these machines, however, are not fully automatic and still require some form of human intervention. But the time will come when machine learning helps engineers build truly intelligent machines that can work with zero human intervention.

Read more:
Embedded AI and Machine Learning Adding New Advancements In Tech Space - Analytics Insight

AI: This COVID machine-learning tool helps swamped hospitals pick the right treatment – ZDNet

Spain has been one of the European states worst hit by the COVID-19 pandemic, with more than 1.7 million detected cases. Despite the second wave of infections that has hit the country over the past few months, the Hospital Clínic in Barcelona has succeeded in halving mortality among its coronavirus patients using artificial intelligence.

The Catalan hospital has developed a machine-learning tool that can predict when a COVID patient will deteriorate and how to customize that individual's treatment to avoid the worst outcome.

"When you have a sole patient who's in a critical state, you can take special care of them. But when they are 700 of them, you need this kind of tool," says Carol Garcia-Vidal, a physician specialized in infectious diseases and IDIBAPS researcher who has led the development of the tool.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

Before the pandemic, the hospital had already been working on software to turn variable data into an analyzable form. So when the hospital started to receive COVID patients in March, it put the system to work analyzing three trillion pieces of structured and anonymized data from 2,000 patients.

The goal was to train it to recognize patterns and check what treatments were the most effective for each patient and when they should be administered.

That work underlined to García-Vidal and her team that the virus doesn't manifest itself in the same way in everyone. "There are patients with an inflammatory response, patients with coagulopathies and patients who develop superinfections," García-Vidal tells ZDNet. Each group needs different drugs and thus a personalized treatment.
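
The hospital's actual model has not been published, but the pattern-discovery idea can be sketched generically: cluster patients by routine lab markers so that groups like "inflammatory response" or "coagulopathy" emerge from the data. All feature names and values below are hypothetical.

```python
# Hypothetical lab markers; k-means recovers patient phenotypes like
# those described in the article. Not the hospital's actual model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Columns: C-reactive protein, D-dimer, ferritin (illustrative values).
inflammatory = rng.normal([150, 500, 900], [30, 100, 150], size=(40, 3))
coagulopathy = rng.normal([40, 3000, 400], [15, 400, 100], size=(40, 3))
mild = rng.normal([10, 300, 200], [5, 80, 60], size=(40, 3))
X = np.vstack([inflammatory, coagulopathy, mild])

# Standardize so no single marker dominates, then look for 3 phenotypes.
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(
    StandardScaler().fit_transform(X)
)
print(np.bincount(labels))  # roughly 40/40/40 if the groups separate
```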

Thanks to an EIT Health grant, the AI system has been developed into a real-time dashboard display on physicians' computers that has become one of their everyday tools. Under the supervision of an epidemiologist, the tool enables patients to be classified and offered a more personalized treatment.

"Nobody has done this before," says Garca-Vidal, who says the researchers recently added two more patterns to the system to include the patients who are stable and can leave the hospital, thus freeing a bed, and those patients who are more likely to die. The predictions are 90% accurate.

"It's very useful for physicians with less experience and those who have a specialty that's nothing to do with COVID, such as gynecologists or traumatologists," she says. As in many countries, doctors from all specialist areas were called in to treat patients during the first wave of the pandemic.

The system is also being used during the current second wave because, according to García-Vidal, the number of patients in intensive care in Catalan hospitals has jumped. The plan is to make the tool available to other hospitals.

Meanwhile, the Barcelona Supercomputing Center (BSC) is also analyzing a set of data corresponding to 3,000 medical cases generated by the Hospital Clínic during the acute phase of the pandemic in March.

The aim is to develop a model based on deep-learning neural networks that will look for common patterns and generate predictions on the evolution of symptoms. The objective is to know whether a patient is likely to need a ventilator system or be directly sent to intensive care.

SEE: The algorithms are watching us, but who is watching the algorithms?

Some data, such as age, sex, vital signs and medication given, is structured, but other data isn't, because it consists of text written in natural language in the form of, for example, hospital discharge and radiology reports, BSC researcher Marta Villegas explains.

Supercomputing brings the computational capacity and power to extract essential information from these reports and train models based on neural networks to predict the evolution of the disease as well as the response to treatments given the previous conditions of the patients.

This approach, based on natural language processing, is also being tested at a hospital in Madrid.

Go here to see the original:
AI: This COVID machine-learning tool helps swamped hospitals pick the right treatment - ZDNet

Hateful Memes Challenge Winners Machine Learning Times – The Predictive Analytics Times

By: Douwe Kiela, Hamed Firooz and Tony Nelli. Originally published in Facebook AI, Dec 11, 2020.

AI has made progress in detecting hate speech, but important and difficult technical challenges remain. Back in May 2020, Facebook AI partnered with Getty Images and DrivenData to launch the Hateful Memes Challenge, a first-of-its-kind $100K competition and data set to accelerate research on the problem of detecting hate speech that combines images and text. As part of the challenge, Facebook AI created a unique data set of 10,000+ new multimodal examples, using licensed images from Getty Images so that researchers could easily use them in their work.

More than 3,300 participants from around the world entered the Hateful Memes Challenge, and we are now sharing details on the winning entries. The top-performing teams were:

Ron Zhu link to code

Niklas Muennighoff link to code

Team HateDetectron: Riza Velioglu and Jewgeni Rose link to code

Team Kingsterdam: Phillip Lippe, Nithin Holla, Shantanu Chandra, Santhosh Rajamanickam, Georgios Antoniou, Ekaterina Shutova and Helen Yannakoudakis link to code

Vlad Sandulescu link to code

You can see the full leaderboard here. As part of the NeurIPS 2020 competition track, the top five winners will discuss their solutions in a Q&A with participants from around the world. Each of these five implementations has been made open source and is available now.

To continue reading this article, click here.

View post:
Hateful Memes Challenge Winners Machine Learning Times - The Predictive Analytics Times

Google’s Blob Opera combines machine learning with animated operatics – Newstalk ZB

With school out for the year and summer break underway, many families will be looking for something fun to do over the next few weeks.

Google's latest machine-learning game may be one way to pass the time, thanks to Blob Opera.

Four actual opera singers, Christian Joel (tenor), Frederick Tong (bass), Joanna Gamble (mezzo-soprano), and Olivia Doutney (soprano), recorded 16 hours of singing, and their voices were used to train a machine learning model of what opera sounds like mathematically.

The model was then combined with four very cute blob characters, which represent the different opera voice types, and you can move them around to make them sing different notes. The algorithm then works its magic and calculates how the other three blobs should sing to harmonise perfectly with your blob, allowing you to compose opera of your own without having to sing a note!
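
Google has not published Blob Opera's model, but the harmonization idea can be shown in its simplest possible form: given the note you assign one blob, derive chord tones for the other three voices. The toy sketch below hard-codes a major-triad voicing where the real system learned its harmonies from the singers' recordings.

```python
# A toy harmonizer: hard-coded major-triad voicing, where Blob Opera's
# real model learned harmonies from the singers' recordings.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def harmonize(soprano_midi):
    """Return four MIDI pitches: soprano, alto, tenor, bass."""
    return [
        soprano_midi,       # soprano: the note you chose
        soprano_midi - 5,   # alto: perfect fourth below (fifth of the triad)
        soprano_midi - 8,   # tenor: minor sixth below (third of the triad)
        soprano_midi - 12,  # bass: the root, an octave down
    ]

chord = harmonize(72)  # drag the soprano blob to C5 (MIDI 72)
print([NOTE_NAMES[p % 12] + str(p // 12 - 1) for p in chord])
# ['C5', 'G4', 'E4', 'C4'] -- a C-major voicing
```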

Michelle Dickinson joined Francesca Rudkin to explain what this means.

View post:
Google's Blob Opera combines machine learning with animated operatics - Newstalk ZB

Machine learning in human resources: how it works & its real-world applications – iTMunch

According to research conducted by Glassdoor, the entire interview process at companies in the United States takes about 22.9 days on average, while in Germany, France and the UK it takes 4-9 days longer [1]. Another study, by the Society for Human Resource Management, which examined data from more than 275,000 members in 160 countries, found that the average time taken to fill a position is 42 days [2]. Clearly, hiring is a time-consuming and tedious process. Groundbreaking technologies like cloud computing, big data, augmented reality, virtual reality, blockchain technology and the Internet of Things can play a key role in making this process move faster. Machine learning in human resources is one such technology that has made the recruitment process not just faster but more effective.

Machine learning (ML) is treated as a subset of artificial intelligence (AI). AI is a branch of computer science that deals with building smart machines capable of performing tasks that typically require human intelligence. Machine learning, by definition, is the study of algorithms that improve automatically over time with more data and experience. It is the science of getting machines (computers) to learn how to think and act like humans. To improve a machine learning algorithm, data is fed into it over time in the form of observations and real-world interactions. ML algorithms build models based on sample or training data to make predictions and decisions without being explicitly programmed to do so.

Machine learning in itself is not a new technology, but its integration with the HR function of organizations has been gradual and has only recently started to have an impact. In this blog, we talk about how machine learning has contributed to making HR processes easier, how it works and what its real-world applications are. Let us begin by learning about this concept in brief.

The HR department's recruitment responsibilities used to be gathering and screening resumes, reaching out to candidates that fit the job description, lining up interviews and sending offer letters. They also included managing a new employee's onboarding process and taking care of the exit process of an employee who decides to leave. Today, the human resource department is about all of this and much more. The department is now also expected to be able to predict employee attrition and candidate success, and this is possible through AI and machine learning in HR.

The objective behind integrating machine learning into human resource processes is the identification and automation of repetitive, time-consuming tasks to free up the HR staff. By automating these processes, they can devote more time and resources to other imperative strategic projects and to actual human interactions with prospective employees. ML is capable of efficiently handling the HR roles, tasks and functions described below.

SEE ALSO: The Role of AI and Machine Learning in Affiliate Marketing

An HR professional keeps track of who saw a job posting and the job portal on which the applicant saw it. They collect the CVs and resumes of all the applicants and come up with a way to categorize the data in those documents. Additionally, they schedule, standardize and streamline the entire interview process. They also keep track of the social media activities of applicants, along with other relevant data. All of this data collected by the HR professional is fed into machine learning HR software from day one. Soon enough, the software's HR analytics begin analyzing the data to discover and display insights and patterns.

The opportunities for learning through the insights provided by machine learning in HR are endless. The software helps HR professionals discover things like which interviewer is better at identifying the right candidate and which job portal or job posting attracts more, or higher-quality, applicants.

With HR analytics and machine learning, fine-tuning and personalization of training are possible, which makes the training experience more relevant to the newly hired employee. It helps in identifying knowledge gaps or loopholes in training early on. It can also become a useful resource for company-related FAQs and information like company policies, code of conduct, benefits and conflict resolution.

The best way to better understand how machine learning has made HR processes more efficient is by getting acquainted with the real world applications of this technology. Let us have a look at some applications below.

SEE ALSO: The Importance of Human Resources Analytics

Scheduling is generally a time-consuming task. It includes coordinating with candidates and scheduling interviews, enhancing the onboarding experience, calling candidates for follow-ups, performance reviews, training, testing and answering common HR queries. Automating these tedious processes is one of the first applications of machine learning in human resources. ML takes the burden of these cumbersome tasks away from the HR staff by streamlining and automating them, which frees up staff time to focus on bigger issues at hand. A few of the best recruitment scheduling tools are Beamery, Yello and Avature.

Once an HR professional is informed about the kind of talent a company needs to hire, one challenge is getting this information out and attracting the right set of candidates for the role. A huge number of companies trust ML for this task. Renowned job search platforms like LinkedIn and Glassdoor use machine learning and intelligent algorithms to help HR professionals filter and find the most suitable candidates for the job.

Machine learning in human resources is also used to track new and potential applicants as they come into the system. A study conducted by Capterra looked at how the use of recruitment or applicant tracking software helped recruiters. It found that 75% of the recruiters they contacted used some form of recruitment or applicant tracking software, with 94% agreeing that it improved their hiring process. It further found that just 5% of recruiters thought that using applicant tracking software had a negative impact on their company [3].

Using such software also gives the HR professional access to predictive analytics which helps them analyze if the person would be best suitable for the job and a good fit for the company. Some of the best applicant tracking software that are available in the market are Pinpoint, Greenhouse and ClearCompany.

If hiring an employee is difficult, retaining an employee is even more challenging. There are factors in a company that make an employee stay or move on to their next job. A study conducted by Gallup asked employees from different organizations whether they'd leave or stay if certain perks were provided. The study found that 37% would quit their present job for a new one that allowed them to work remotely part-time, 54% would switch for monetary bonuses, 51% for flexible working hours and 51% for employers offering retirement plans with pensions [4]. Though employee retention depends on various factors, it is imperative for an HR professional to understand, manage and predict employee attrition.

Machine learning HR tools provide valuable data and insights into the above-mentioned factors and help HR professionals make more efficient decisions about employing someone (or not). By understanding this data about employee turnover, they are in a better position to take corrective measures well in advance to eliminate or minimize the issues.
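
As a concrete illustration of the kind of attrition model described here, consider the following sketch. The features, data, and thresholds are entirely hypothetical; commercial HR platforms draw on far richer signals such as engagement surveys and compensation history.

```python
# Hypothetical attrition model: features and labels are synthetic;
# real HR platforms use far richer signals.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500

# Invented features: tenure (years), salary percentile, weekly overtime hours.
X = np.column_stack([
    rng.uniform(0, 15, n),
    rng.uniform(0, 100, n),
    rng.uniform(0, 20, n),
])
# Toy ground truth: risk rises with overtime and falls with salary.
risk = 0.04 * X[:, 2] - 0.01 * X[:, 1] + rng.normal(scale=0.3, size=n)
y = (risk > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)
model = GradientBoostingClassifier(random_state=7).fit(X_tr, y_tr)

# HR gets a ranked list of at-risk employees to act on early.
p = model.predict_proba(X_te)[:, 1]
print("most at-risk (test-set indices):", np.argsort(p)[::-1][:5])
```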

An engaged employee is one who is involved in, committed to and enthusiastic about their work and workplace. The State of the Global Workplace report by Gallup found that 85% of employees are disengaged. Translation: the majority of the workforce views their workplace negatively or only does the bare minimum to get through the day, with little to no attachment to their work or workplace. The study further addresses why employee engagement is necessary: it found that offices with more engaged employees see 10% higher customer metrics, 17% higher productivity, 20% more sales and 21% more profitability. Moreover, it found that highly engaged workplaces saw 41% less absenteeism [5].

Machine learning HR software helps the human resource department make employees more engaged. The insights provided by machine learning HR analytics help the HR team significantly in increasing employee productivity and reducing employee turnover. Software from Workometry and Glint aids immeasurably in measuring, analyzing and reporting on employee engagement and how employees generally feel about their work.

The applications of machine learning in human resources described above are already in use by HR professionals across the globe. Though the human element of human resources won't completely disappear, machine learning can substantially guide and assist HR professionals in ensuring the department's various functions are well aligned and that day-to-day strategic decisions are more accurate.

These are definitely exciting times for the HR industry and it is crucial that those working in this department are aware of the existing cutting-edge solutions available and the new trends that continue to develop.

The automation of HR functions like hiring and recruitment, training, development and retention has already had a profound positive effect on companies. Companies that refuse or are slow to adapt and adopt machine learning and other new technologies will find themselves at a competitive disadvantage, while those that embrace them will flourish.

SEE ALSO: Future of Human Resource Management: HR Tech Trends of 2019

For more updates and latest tech news, keep reading iTMunch

Sources

[1] Glassdoor (2015) Why is Hiring Taking Longer, New Insights from Glassdoor Data [Online] Available from: https://www.glassdoor.com/research/app/uploads/sites/2/2015/06/GD_Report_3-2.pdf [Accessed December 2020]

[2] Society for Human Resource Management (2016) 2016 Human Capital Benchmarking Report [Online] Available from: https://www.ebiinc.com/wp-content/uploads/attachments/2016-Human-Capital-Report.pdf [Accessed December 2020]

[3] Capterra (2015) Recruiting Software Impact Report [Online] Available from: https://www.capterra.com/recruiting-software/impact-of-recruiting-software-on-businesses [Accessed December 2020]

[4] Gallup (2017) State of the American Workplace Report [Online] Available from: https://www.gallup.com/workplace/238085/state-american-workplace-report-2017.aspx [Accessed December 2020]

[5] Gallup (2017) State of the Global Workplace [Online] Available from: https://www.gallup.com/workplace/238079/state-global-workplace-2017.aspx#formheader [Accessed December 2020]

Read this article:
Machine learning in human resources: how it works & its real-world applications - iTMunch

Supporting Content Decision Makers With Machine Learning Machine Learning Times – The Predictive Analytics Times

By: Melody Dye, Chaitanya Ekanadham, Avneesh Saluja, Ashish Rastogi. Originally published in The Netflix Tech Blog, Dec 10, 2020.

Netflix is pioneering content creation at an unprecedented scale. Our catalog of thousands of films and series caters to 195M+ members in over 190 countries who span a broad and diverse range of tastes. Content, marketing, and studio production executives make the key decisions that aspire to maximize each series or film's potential to bring joy to our subscribers as it progresses from pitch to play on our service. Our job is to support them.

The commissioning of a series or film, which we refer to as a title, is a creative decision. Executives consider many factors, including narrative quality, relation to the current societal context or zeitgeist, creative talent relationships, and audience composition and size, to name a few. The stakes are high (content is expensive!) as is the uncertainty of the outcome (it is difficult to predict which shows or films will become hits). To mitigate this uncertainty, executives throughout the entertainment industry have always consulted historical data to help characterize the potential audience of a title using comparable titles, if they exist. Two key questions arise in this endeavor.

The increasing vastness and diversity of what our members are watching make answering these questions particularly challenging using conventional methods, which draw on a limited set of comparable titles and their respective performance metrics (e.g., box office, Nielsen ratings). This challenge is also an opportunity. In this post we explore how machine learning and statistical modeling can aid creative decision makers in tackling these questions at a global scale. The key advantage of these techniques is twofold. First, they draw on a much wider range of historical titles (spanning global as well as niche audiences). Second, they leverage each historical title more effectively by isolating the components (e.g., thematic elements) that are relevant for the title in question.
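
The post stops short of implementation detail, but the "comparable titles" idea it describes can be sketched simply: represent each historical title as a vector of thematic components and rank comparables by similarity, rather than relying on a handful of hand-picked precedents. The titles, tags, and weights below are invented for illustration.

```python
# Invented titles, tags and weights: rank historical titles by cosine
# similarity to a new pitch's thematic profile.
import numpy as np

TAGS = ["heist", "period-drama", "ensemble", "dark-comedy", "romance"]

catalog = {
    "Title A": np.array([1.0, 0.0, 0.8, 0.3, 0.0]),
    "Title B": np.array([0.9, 0.1, 0.7, 0.0, 0.1]),
    "Title C": np.array([0.0, 1.0, 0.2, 0.0, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

pitch = np.array([1.0, 0.0, 0.6, 0.5, 0.0])  # thematic profile of a new pitch

ranked = sorted(catalog, key=lambda t: cosine(pitch, catalog[t]), reverse=True)
print(ranked)  # most comparable historical titles first
```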

To continue reading this article, click here.

Read more:
Supporting Content Decision Makers With Machine Learning Machine Learning Times - The Predictive Analytics Times

4 tips to upgrade your programmatic advertising with Machine Learning – Customer Think

Lomit Patel, VP of growth at IMVU and best-selling author of Lean AI, shares lessons learned and practical advice for app marketers to unlock open budgets and sustainable growth with machine learning.

The first step in the automation journey is to identify where you and your team stand. In his book Lean AI: How Innovative Startups Use Artificial Intelligence to Grow, Lomit introduces the Lean AI Autonomy Scale, which ranks companies from 0 to 5 based on their level of AI & automation adoption.

A lot of companies aren't fully relying on AI and automation to power their growth strategies. In fact, on a Lean AI Autonomy Scale from 0 to 5, most companies are at stage 2 or 3, where they rely on the AI of some of their partners without fully garnering the potential of these tools.

Here's how app marketers can start working their way up to level 5:

Put your performance strategy to the test by setting the right indicators. Marketers' KPIs should be geared towards measuring growth. Identify the metrics that show what's driving more quality user conversions and revenue, such as:

Analyzing data is a critical step towards measuring success through the right KPIs. When getting data ready to be automated and processed with AI, marketers should make sure:

The better the data, the more effective the decisions it will allow you to make. By aggregating data, marketers gain a comprehensive view of their efforts, which in turn leads to a better understanding of success metrics.

"You've got to make sure that you're giving them [partners] the right data so that their algorithms can optimize towards your outcomes and clearly define what success is." - Lomit Patel.

The role of AI is not to replace jobs or people, but to replace tasks that people do, letting them focus on the things they are good at.

With Lean AI, the machine does a lot of the heavy lifting, allowing marketers to process data and surface insights in a way that wasn't possible before, and with more data, the accuracy rate continues to go up.

It can be used to:

"With our AI machine, we're constantly testing different audiences, creatives, bids, budgets, and moving all of those different dials. On average, we're generally running about ten thousand experiments at scale. A majority of those are based on creatives; it's become a much bigger lever for us." - Lomit Patel.

There's a reason why growth partners have been around for a long time. For a lot of companies, the hassle of taking all marketing operations in-house doesn't make sense. At first, building a huge in-house data science team might seem like a great way to start leveraging AI, but it is rarely that simple.

Performance partners bring experience from working with multiple players across a number of verticals, making it easier to identify and implement the most effective automation strategy for each marketer. Their knowledge about industry benchmarks and best practices goes a long way in helping marketers outscore their competitors.

Last but not least, once you find the right partners, set them up for success by sharing the right data.

These recommendations are the takeaways from the first episode of App Marketers Unplugged. Created by Jampp, this video podcast series connects industry leaders and influencers to discuss challenges and trends with their peers.

Watch the full App Marketers Unplugged session with Lomit Patel to learn more about how Lean AI can help you gain users insights more efficiently and what marketers need to sail through the automation journey.

Read more here:
4 tips to upgrade your programmatic advertising with Machine Learning - Customer Think

What is machine learning? Here’s what you need to know – Business Insider – Business Insider

Machine learning is a fast-growing and successful branch of artificial intelligence. In essence, machine learning is the process of allowing a computer system to teach itself how to perform complex tasks by analyzing large sets of data, rather than being explicitly programmed with a particular algorithm or solution.

In this way, machine learning enables a computer to learn how to perform a task on its own and to continue to optimize its approach over time, without direct human input.

In other words, it's the computer that is creating the algorithm, not the programmers, and often these algorithms are sufficiently complicated that programmers can't explain how the computer is solving the problem. Humans can't trace the computer's logic from beginning to end; they can only determine if it's finding the right solution to the assigned problem, which is output as a "prediction."

There are several different approaches to training expert systems that rely on machine learning, specifically "deep" learning that functions through the processing of computational nodes. Here are the most common forms:

Supervised learning is a model in which computers are given data that has already been structured by humans. For example, computers can learn from databases and spreadsheets in which the data has already been organized, such as financial data or geographic observations recorded by satellites.

Unsupervised learning uses databases that are mostly or entirely unstructured. This is common in situations where the data is collected in a way that humans can't easily organize or structure it. A common example of unsupervised learning is spam detection, in which a computer is given access to enormous quantities of emails and learns on its own to distinguish between wanted and unwanted mail.

Reinforcement learning is when humans monitor the output of the computer system and help guide it toward the optimal solution through trial and error. One way to visualize reinforcement learning is to view the algorithm as being "rewarded" for achieving the best outcome, which helps it determine how to interpret its data more accurately.
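
To make the supervised case concrete, here is a minimal sketch using scikit-learn's bundled iris dataset: human-labeled rows go in, and out comes a model that predicts labels, the "prediction" described above, for rows it has never seen.

```python
# Supervised learning in miniature: human-labeled rows in, predictions
# for unseen rows out, with no hand-written classification rules.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("test accuracy:", model.score(X_te, y_te))
print("predictions for unseen rows:", model.predict(X_te[:5]))
```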

The field of machine learning is very active right now, with many common applications in business, academia, and industry. Here are a few representative examples:

Recommendation engines use machine learning to learn from previous choices people have made. For example, machine learning is commonly used in software like video streaming services to suggest movies or TV shows that users might want to watch based on previous viewing choices, as well as "you might also like" recommendations on retail sites.

Banks and insurance companies rely on machine learning to detect and prevent fraud through subtle signals of strange behavior and unexpected transactions. Traditional methods for flagging suspicious activity are usually very rigid and rules-based, which can miss new and unexpected patterns, while also overwhelming investigators with false positives. Machine learning algorithms can be trained with real-world fraud data, allowing the system to classify suspicious fraud cases far more accurately.
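
As a hedged illustration of that approach, the sketch below trains a classifier on labeled transactions where fraud is rare, using class weighting so the rare class is not drowned out by legitimate activity. The transaction features and labels are synthetic stand-ins for real fraud data.

```python
# Synthetic stand-in for fraud data: ~2% of transactions are fraudulent,
# and class weighting keeps the rare class from being drowned out.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000

# Invented features: log amount, hour of day, merchant risk score.
X = np.column_stack([
    rng.normal(3, 1, n),
    rng.uniform(0, 24, n),
    rng.beta(2, 5, n),
])
# Toy labels skewed toward odd hours and risky merchants.
score = 0.3 * X[:, 2] + 0.02 * np.abs(X[:, 1] - 3) + rng.normal(0, 0.1, n)
y = (score > np.quantile(score, 0.98)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)
clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)

# Rank test transactions by fraud probability for investigators.
flagged = np.argsort(clf.predict_proba(X_te)[:, 1])[::-1][:10]
print("review first:", flagged)
```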

Inventory optimization, a part of the retail workflow, is increasingly performed by systems trained with machine learning. Machine learning systems can analyze vast quantities of sales and inventory data to find patterns that elude human inventory planners, and can produce more accurate probability forecasts of customer demand.

Machine automation increasingly relies on machine learning. For example, self-driving car technology is deeply indebted to machine learning algorithms for the ability to detect objects on the road, classify those objects, and make accurate predictions about their potential movement and behavior.

View post:
What is machine learning? Here's what you need to know - Business Insider - Business Insider

U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations – BroadbandBreakfast.com

December 11, 2020 - In today's digital environment, winning wars requires more than boots on the ground. It also requires computer algorithms and artificial intelligence.

The United States Special Operations Command is currently playing a critical role in advancing the employment of AI and machine learning in the fight against the country's current and future adversaries, through Project Maven.

To discuss the initiatives taking place as part of the project, General Richard Clarke, who currently serves as the Commander of USSOCOM, and Richard Shultz, who has served as a security consultant to various U.S. government agencies since the mid-1980s, joined the Hudson Institute for a virtual discussion on Monday.

Among other objectives, Project Maven aims to develop and integrate the computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that the Department of Defense collects every day in support of counterinsurgency and counterterrorism operations, according to Clarke.

When troops carry out militarized site exploration, or military raids, they bring back copious amounts of computers, papers, and hard drives filled with potential evidence. In order to manage enormous quantities of information in real time to achieve strategic objectives, the Algorithmic Warfare Cross-Function task force, launched in April 2017, began utilizing AI to help.

"We had to find a way to put all of this data into a common database," said Clarke. Over the last few years, humans were tasked with sorting through this content, watching every video and reading every detainee report. "A human cannot sort and sift through this data quickly and deeply enough," he said.

AI and machine learning have demonstrated that algorithmic warfare can aid military operations.

"Project Maven initiatives helped increase the frequency of raid operations from 20 raids a month to 300 raids a month," said Shultz. AI technology increases both the number of decisions that can be made and their scale. "Faster, more effective decisions on your part are going to give enemies more issues."

Project Maven initiatives have also increased the accuracy of bomb targeting. "Instead of hundreds of people working on these initiatives, today it is tens of people," said Clarke.

AI has also been used to counter adversary propaganda. "I now spend over 70 percent of my time in the information environment. If we don't influence a population first, ISIS will get information out more quickly," said Clarke.

AI and machine learning tools enable USSOCOM to understand what an enemy is sending and receiving, which narratives are false, which accounts are bots, and more; detecting these allows decision makers to make faster and more accurate calls.

Military use of machine learning for precision raids and bomb strikes naturally raises concerns. In 2018, more than 3,000 Google employees signed a petition in protest against the company's involvement with Project Maven.

In an open letter addressed to CEO Sundar Pichai, Google employees expressed concern that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. "We believe that Google should not be in the business of war," the letter read.

Go here to read the rest:
U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations - BroadbandBreakfast.com

Information gathering: A WebEx talk on machine learning – Santa Fe New Mexican

We're long past the point of questioning whether machines can learn. The question now is how do they learn? Machine learning, a subset of artificial intelligence, is the study of computer algorithms that improve automatically through experience; that means a machine can learn independently of human programming. Los Alamos National Laboratory staff scientist Nga Thi Thuy Nguyen-Fotiadis is an expert on machine learning, and at 5:30 p.m. on Monday, Dec. 14, she hosts the virtual presentation Deep focus: Techniques for image recognition in machine learning, as part of the Bradbury Science Museum's (1350 Central Ave., Los Alamos, 505-667-4444, lanl.gov/museum) Science on Tap lecture series. Nguyen-Fotiadis is a member of LANL's Information Sciences Group, whose Computer, Computational, and Statistical Sciences division studies fields that are central to scientific discovery and innovation. Learn about the differences between LANL's Trinity supercomputer and the human brain, and how algorithms determine recommendations for your nightly viewing pleasure on Netflix and the like. The talk is a free WebEx virtual event. Follow the link from the Bradbury's event page at lanl.gov/museum/events/calendar/2020/12/calendar-sot-nguyen-fotaidis.php to register.

Here is the original post:
Information gathering: A WebEx talk on machine learning - Santa Fe New Mexican

LeanTaaS Raises $130 Million to Strengthen Its Machine Learning Software Platform to Continue Helping Hospitals Achieve Operational Excellence -…

SANTA CLARA, Calif.--(BUSINESS WIRE)--LeanTaaS, Inc., a Silicon Valley software innovator that increases patient access and transforms operational performance for healthcare providers, announced a $130 million Series D funding round led by Insight Partners with participation from Goldman Sachs. The funds will be used to invest in building out the existing suite of products (iQueue for Operating Rooms, iQueue for Infusion Centers and iQueue for Inpatient Beds), scaling the engineering, product and go-to-market teams, and expanding the iQueue platform to include new products.

"LeanTaaS is uniquely positioned to help hospitals and health systems across the country face the mounting operational and financial pressures exacerbated by the coronavirus. This funding will allow us to continue to grow and expand our impact while helping healthcare organizations deliver better care at a lower cost," said Mohan Giridharadas, founder and CEO of LeanTaaS. "Our company momentum over the past several years - including greater than 50% revenue growth in 2020 and negative churn despite a difficult macro environment - reflects the increasing demand for scalable predictive analytics solutions that optimize how health systems increase operational utilization and efficiency. It also highlights how we've been able to develop and maintain deep partnerships with 100+ health systems and 300+ hospitals in order to keep them resilient and agile in the face of uncertain demand and supply conditions."

With this investment, LeanTaaS has raised more than $250 million in aggregate, including more than $150 million from Insight Partners. As part of the transaction, Insight Partners' Jeff Horing and Jon Rosenbaum and Goldman Sachs' Antoine Munfa will join LeanTaaS' Board of Directors.

"Healthcare operations in the U.S. are increasingly complex and under immense pressure to innovate; this has only been exacerbated by the prioritization of unique demands from the current pandemic," said Jeff Horing, co-founder and Managing Director at Insight Partners. "Even under these unprecedented circumstances, LeanTaaS has demonstrated the effectiveness of its ML-driven platform in optimizing how hospitals and health systems manage expensive, scarce resources like infusion center chairs, operating rooms, and inpatient beds. After leading the company's Series B and C rounds, we have formed a deep partnership with Mohan and team. We look forward to continuing to help LeanTaaS scale its market presence and customer impact."

Although health systems across the country have invested in cutting-edge medical equipment and infrastructure, they cannot maximize the use of such assets and increase operational efficiencies to improve their bottom lines with human-based scheduling or unsophisticated tools. LeanTaaS develops specialized software that increases patient access to medical care by optimizing how health systems schedule and allocate the use of expensive, constrained resources. By using LeanTaaS' product solutions, healthcare systems can harness the power of sophisticated, AI/ML-driven software to improve operational efficiencies, increase access, and reduce costs.

"We continue to be impressed by the LeanTaaS team. As hospitals and health systems begin to look toward a post-COVID-19 world, the agility and resilience LeanTaaS solutions provide will be key to restoring and growing their operations," said Antoine Munfa, Managing Director of Goldman Sachs Growth.

LeanTaaS solutions have now been deployed in more than 300 hospitals across the U.S., including five of the 10 largest health networks and 12 of the top 20 hospitals in the U.S. according to U.S. News & World Report. These hospitals use the iQueue platform to optimize capacity utilization in infusion centers, operating rooms, and inpatient beds. iQueue for Infusion Centers is used by 7,500+ chairs across 300+ infusion centers including 70 percent of the National Comprehensive Cancer Network and more than 50 percent of National Cancer Institute hospitals. iQueue for Operating Rooms is used by more than 1,750 ORs across 34 health systems to perform more surgical cases during business hours, increase competitiveness in the marketplace, and improve the patient experience.

"I am excited about LeanTaaS' continued growth and market validation. As healthcare moves into the digital age, iQueue overcomes the inherent deficiencies in capacity planning and optimization found in EHRs. We are very excited to partner with LeanTaaS and implement iQueue for Operating Rooms," said Dr. Rob Ferguson, System Medical Director, Surgical Operations, Intermountain Healthcare.

Concurrent with the funding, LeanTaaS announced that Niloy Sanyal, the former CMO at Omnicell and GE Digital, would be joining as its new Chief Marketing Officer. Also, Sanjeev Agrawal has been designated LeanTaaS' Chief Operating Officer in addition to his current role as President. "We are excited to welcome Niloy to LeanTaaS. His breadth and depth of experience will help us accelerate our growth as the industry evolves to a more data-driven way of making decisions," said Agrawal.

About LeanTaaS:

LeanTaaS provides software solutions that combine lean principles, predictive analytics, and machine learning to transform hospital and infusion center operations. The company's software is being used by over 100 health systems across the nation, which all rely on the iQueue cloud-based solutions to increase patient access, decrease wait times, reduce healthcare delivery costs, and improve revenue. LeanTaaS is based in Santa Clara, California, and Charlotte, North Carolina. For more information about LeanTaaS, please visit https://leantaas.com/, and connect on Twitter, Facebook and LinkedIn.

About Insight Partners:

Insight Partners is a leading global venture capital and private equity firm investing in high-growth technology and software ScaleUp companies that are driving transformative change in their industries. Founded in 1995, Insight Partners has invested in more than 400 companies worldwide and has raised, through a series of funds, more than $30 billion in capital commitments. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Across its people and its portfolio, Insight encourages a culture around a belief that ScaleUp companies and growth create opportunity for all. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.

About Goldman Sachs Growth:

Founded in 1869, The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm. Goldman Sachs Merchant Banking Division (MBD) is the primary center for the firm's long-term principal investing activity. As part of MBD, Goldman Sachs Growth is the dedicated growth equity team within Goldman Sachs, with over 25 years of investing history, over $8 billion of assets under management, and 9 offices globally.

LeanTaaS and iQueue are trademarks of LeanTaaS. All other brand names and product names are trademarks or registered trademarks of their respective companies.

Read the original here:
LeanTaaS Raises $130 Million to Strengthen Its Machine Learning Software Platform to Continue Helping Hospitals Achieve Operational Excellence -...