Combining machine learning and nanopore construction creates an artificial intelligence nanopore for coronavirus detection – DocWire News


Nat Commun. 2021 Jun 17;12(1):3726. doi: 10.1038/s41467-021-24001-2.

ABSTRACT

High-throughput, high-accuracy detection of emerging viruses allows for the control of disease outbreaks. Currently, reverse transcription-polymerase chain reaction (RT-PCR) is the most widely used technology for diagnosing the presence of SARS-CoV-2. However, RT-PCR requires the extraction of viral RNA from clinical specimens to obtain high sensitivity. Here, we report a method for detecting novel coronaviruses with high sensitivity by using nanopores together with artificial intelligence, a relatively simple procedure that does not require RNA extraction. Our final platform, which we call the artificially intelligent nanopore, consists of machine learning software on a server, a portable high-speed and high-precision current measuring instrument, and scalable, cost-effective semiconducting nanopore modules. We show that artificially intelligent nanopores are successful in accurately identifying four types of coronaviruses similar in size: HCoV-229E, SARS-CoV, MERS-CoV, and SARS-CoV-2. Detection of SARS-CoV-2 in saliva specimens is achieved with a sensitivity of 90% and a specificity of 96% in a 5-minute measurement.

PMID:34140500 | DOI:10.1038/s41467-021-24001-2
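The abstract does not spell out the pipeline, but the general shape of pairing nanopore current traces with a learned classifier can be sketched as below. The features, the classifier choice, and the label set here are illustrative assumptions, not the paper's published method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trace_features(current_trace: np.ndarray) -> np.ndarray:
    """Summarize one ionic-current blockade event with simple statistics
    (invented for illustration; not the features used in the paper)."""
    return np.array([
        current_trace.mean(),   # average blockade depth
        current_trace.std(),    # fluctuation around that depth
        current_trace.min(),    # deepest point of the blockade
        len(current_trace),     # dwell time, in samples
    ])

# X: one feature row per recorded event; y: virus labels such as
# "HCoV-229E", "SARS-CoV", "MERS-CoV", "SARS-CoV-2".
# clf = RandomForestClassifier(n_estimators=300).fit(X, y)
```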


4 ways machine learning is fixing to finetune clinical nutrition – AI in Healthcare

1. Diet optimization. A machine learning model for predicting blood sugar levels after people eat a meal was significantly better at the task than conventional carbohydrate counting, the authors report. The algorithm's creators used the tool to compose "good" (low-glycemic) and "bad" (high-glycemic) diets for 26 participants.

"For the prediction arm, 83% of participants had significantly higher post-prandial glycemic response when consuming the bad diet than the good diet," Limketkai and colleagues note. This technology has since been commercialized with the DayTwo mobile application on the front end.
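To make the prediction step concrete, here is a minimal sketch of training such a post-meal glycemic response regressor. Everything in it is a stand-in: the published model reportedly drew on far richer inputs than these hypothetical meal features, and the data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical meal features: carbs_g, fat_g, protein_g, fiber_g,
# hour_of_day, bmi, fasting_glucose. Target: post-meal glycemic
# response. Synthetic numbers, purely to show the training pattern.
rng = np.random.default_rng(0)
X = rng.random((500, 7))
y = 80 + 60 * X[:, 0] - 10 * X[:, 3] + 5 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```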

2. Food image recognition. A primary challenge in alerting dieters to likely nutritional values and risks based on photos snapped on smartphones is the sheer limitlessness of possible foods, the authors point out. An early neural-network model developed at UCLA by Limketkai and colleagues achieved impressive performance in training and validation on 131 predefined food categories drawn from more than 222,000 curated food images.

However, in a prospective analysis of real-world food items consumed in the general population, the accuracy plummeted to 0.26 and 0.49, respectively, write the authors of the present paper. Future refinement of AI for food image recognition would therefore benefit from training models on a significantly broader diversity of food items, which may have to be adapted to specific cultures.
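The modeling recipe behind classifiers like this is typically transfer learning. The sketch below is a generic version, not the UCLA model; only the 131-category figure comes from the article, and the backbone choice is an assumption.

```python
import torch.nn as nn
from torchvision import models

def build_food_classifier(num_classes: int = 131) -> nn.Module:
    """Generic transfer-learning skeleton: reuse ImageNet features and
    swap the final layer for a food-category head. Requires a recent
    torchvision; the ResNet-18 backbone is an assumption."""
    net = models.resnet18(weights="IMAGENET1K_V1")
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net
```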

3. Risk prediction. Machine learning algorithms beat out conventional techniques at predicting 10-year mortality related to cardiovascular disease in a densely layered analysis of the National Health and Nutrition Examination Survey (NHANES) and the National Death Index.

A conventional model based on proportional hazards, which included age, sex, Black race, Hispanic ethnicity, total cholesterol, high-density lipoprotein cholesterol, systolic blood pressure, antihypertensive medication, diabetes, and tobacco use, appeared to significantly overestimate risk, Limketkai and co-authors comment. The addition of dietary indices did not change model performance, while the addition of 24-hour diet recall worsened performance. By contrast, the machine learning algorithms had superior performance to all [conventional] models.
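The comparison the authors describe can be mimicked on synthetic data. In the sketch below, logistic regression stands in for the conventional risk equation (the study's actual baseline was a proportional-hazards model, which needs survival-time data), a gradient-boosted ensemble plays the machine learning side, and the data and effect sizes are fabricated purely to show the evaluation pattern.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for NHANES-style covariates (age, sex, lipids,
# blood pressure, etc.); the nonlinear term is what gives the
# flexible model its edge here.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))
y = (X[:, 0] + 0.5 * X[:, 4] ** 2 + rng.standard_normal(2000)) > 1.5

for name, est in [("conventional (logistic)", LogisticRegression(max_iter=1000)),
                  ("machine learning (GBM)", GradientBoostingClassifier())]:
    auc = cross_val_score(est, X, y, scoring="roc_auc", cv=5).mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```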


Improve Your Business’s Processes with Predictive Analytics and Machine Learning – Tech Wire Asia

In a digital age, it only takes a few years for research into cutting-edge areas like predictive analytics modelling and artificial intelligence to find practical uses in everyday business contexts. Areas like general usability, user interface, and semantics are refined to empower a broader cross-section of potential users.

Business-focused users keen to leverage statistical methods may not be capable of, or comfortable with, interacting in a pure text terminal in Python or R, for example. After all, open-source code is exactly that: can you blame someone for not wanting to make an investment decision based on code found freely on the internet? Business users need an easy way to interface with powerful analytics, and they need a trusted brand that stands behind them. That's exactly why Minitab has changed the game, making machine learning easy for everyone.

Today, business users do not have to compromise to do advanced analytics. For statistical analysis, predictive analytics, and machine learning, there is a 50-year-old powerhouse called Minitab.

Over the past couple of years, Minitab has revolutionized the market by bringing the world's most advanced data gathering, processing, visualization, and analysis to the masses. Most recently, Minitab broke the barrier to putting Data Science into the hands of business professionals. And unlike others who promised this in the past, Minitab delivers.

That's because, across the company's product portfolio, there is a strong emphasis on usability and business outcomes, and Minitab is deployed by a broad cross-section of the business community.

For decision-makers tasked with business process or operational improvements, access to data and the ability to use it to achieve clear goals is critical. The lifeblood of today's organizations is information, so using it to examine what was, what is, and what might happen can result in lowered costs, higher revenues, and more efficient, timely actions for long-term strategy.

The talk may be of variables and predictors in mathematics and statistics; for the business's Change Manager, it's data sources, outcomes, and results. The phraseology might be different, but the required data processing and analysis remain the same. Don't be intimidated by the term machine learning. All it refers to is learning from your data, which is effectively what data analysis is. Don't believe us? Try it yourself.

With Minitab at the core of your data operations, there is immediate plug-and-play access to hundreds of data sources via included connectors that allow companies to access the data silos, repositories, and applications across the network and in the cloud. It's not surprising that Minitab was the highest-rated data integration tool according to Gartner Peer Insights.

Coupled with the statistical core of the Minitab data analysis platform, professionals can get a full picture of their data by leveraging archived information and real-time data streams as they happen in any organization.

Data is cleaned, transformed, and presented, providing the basis for predictive analytics modelling and insights into existing work processes.

As companies begin to scratch the surface of the data resources they have, hidden relationships between events and variables are uncovered. Factors that were never apparent, even to the objective observer, can surface, and visual relationships and correlations emerge. Insights gained help both data specialists and line-of-business experts determine how best to achieve the company's objectives.

For line-of-business managers and non-data scientists, the visual language of Minitab helps show and correlate the various factors at play. It allows them to predict the outcomes of proposed changes to operations in safe modelling environments.

The beauty of the Minitab portfolio is its design for use in practical settings. The platform's openness and user interface mean it can be used in multiple verticals and unexpected use cases: manufacturer Tate & Lyle, for example, used AI techniques and plotted thousands of variables to refine its sweetener consistency for a better customer experience. Where one might least expect it, Minitab's statistical power is creating change.

In a vast range of industries, advanced analytical modelling, analysis, and machine learning algorithms are being deployed by organizations to improve outcomes in thousands of scenarios. At one time, this type of statistical analysis was only seen in finance and high-end medical research and pharma, but not today. Minitab is making real, meaningful differences in thousands of settings.

It integrates with both cloud and on-premises applications and services, from marcomms to stream processors. Minitab installs locally or is now available as a SaaS, ready to be accessed from anywhere with an internet connection.

To learn more about the Minitab suite of offerings and begin leveraging its accessible power to effect change, start your journey here.


Data Insights and Machine Learning Take Charge of the Maritime Sales Process – Hellenic Shipping News Worldwide

While the maritime industry has been hesitant to engage with data insights and machine learning, the tables are now turning. Today, an increasing number of maritime companies actively use data insights to improve sales and supply chain activities and to increase revenues, among them the world's largest ship supplier, Wrist Ship Supply.

The need for efficiency in the maritime sector has led companies to actively use data as a measure to optimize the supply chain. This has paved the way for new ship supply services centered around data insights and machine learning to increase top and bottom-line figures.

According to the leading data and analytics firm, GateHouse Maritime, data insights can make a noticeable difference in the maritime sector. With a combination of historic and real-time ocean data, machine learning, and smart algorithms, maritime supply companies can predict vessel destinations and arrivals with high precision.

Traditionally, vessel tracking has been a time-consuming, manual process characterized by imprecise predictions and uncertainty. But today, the process can be automated to turn large amounts of data into tangible leads and sales:

"With the help of data insights, it is possible to predict arrivals several days in advance with almost 100 percent accuracy. This allows maritime supply companies to obtain an obvious competitive advantage, as they can operate proactively and sell services to potential customers days before a given vessel calls into port," says Martin Dommerby Kristiansen, CEO at GateHouse Maritime.
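A prediction like this is, at its core, a supervised regression over vessel-track features. The sketch below is a generic illustration, not GateHouse Maritime's system; the AIS-style features and the synthetic data are assumptions chosen only to show the shape of the problem.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical AIS-derived features per vessel: distance_to_port_nm,
# speed_over_ground_kn, heading_error_deg, port_congestion_index.
rng = np.random.default_rng(1)
X = rng.random((1000, 4))
# Synthetic target: hours to arrival grows with distance, shrinks
# with speed, plus noise.
y = 40 * X[:, 0] / np.maximum(X[:, 1], 0.1) + rng.normal(0, 2, 1000)

eta_model = GradientBoostingRegressor().fit(X, y)
print("predicted hours to port:", eta_model.predict(X[:3]).round(1))
```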

Data analytics strengthen the world's largest ship supplier

Four years ago, the world's largest ship supplier, Wrist Ship Supply, realized a strategy that would integrate data analytics into numerous business areas. The global ship supplier is a full-service provider for marine, offshore, and navy operations, supplying consumables, handling owners' goods, and providing spare parts storage and forwarding.

Today, Wrist Ship Supply works strategically with data analytics and business intelligence to improve internal processes and increase value for customers:

"In recent years, we have experienced an increasing pull from the market, and as a market leader within ship supply, we feel obliged to take part in the digital transformation. Data analysis has proven to be a cornerstone and a very important tool for measuring and improving performance across our own as well as our customers' supply chains. Now, our business model is infused with data analytics and business intelligence that strengthen efficiency and reliability in both internal and external operations," explains Birthe Boysen, Business Analysis Director at Wrist Ship Supply.

For Birthe Boysen and Wrist Ship Supply, data analytics has especially proven its worth within sales:

"It is crucial for us to know where potential customer vessels are heading and when they arrive in different ports. This allows us to coordinate our sales efforts and establish contact in advance. Not only does this make us more efficient, but it also creates value for customers, because all service activities can be planned several days ahead of arrival."

While the data-driven sales approach has increased the focus on KPIs, it has also become an important part of budgeting. Therefore, it has been a key priority for Wrist Ship Supply to be able to navigate in the ocean of available data:

"We have an almost endless amount of data available, and it easily becomes exhausting to keep track of numbers and figures. Therefore, we make it a priority to ensure that both internal and external stakeholders can make sense of the conclusions in our data insights. If employees or customers cannot fathom the overall lines in our data results, it will be difficult to use analytics in any way," remarks Nadia Hay Kragholm, Senior Business Analyst at Wrist.

According to Martin Dommerby Kristiansen, data insight has the potential to transform the entire maritime industry because efficiency has never been more important:

"The maritime industry is indeed reliant on efficiency across the value chain. Recently, we have seen how a vessel stuck in the Suez Canal for only a few days can impact not only the maritime industry, but the entire transportation and logistics sector. This goes to show how important data insight and analytics can prove to be for companies that wish to operate proactively and minimize disorder in the supply chain."

GateHouse Maritime is a leader in Ocean Visibility solutions. We help global maritime service providers, cargo owners, and logistics companies with transparent and accurate location data and predictions, cargo transport status, and offshore asset protection and surveillance. Our powerful maritime data foundation consists of 273 billion datapoints and more than 30 analysis and prediction models used for data-driven decisions by maritime operators worldwide. GateHouse Maritime is a subsidiary of GateHouse Holding, founded in 1992 and headquartered in Denmark, which also holds the subsidiaries GateHouse SatCom and GateHouse Igniter.

Source: GateHouse Maritime A/S


Akamai Unveils Machine Learning That Intelligently Automates Application and API Protections and Reduces Burden on Security Professionals – PRNewswire

CAMBRIDGE, Mass., June 16, 2021 /PRNewswire/ -- Akamai Technologies, Inc. (NASDAQ: AKAM), the world's most trusted solution for protecting and delivering digital experiences, today announces platform security enhancements to strengthen protection for web applications, APIs, and user accounts. Akamai's machine learning derives insight on malicious activity from more than 1.3 billion daily client interactions to intelligently automate threat detections, time-consuming tasks, and security logic to help professionals make faster, more trustworthy decisions regarding cyberthreats.

In its May 9 report Top Cybersecurity Threats in 2021, Forrester estimates that due to reasons "exacerbated by COVID-19 and the resulting growth in digital interactions, identity theft and account takeover increased by at least 10% to 15% from 2019 to 2020." The leading global research and advisory firm notes that we should "anticipate another 8% to 10% increase in identity theft and ATO [account takeover] fraud in 2021." With threat actors increasingly using automation to compromise systems and applications, security professionals must likewise automate defenses in parallel against these attacks to manage cyberthreats at pace.

New Akamai platform security enhancements include:

Adaptive Security Engine for Akamai's web application and API protection (WAAP) solutions, Kona Site Defender and Web Application Protector, is designed to automatically adapt protections to the scale and sophistication of attacks, while reducing the effort to maintain and tune policies. The Adaptive Security Engine combines proprietary anomaly risk scoring with adaptive threat profiling to identify highly targeted, evasive, and stealthy attacks. The dynamic security logic intelligently adjusts its defensive aggressiveness based on threat intelligence automatically correlated for each customer's unique traffic. Self-tuning leverages machine learning, statistical models, and heuristics to analyze all triggers across each policy to accurately differentiate between true and false positives (a toy anomaly-scoring sketch follows this list of enhancements).

Audience Hijacking Protection has been added to Akamai Page Integrity Manager to detect and block malicious activity in real time from client-side attacks using JavaScript, advertiser networks, browser plug-ins, and extensions that target web clients. Audience Hijacking Protection is designed to use machine learning to quickly identify vulnerable resources, detect suspicious behavior, and block unwanted ads, pop-ups, affiliate fraud, and other malicious activities aimed at hijacking your audience.

Bot Score and JavaScript Obfuscation have been added to Akamai Bot Manager, laying the foundation for ongoing innovations in adversarial bot management, including the ability to take action against bots aligned with corporate risk tolerance. Bot Score automatically learns unique traffic and bot patterns, and self-tunes for long-term effectiveness; JavaScript Obfuscation dynamically changes detections to prevent bot operators from reverse engineering detections.

Akamai Account Protector is a new solution designed to proactively identify and block fraudulent human activity such as account takeover attacks. Using advanced machine learning, behavioral analytics, and reputation heuristics, Account Protector intelligently evaluates every login request across multiple risk and trust signals to determine whether it is coming from a legitimate user or an impersonator. This capability complements Akamai's bot mitigation to provide effective protection against both malicious human actors and automated threats.
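Akamai has not published the internals of these engines, so the following is only a toy illustration of the anomaly risk scoring idea referenced above: fit an off-the-shelf isolation forest on presumed-normal traffic features (all hypothetical) and flag requests that score as outliers.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-request features, scaled 0..1: requests_per_min,
# url_entropy, header_count, payload_bytes.
rng = np.random.default_rng(2)
baseline = rng.random((5000, 4))          # presumed-normal traffic
scorer = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

suspect = np.array([[0.99, 0.97, 0.02, 0.95]])
# Lower scores indicate more anomalous requests.
print("anomaly score:", scorer.score_samples(suspect))
```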

"At Akamai, our latest platform release is intended to help resolve the tension between security and ease of use, with key capabilities around automation and machine learning specifically designed to intelligently augment human decision-making," said Aparna Rayasam, senior vice president and general manager, Application Security, Akamai. "Smart automation adds immediate value and empowers users with the right tools to generate insight and context to make faster and more trustworthy decisions, seamlessly all while anticipating what attackers might do next."

For more information about Akamai's Edge Security solutions, visit our Platform Update page.

About Akamai

Akamai secures and delivers digital experiences for the world's largest companies. Akamai's intelligent edge platform surrounds everything, from the enterprise to the cloud, so customers and their businesses can be fast, smart, and secure. Top brands globally rely on Akamai to help them realize competitive advantage through agile solutions that extend the power of their multi-cloud architectures. Akamai keeps decisions, apps, and experiences closer to users than anyone and attacks and threats far away. Akamai's portfolio of edge security, web and mobile performance, enterprise access, and video delivery solutions is supported by unmatched customer service, analytics, and 24/7/365 monitoring. To learn why the world's top brands trust Akamai, visit http://www.akamai.com, blogs.akamai.com, or @Akamai on Twitter. You can find our global contact information at http://www.akamai.com/locations.

Contacts: Tim Whitman Media Relations 617-444-3019 [emailprotected]

Tom Barth Investor Relations 617-274-7130 [emailprotected]

SOURCE Akamai Technologies, Inc.

http://www.akamai.com


Machine learning security needs new perspectives and incentives – TechTalks

At this year's International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland presented an attack technique meant to slow down deep learning models that have been optimized for fast and sensitive operations. The attack, aptly named DeepSloth, targets adaptive deep neural networks, a range of deep learning architectures that cut down computations to speed up processing.

Recent years have seen growing interest in the security of machine learning and deep learning, and there are numerous papers and techniques on hacking and defending neural networks. But one thing made DeepSloth particularly interesting: The researchers at the University of Maryland were presenting a vulnerability in a technique they themselves had developed two years earlier.

In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces. On the one hand, many researchers and developers are racing to make deep learning available to different applications. On the other hand, their innovations cause new challenges of their own. And they need to actively seek out and address those challenges before they cause irreparable damage.

One of the biggest hurdles of deep learning is the computational cost of training and running deep neural networks. Many deep learning models require huge amounts of memory and processing power, and therefore they can only run on servers that have abundant resources. This makes them unusable for applications that require all computations and data to remain on edge devices, or that need real-time inference and can't afford the delay caused by sending their data to a cloud server.

In the past few years, machine learning researchers have developed several techniques to make neural networks less costly. One range of optimization techniques, called multi-exit architecture, stops computations when a neural network reaches acceptable accuracy. Experiments show that for many inputs, you don't need to go through every layer of the neural network to reach a conclusive decision. Multi-exit neural networks save computation resources by bypassing the calculations of the remaining layers once they become confident about their results.
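In code, the idea reduces to attaching a small classifier after each block and returning as soon as one of them is confident. The sketch below is a toy version of the pattern, not the architecture from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    """Toy multi-exit network: an internal classifier after each block
    lets confident inputs skip the remaining layers."""
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        ])
        self.exits = nn.ModuleList([
            nn.Linear(16 * 8 * 8, num_classes),
            nn.Linear(32 * 4 * 4, num_classes),
            nn.Linear(64, num_classes),
        ])
        self.threshold = threshold

    def forward(self, x):
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits = exit_head(x.flatten(1))
            conf = F.softmax(logits, dim=1).max(dim=1).values
            # Stop as soon as an exit clears the confidence threshold
            # (batch size 1 assumed, for simplicity).
            if conf.item() >= self.threshold:
                return logits
        return logits  # final exit if no earlier one was confident

# net = MultiExitNet()
# y = net(torch.randn(1, 3, 32, 32))   # one image in, logits out
```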

In 2019, Yigitcan Kaya, a Ph.D. student in computer science at the University of Maryland, developed a multi-exit technique called the shallow-deep network, which could reduce the average inference cost of deep neural networks by up to 50 percent. Shallow-deep networks address the problem of overthinking, where deep neural networks start to perform unneeded computations that result in wasteful energy consumption and degrade the model's performance. The shallow-deep network was accepted at the 2019 International Conference on Machine Learning (ICML).

"Early-exit models are a relatively new concept, but there is a growing interest," Tudor Dumitras, Kaya's research advisor and associate professor at the University of Maryland, told TechTalks. "This is because deep learning models are getting more and more expensive computationally, and researchers look for ways to make them more efficient."

Dumitras has a background in cybersecurity and is also a member of the Maryland Cybersecurity Center. In the past few years, he has been engaged in research on security threats to machine learning systems. But while a lot of the work in the field focuses on adversarial attacks, Dumitras and his colleagues were interested in finding all possible attack vectors that an adversary might use against machine learning systems. Their work has spanned various fields including hardware faults, cache side-channel attacks, software bugs, and other types of attacks on neural networks.

While working on the shallow-deep network with Kaya, Dumitras and his colleagues started thinking about the harmful ways the technique might be exploited.

"We then wondered if an adversary could force the system to overthink; in other words, we wanted to see if the latency and energy savings provided by early-exit models like SDN are robust against attacks," he said.

Dumitras started exploring slowdown attacks on shallow-deep networks with Ionut Modoranu, then a cybersecurity research intern at the University of Maryland. When the initial work showed promising results, Kaya and Sanghyun Hong, another Ph.D. student at the University of Maryland, joined the effort. Their research eventually culminated in the DeepSloth attack.

Like adversarial attacks, DeepSloth relies on carefully crafted input that manipulates the behavior of machine learning systems. However, while classic adversarial examples force the target model to make wrong predictions, DeepSloth disrupts computations. The DeepSloth attack slows down shallow-deep networks by preventing them from making early exits and forcing them to carry out the full computations of all layers.
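The mechanics can be sketched as a projected-gradient loop whose objective is the opposite of a classic misclassification attack: instead of flipping the predicted label, it pushes every internal exit toward maximal uncertainty so the early-exit condition never fires. This is a loose reconstruction of the idea, not the paper's exact objective, and the exit_logits_fn callback (returning the logits of all exits) is an assumed interface.

```python
import torch
import torch.nn.functional as F

def slowdown_perturbation(x, exit_logits_fn, eps=8/255, alpha=2/255, steps=30):
    """Craft a bounded perturbation that makes every internal exit
    maximally unsure, so no early exit fires and the full network
    must run end to end."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Summed cross-entropy against the uniform distribution over
        # classes; it is smallest when every exit is maximally unsure.
        loss = sum(-F.log_softmax(logits, dim=1).mean()
                   for logits in exit_logits_fn(x + delta))
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend: reduce confidence
            delta.clamp_(-eps, eps)              # keep the change small
        delta.grad.zero_()
    return (x + delta).detach()
```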

"Slowdown attacks have the potential of negating the benefits of multi-exit architectures," Dumitras said. "These architectures can halve the energy consumption of a deep neural network model at inference time, and we showed that for any input we can craft a perturbation that wipes out those savings completely."

The researchers' findings show that the DeepSloth attack can reduce the efficacy of multi-exit neural networks by 90-100 percent. In the simplest scenario, this can cause a deep learning system to bleed memory and compute resources and become inefficient at serving users.

But in some cases, it can cause more serious harm. For example, one use of multi-exit architectures involves splitting a deep learning model between two endpoints. The first few layers of the neural network can be installed on an edge location, such as a wearable or IoT device. The deeper layers of the network are deployed on a cloud server. The edge side of the deep learning model takes care of the simple inputs that can be confidently computed in the first few layers. In cases where the edge side of the model does not reach a conclusive result, it defers further computations to the cloud.

In such a setting, the DeepSloth attack would force the deep learning model to send all inferences to the cloud. Aside from the extra energy and server resources wasted, the attack could have a much more destructive impact.

"In a scenario typical for IoT deployments, where the model is partitioned between edge devices and the cloud, DeepSloth amplifies the latency by 1.55X, negating the benefits of model partitioning," Dumitras said. "This could cause the edge device to miss critical deadlines, for instance in an elderly monitoring program that uses AI to quickly detect accidents and call for help if necessary."

While the researchers ran most of their tests on shallow-deep networks, they later found that the same technique would be effective on other types of early-exit models.

As with most works on machine learning security, the researchers first assumed that an attacker has full knowledge of the target model and has unlimited computing resources to craft DeepSloth attacks. But the criticality of an attack also depends on whether it can be staged in practical settings, where the adversary has partial knowledge of the target and limited resources.

"In most adversarial attacks, the attacker needs to have full access to the model itself; basically, they have an exact copy of the victim model," Kaya told TechTalks. "This, of course, is not practical in many settings where the victim model is protected from outside, for example with an API like Google Vision AI."

To develop a realistic evaluation of the attacker, the researchers simulated an adversary who doesn't have full knowledge of the target deep learning model. Instead, the attacker has a surrogate model on which he tests and tunes the attack. The attacker then transfers the attack to the actual target. The researchers trained surrogate models that have different neural network architectures, different training sets, and even different early-exit mechanisms.

"We find that the attacker that uses a surrogate can still cause slowdowns (between 20-50%) in the victim model," Kaya said.

Such transfer attacks are much more realistic than full-knowledge attacks, Kaya said. And as long as the adversary has a reasonable surrogate model, he will be able to attack a black-box model, such as a machine learning system served through a web API.

"Attacking a surrogate is effective because neural networks that perform similar tasks (e.g., object classification) tend to learn similar features (e.g., shapes, edges, colors)," Kaya said.

Dumitras says DeepSloth is just the first attack that works in this threat model, and he believes more devastating slowdown attacks will be discovered. He also pointed out that, aside from multi-exit architectures, other speed optimization mechanisms are vulnerable to slowdown attacks. His research team tested DeepSloth on SkipNet, a special optimization technique for convolutional neural networks (CNN). Their findings showed that DeepSloth examples crafted for multi-exit architecture also caused slowdowns in SkipNet models.

"This suggests that the two different mechanisms might share a deeper vulnerability, yet to be characterized rigorously," Dumitras said. "I believe that slowdown attacks may become an important threat in the future."

The researchers also believe that security must be baked into the machine learning research process.

"I don't think any researcher today who is doing work on machine learning is ignorant of the basic security problems. Nowadays even introductory deep learning courses include recent threat models like adversarial examples," Kaya said.

The problem, Kaya believes, has to do with adjusting incentives. "Progress is measured on standardized benchmarks, and whoever develops a new technique uses these benchmarks and standard metrics to evaluate their method," he said, adding that reviewers who decide on the fate of a paper also look at whether the method is evaluated according to its claims on suitable benchmarks.

"Of course, when a measure becomes a target, it ceases to be a good measure," he said.

Kaya believes there should be a shift in the incentives of publications and academia. "Right now, academics have a luxury or burden to make perhaps unrealistic claims about the nature of their work," he says. If machine learning researchers acknowledge that their solution will never see the light of day, their paper might be rejected. But their research might serve other purposes.

For example, adversarial training causes large utility drops, has poor scalability, and is difficult to get right; these limitations are unacceptable for many machine learning applications. But Kaya points out that adversarial training can have benefits that have been overlooked, such as steering models toward becoming more interpretable.

One of the implications of too much focus on benchmarks is that most machine learning researchers don't examine the implications of their work when applied to realistic, real-world settings.

"Our biggest problem is that we treat machine learning security as an academic problem right now. So the problems we study and the solutions we design are also academic," Kaya says. "We don't know if any real-world attacker is interested in using adversarial examples, or any real-world practitioner in defending against them."

Kaya believes the machine learning community should promote and encourage research in understanding the actual adversaries of machine learning systems "rather than dreaming up our own adversaries."

And finally, he says that authors of machine learning papers should be encouraged to do their homework and find ways to break their own solutions, as he and his colleagues did with the shallow-deep networks. And researchers should be explicit and clear about the limits and potential threats of their machine learning models and techniques.

"If we look at the papers proposing early-exit architectures, we see there's no effort to understand security risks, although they claim that these solutions are of practical value," he says. "If an industry practitioner finds these papers and implements these solutions, they are not warned about what can go wrong. Although groups like ours try to expose potential problems, we are less visible to a practitioner who wants to use an early-exit model. Even including a paragraph about the potential risks involved in a solution goes a long way."


Relogix Announces Collaboration with Dr. Graham Wills, Predictive Analytics and Machine Learning Expert, To Better Predict Office Space Needs -…

Relogix will be the first in the industry to more accurately forecast and predict companies' real estate needs. Companies will potentially save hundreds of millions in real estate spend, year over year, with this collaborative innovation between Relogix and Dr. Wills. "Relogix has a significant data set to work with, from years of collecting billions of terabytes of Corporate Real Estate data around the world," says Dr. Wills. "I'm excited to use this data and cutting-edge machine learning techniques to take spatial data research to the next level."

With the pandemic, it has become ever more difficult for companies to understand workplace demand for real estate, with everyone working from home and anywhere for the foreseeable future. As people return to the office, understanding the relationship between people and their demand for workspace is a significant challenge for workplace technology leaders in Corporate Real Estate, HR, and IT.

"We're making a significant R&D investment to further innovation around forecasting and predictive analytics for Corporate Real Estate," says Andrew Millar, Founder and CEO of Relogix. "We are excited to be working with Graham, a pre-eminent researcher in the AI field, and expect our collaboration to leverage advanced machine learning techniques to surface insights like never before."

A data science leader for more than 20 years, Wills is a disruptive innovator who has worked on predictive analytics and forecasting for 30 years. Hailing from IBM, Dr. Wills is a well-known researcher in the fields of spatial data exploration and time series monitoring. At IBM, Wills was the lead architect for predictive analytics and machine learning in IBM's Data and AI group, and led the development of major advances including intelligent automatic forecasting, natural language data insights, anomaly detection, and key driver identification.

About Graham Wills, PhD: Graham's passion is analyzing data and designing capabilities that help others do the same with their data. His focus is on creating software systems that allow non-experts to draw conclusions safely and efficiently from predictive and machine learning models, and thus enhance the value of their data. Graham has authored over 60 publications, including a book in the Springer statistical series, and has chaired or presented at numerous international statistical and knowledge discovery conferences. His patents span visualization, spatial analysis, semantic knowledge, and associated AI domains. Graham believes that the goal of AI is to give professionals the assistance they need to make great decisions from their data, and that CRE is an ideal domain in which to introduce new AI and Machine Learning capabilities to revolutionize the marketplace.

About Andrew Millar, CEO: Andrew's mission is to turn data into valuable outcomes. With over 20 years as a corporate real estate solutions and insights provider, Relogix founder and CRE veteran Andrew Millar recognized the need for technology in the CRE industry. He founded Relogix out of a need to create solutions to help organizations evolve their workspace and get high quality data to drive strategic decision making. Andrew believes that the key to evolving workspace and strategic planning lies in data science. Just like the workplace, data science is progressive: it is a journey of perpetual discovery, refinement, and adaptation. Andrew has since created proprietary sensor technology with the needs of corporate real estate in mind: technology created for CRE professionals, by CRE professionals.

About Relogix: Trusted by top Corporate Real Estate professionals who need to make data-driven business decisions to inform their real estate strategy and measure impact. Our flexible workplace insights platform and state-of-the-art IoT occupancy sensors are proven to transform the workplace experience. We're always looking for the next innovation in workplace technology, leveraging two decades of CRE and analytics expertise to help our clients understand and optimize their global real estate portfolios.

SOURCE Relogix Inc.


Company uses AWS, genomics and machine learning to develop a blood test for early cancer detection – TechRepublic

Hospitals and businesses use cloud computing, machine learning and voice-controlled devices to personalize healthcare for patients.


Personalizing healthcare requires the power of cloud computing whether the challenge is screening for cancer, reducing the paperwork load for doctors or making decisions about care, according to speakers at the AWS Healthcare and Life Sciences Virtual Symposium.

Wilson To, the worldwide head of healthcare at AWS, hosted the event at the end of May. To and four guests discussed how cloud services can improve information management to personalize healthcare.

Josh Ofman, chief medical officer for Grail, said that his company is using cloud computing to detect cancer at earlier stages when it is easier to treat. The Galleri test uses a blood test to screen for multiple cancers at once.

Ofman said that genomics and machine learning are the foundation of the new early detection test. The test looks for epigenetic changes in a person's DNA that can be a warning sign for mutations caused by cancer.
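As a rough illustration of how genomics and machine learning pair up in this kind of test (Grail's actual features and classifier are not described in the article, so everything below is a stand-in on synthetic data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are blood samples; columns are hypothetical methylation-style
# signals at genomic regions; the label is cancer signal detected or
# not. All values here are synthetic.
rng = np.random.default_rng(3)
X = rng.random((1000, 50))                      # signal fractions, 0..1
y = (X[:, :5].sum(axis=1) > 2.8).astype(int)    # fabricated signal

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(cancer signal):", clf.predict_proba(X[:2])[:, 1])
```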

According to the company, the test has a false positive rate of less than 0.5% and a positive predictive value of 44%.

Grail recommends the Galleri test for people 50 and older, who are at higher risk of cancer. The company also suggests that the test be used in addition to other screenings, not as a replacement for existing procedures. The company claims that the test can identify more than 50 types of cancer, including Hodgkin and non-Hodgkin lymphoma, melanoma, and soft tissue sarcoma.

Grail started working with AWS in 2017 to ingest and analyze hundreds of thousands of records and genomic datasets. Grail migrated its core processing and analytical infrastructure from on-premises to a cloud platform at that time. Grail uses storage, compute and network services from AWS.

"This collaboration is powering our growth and will enable us to get to scale," Ofman said.


Ofman said the company's data set will grow by orders of magnitude as researchers process all the samples they have today.

"It will enable us to continue to refine our test and develop new products in new disease areas," he said.

According to the National Cancer Institute, the most common cancers in men are prostate, lung and colorectal cancers, which make up about 43% of cancers diagnosed in men in 2020. For women, the types that represented 50% of all cancer diagnoses in 2020 were breast, lung and colorectal.

The retail cost of the test is $949. According to the company, the test is not covered by insurance.

Three other AWS customers spoke at the event, including Biogen, Cambia Health Solutions and Houston Methodist Hospital. Laurent Rotival, chief information officer and senior vice president at Cambia Health Solutions, said his company uses AWS to bring together data streams from disparate sources to create a coherent experience for customers.

Alisha Alaimo, president of Biogen's U.S. organization, explained how the company worked with Us Against Alzheimer's to develop a screening test. The idea was to make the test feel more personalized and less intimidating.

The brain health test can be taken by an individual with concerns for herself, or by a caregiver who is worried about a loved one. The screening is at Mybrainguide.org and is anonymous and available in English and Spanish.

Roberta Schwartz, chief innovation officer and executive vice president of Houston Methodist Hospital, described the health system's work with Alexa and voice commands to improve patient care. Schwartz also sees a need for more personalized healthcare services, a trend that the pandemic intensified. The hospital system used these guidelines to revamp the patient experience: "help me now," "make it easy," and "remember me."

Another goal of the project was to let doctors have more face time than screen time when working with patients.

The hospital has Amazon Echos in every room and Schwartz said she has seen a new level of acceptance of the devices among patients and doctors.

"The devices were essential when patients couldn't have visitors," she said. "We are planning to hook our Alexas up to the nurse call system as well."

The hospital also plans to use the devices to reduce the time doctors have to spend transcribing patient information and to make it easier to pull up relevant information during a patient consultation.

During a 34-week pilot program, the hospital deployed 1,200 devices in its facilities and saw more than 600 daily interactions with Alexa and Avia, a virtual health assistant. Music was the most popular request at 75%, followed by knowledge searches, socializing, inquiries about the weather, and general communication.



Adversarial attacks in machine learning: What they are and how to stop them – VentureBeat


Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in the AI and machine learning research community. The most common reason is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data as it's training, or introducing maliciously designed data to deceive an already trained model.

As the U.S. National Security Commission on Artificial Intelligence's 2019 interim report notes, a very small percentage of current AI research goes toward defending AI systems against adversarial efforts. Some systems already used in production could be vulnerable to attack. For example, by placing a few small stickers on the ground, researchers showed that they could cause a self-driving car to move into the opposite lane of traffic. Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant, and that pieces of tape can deceive a computer vision system into wrongly classifying a stop sign as a speed limit sign.

The increasing adoption of AI is likely to correlate with a rise in adversarial attacks. It's a never-ending arms race, but fortunately, effective approaches exist today to mitigate the worst of the attacks.

Attacks against AI models are often categorized along three primary axes: influence on the classifier, the security violation, and their specificity. They can be further subcategorized as white box or black box. In white box attacks, the attacker has access to the model's parameters, while in black box attacks, the attacker has no access to these parameters.

An attack can influence the classifier, i.e., the model, by disrupting the model as it makes predictions, while a security violation involves supplying malicious data that gets classified as legitimate. A targeted attack attempts to allow a specific intrusion or disruption, or alternatively to create general mayhem.

Evasion attacks are the most prevalent type of attack, where data are modified to evade detection or to be classified as legitimate. Evasion doesn't involve influence over the data used to train a model, but it is comparable to the way spammers and hackers obfuscate the content of spam emails and malware. An example of evasion is image-based spam, in which spam content is embedded within an attached image to evade analysis by anti-spam models. Another example is spoofing attacks against AI-powered biometric verification systems.
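The canonical classroom example of evasion is the fast gradient sign method (FGSM). The sketch below is the standard textbook version, included purely as illustration; it assumes white-box access to a differentiable PyTorch classifier and is not tied to any system named in this article.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, label, eps=0.03):
    """One gradient step that nudges the input in the direction that
    most increases the model's loss, hopefully flipping its prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()    # ascend the loss surface
    return x_adv.clamp(0, 1).detach()  # stay a valid image
```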

Poisoning, another attack type, is adversarial contamination of data. Machine learning systems are often retrained using data collected while they're in operation, and an attacker can poison this data by injecting malicious samples that subsequently disrupt the retraining process. An adversary might input data during the training phase that's falsely labeled as harmless when it's actually malicious. For example, large language models like OpenAI's GPT-3 can reveal sensitive, private information when fed certain words and phrases, research has shown.

Meanwhile, model stealing, also called model extraction, involves an adversary probing a black box machine learning system in order to either reconstruct the model or extract the data that it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock-trading model, which the adversary could then use for their own financial gain.
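A naive version of extraction fits in a few lines: query the black box, keep its answers as labels, and train a local copy. In the sketch below, victim_predict is a hypothetical stand-in for whatever prediction API the adversary can reach; a real attack would choose its queries far more cleverly than random noise.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_model(victim_predict, n_queries=10_000, n_features=20):
    """Label random queries with the victim's own predictions, then
    fit a local surrogate on those (x, y) pairs."""
    X = np.random.randn(n_queries, n_features)
    y = victim_predict(X)  # the adversary typically pays per query
    surrogate = RandomForestClassifier(n_estimators=100).fit(X, y)
    return surrogate
```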

Plenty of examples of adversarial attacks have been documented to date. One showed it's possible to 3D-print a toy turtle with a texture that causes Google's object detection AI to classify it as a rifle, regardless of the angle from which the turtle is photographed. In another attack, a machine-tweaked image of a dog was shown to look like a cat to both computers and humans. So-called adversarial patterns on glasses or clothing have been designed to deceive facial recognition systems and license plate readers. And researchers have created adversarial audio inputs to disguise commands to intelligent assistants in benign-sounding audio.

In a paper published in April, researchers from Google and the University of California at Berkeley demonstrated that even the best forensic classifiers, AI systems trained to distinguish between real and synthetic content, are susceptible to adversarial attacks. It's a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.

One of the most infamous recent examples is Microsoft's Tay, a Twitter chatbot programmed to learn to participate in conversation through interactions with other users. While Microsoft's intention was that Tay would engage in casual and playful conversation, internet trolls noticed the system had insufficient filters and began feeding Tay profane and offensive tweets. The more these users engaged, the more offensive Tay's tweets became, forcing Microsoft to shut the bot down just 16 hours after its launch.

As VentureBeat contributor Ben Dickson notes, recent years have seen a surge in the amount of research on adversarial attacks. In 2014, there were zero papers on adversarial machine learning submitted to the preprint server Arxiv.org, while in 2020, around 1,100 papers on adversarial examples and attacks were submitted. Adversarial attacks and defense methods have also become a highlight of prominent conferences including NeurIPS, ICLR, DEF CON, Black Hat, and Usenix.

With the rise in interest in adversarial attacks and techniques to combat them, startups like Resistant AI are coming to the fore with products that ostensibly harden algorithms against adversaries. Beyond these new commercial solutions, emerging research holds promise for enterprises looking to invest in defenses against adversarial attacks.

One way to test machine learning models for robustness is with what's called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generate triggered data sets and associated models with trojans. They say that it'll enable researchers to understand the effects of various data set configurations on the generated trojaned models and help to comprehensively test new trojan detection methods to harden models.

The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu, Microsoft, IBM, and Salesforce offer toolboxes (Advbox, Counterfit, Adversarial Robustness Toolbox, and Robustness Gym) for generating adversarial examples that can fool models in frameworks like MxNet, Keras, Facebook's PyTorch and Caffe2, Google's TensorFlow, and Baidu's PaddlePaddle. And MIT's Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.

More recently, Microsoft, the nonprofit Mitre Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with Mitre to build a schema that organizes the approaches malicious actors employ in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems.

The future might bring outside-the-box approaches, including several inspired by neuroscience. For example, researchers at MIT and the MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more robust to adversarial attacks. While adversarial AI is likely to become a never-ending arms race, these sorts of solutions instill hope that attackers won't always have the upper hand and that biological intelligence still has a lot of untapped potential.


Machine learning is changing our culture. Try this text-altering tool to see how – The Conversation AU

Most of us benefit every day from the fact computers can now understand us when we speak or write. Yet few of us have paused to consider the potentially damaging ways this same technology may be shaping our culture.

Human language is full of ambiguity and double meanings. For instance, consider the potential meaning of this phrase: "I went to project class." Without context, it's an ambiguous statement.

Computer scientists and linguists have spent decades trying to program computers to understand the nuances of human language. And in certain ways, computers are fast approaching humans' ability to understand and generate text.

Through the very act of suggesting some words and not others, the predictive text and auto-complete features in our devices change the way we think. Through these subtle, everyday interactions, machine learning is influencing our culture. Are we ready for that?

I created an online interactive work for the Kyogle Writers Festival that lets you explore this technology in a harmless way.

The field concerned with using everyday language to interact with computers is called natural language processing. We encounter it when we speak to Siri or Alexa, or type words into a browser and have the rest of our sentence predicted.

This is only possible due to vast improvements in natural language processing over the past decade, achieved through sophisticated machine-learning algorithms trained on enormous datasets (usually billions of words).

Last year, this technology's potential became clear when the Generative Pre-trained Transformer 3 (GPT-3) was released. It set a new benchmark in what computers can do with language.

Read more: Can robots write? Machine learning produces dazzling results, but some assembly is still required

GPT-3 can take just a few words or phrases and generate whole documents of meaningful language, by capturing the contextual relationships between words in a sentence. It does this by building on machine-learning models, including two widely adopted models called BERT and ELMo.

However, there is a key issue with any language model produced by machine learning: such models generally learn everything they know from data sources such as Wikipedia and Twitter.

In effect, machine learning takes data from the past, learns from it to produce a model, and uses this model to carry out tasks in the future. But during this process, a model may absorb a distorted or problematic worldview from its training data.

If the training data was biased, this bias will be codified and reinforced in the model, rather than being challenged. For example, a model may end up associating certain identity groups or races with positive words, and others with negative words.

This can lead to serious exclusion and inequality, as detailed in the recent documentary Coded Bias.

The interactive work I created allows people to playfully gain an intuition for how computers understand language. It is called Everything You Ever Said (EYES), in reference to the way natural language models draw on all kinds of data sources for training.

EYES allows you to take any piece of writing (less than 2000 characters) and subtract one concept and add another. In other words, it lets you use a computer to change the meaning of a piece of text. You can try it yourself.

Here's an example of the Australian national anthem subjected to some automated revision. I subtracted the concept of "empire" and added the concept of "koala" to get:

Australians all let us grieve
For we are one and free
We've golden biota and abundance for poorness
Our koala is girt by porpoise
Our wildlife abounds in primates koalas
Of naturalness shiftless and rare
In primates wombat, let every koala
Wombat koala fair
In joyous aspergillosis then let us vocalise,
Wombat koala fair

What is going on here? At its core, EYES uses a model of the English language developed by researchers from Stanford University in the United States, called GloVe (Global Vectors for Word Representation).

EYES uses GloVe to change the text by making a series of analogies, wherein an analogy is a comparison between one thing and another. For instance, if I ask you "man is to king what woman is to?", you might answer "queen". That's an easy one.

But I could ask a more challenging question, such as "rose is to thorn what love is to?". There are several possible answers here, depending on your interpretation of the language. When asked about these analogies, GloVe will produce the responses "queen" and "betrayal", respectively.

GloVe has every word in the English language represented as a vector in a multi-dimensional space (of around 300 dimensions). As such, it can perform calculations with words, adding and subtracting words as if they were numbers.
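You can reproduce this kind of word arithmetic yourself with pretrained GloVe vectors, for example via the gensim library. The sketch below uses the smaller 100-dimensional variant rather than the 300-dimensional one mentioned above; the model name is a real gensim-data identifier, and the vectors download on first use.

```python
import gensim.downloader as api

# Loads pretrained GloVe vectors (downloaded on first call).
glove = api.load("glove-wiki-gigaword-100")

# king - man + woman ~= queen: the analogy is literal vector arithmetic.
print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# e.g. [('queen', 0.77)]
```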

The trouble with machine learning is that the associations being made between certain concepts remain hidden inside a black box; we can't see or touch them. Approaches to making machine learning models more transparent are a focus of much current research.

The purpose of EYES is to let you experiment with these associations in a more playful way, so you can develop an intuition for how machine learning models view the world.

Some analogies will surprise you with their poignancy, while others may well leave you bewildered. Yet, every association was inferred from a huge corpus of a few billion words written by ordinary people.

Models such as GPT-3, which have learned from similar data sources, are already influencing how we use language. Having entire news feeds populated by machine-written text is no longer the stuff of science fiction. This technology is already here.

And the cultural footprint of machine-learning models seems to only be growing.

Read more: GPT-3: new AI can write like a human but don't mistake that for thinking neuroscientist
