Bing: To Use Machine Learning, You Have To Be Okay With It Not Being Perfect – Search Engine Roundtable

Frédéric Dubut of Bing said on Twitter that if you use machine learning in production, as Bing does, you have to be okay with the results not being perfect. Not all applications can tolerate imperfection, but search, presumably, can.

Frédéric Dubut wrote, "To use ML in production you also need to be comfortable with a model that *will* get it wrong occasionally." "There are many applications where the required precision is just 100% - and unfortunately quite a few of these are still using ML," he added. Applications that have to be perfect are often in finance, health and similar areas - think of the software that helps an airplane fly, or the software that transfers money from one place to another.


As you know, Bing uses a lot of machine learning in search - upwards of 90% of it.

Forum discussion at Twitter.

Read the rest here:

Bing: To Use Machine Learning, You Have To Be Okay With It Not Being Perfect - Search Engine Roundtable

IQVIA on the adoption of AI and machine learning – OutSourcing-Pharma.com

Artificial intelligence (AI) and machine learning (ML) have become central topics in the pharma industry in 2019. Greater levels of investment are being funneled in this direction, and a greater number of partnerships have sprung up in these areas.

The potential in relation to the pharma industry has often centered on drug discovery. The technology could reduce the cost of developing a new drug, which has been estimated at approximately $2-3bn (€1.8-2.7bn).

As a result, a number of large pharma companies have signed partnership deals to unlock the promise of faster drug discovery, such as Pfizer's deal with CytoReason and Novo Nordisk's with e-therapeutics.

Beyond this, there is the potential to improve patient recruitment for clinical trials, another notorious stumbling block in drug development.

Outsourcing-Pharma (OSP) asked Yilian Yuan (YY), SVP of data science and advanced analytics at Iqvia, for analysis on how the pharma industry is approaching the opportunity provided by AI and ML so far, and how this is likely to develop over the coming years.

OSP: How would you characterize the pharma industry's adoption of AI, so far?

YY: In general, the pharma industry recognizes the value of AI/ML, and some pharma companies have made significant investments to build the infrastructure and talent pool necessary to bring AI/ML capabilities into the R&D and commercial sectors. However, implementation can be challenging. To overcome these challenges, pharma companies should undertake the following steps:

Because of the many challenges, pharma has been slow to take up AI/ML. Extra effort will be needed for the industry to fully leverage AI/ML and realize its positive impact on business and patient care.

OSP: How do you see further adoption of AI/ML for pharma in 2020 and beyond?

YY: I see more and more pharma companies taking various approaches to realize the value of AI/ML, and they fall into two categories:

We also see some companies taking a combination approach to get the benefits of both.

OSP: The industry generally has a reputation of being cautious when it comes to the adoption of new technologies - what are the dangers of this when applied to AI/ML?

YY: The development and commercialization of innovative treatment options is costly and competitive. AI/ML can leverage real-world data to innovate clinical trial design and execution, e.g., smart patient recruitment and selecting sites that can quickly enroll patients. On the commercialization side, AI/ML enables proactive and precise engagement with health care providers and patients, and the ability to identify patients at high risk of exacerbation or of noncompliance with their regimen, which can trigger interventions by nurse educators.

Pharma companies that are slow to adopt AI/ML will be left behind in the race to bring new products to market and the right products to the right patients at the right time.

OSP: Are there any particular areas of drug discovery where the technology can have the most impact?

YY: There are many areas where AI/ML will have a positive impact on drug discovery:

OSP: Are there any noteworthy industries that are leading the way in using this technology? What can the pharma industry learn from them?

YY: The automotive industry faces fierce competition and has leveraged AI/ML to do precision marketing on its websites, with tailored messages and select products for potential buyers. Perhaps pharma can learn from them and use AI/ML to develop personalized medicine to improve patient care.

Many industries use robotic process automation to automate processes like finance systems, which pharma could do as well. Pharma is a heavily regulated industry with many reporting documents generated for clinical trials and for product usage and adverse events. These documents must be translated into many languages. Tech companies have developed auto translation services with the help of AI/ML. Existing auto translation with AI/ML can be enhanced with pharma vocabulary to cut down on the cost of translating documents into multiple languages and increase the speed of this type of work.

Yilian Yuan leads a team of data scientists, statisticians and research experts to help clients address a broad range of business and industry issues. Dr. Yuan has an extensive background in applying econometric and statistical modeling, predictive modeling and machine learning, discrete choice modeling and quantitative market research, combined with patient-level longitudinal data to provide actionable insights for pharma clients to improve business performance.

Original post:

IQVIA on the adoption of AI and machine learning - OutSourcing-Pharma.com

Taking UX and finance security to the next level with IBM’s machine learning – The Paypers


Machine learning is a technology that has been with us for some time now. Though sometimes understated or used merely as a buzzword, its impact on and benefits to human life cannot be denied.

From personal assistants and social media advertising services to medical diagnosis, image processing, and financial prediction, this innovative technology impacts our everyday life and supports business decisions for some of the world's leading companies. For instance, machine learning (ML) solutions could help financial services institutions predict transaction fraud or investment outcomes. Furthermore, banks can apply machine learning models to create targeted upselling and cross-selling marketing campaigns.

Usually, the common ML techniques applied involve dealing with large amounts of data that need to be shared and prepared before the actual learning phase. However, compliance with privacy laws (e.g. GDPR in Europe, the Personal Data Protection Bill in India, etc.) requires that most of the data and computation be kept in a secure environment, usually in-house, and not outsourced to cloud or multi-tenant shared environments.

At the beginning of October 2019, IBM scientists published a paper demonstrating how homomorphic encryption (HE) enabled a bank to run machine learning algorithms on sensitive client data while keeping it encrypted.

Towards a homomorphic machine learning Big Data pipeline in finance

As data management and data protection are top concerns for financial institutions, The Paypers has been closely watching this space and has spoken with Flavio Bergamaschi, IBM Senior Research Scientist and one of the scientists behind IBM's pilot, to find out more about the research.

"Imagine what you could do if you could compute on encrypted data without ever decrypting it." This was the message that dominated Flavio's presentation, and it opened a whole spectrum of possibilities - new scenarios for what we could do today but are not even considering, because we cannot share information.

Broadly speaking, homomorphic encryption (HE) enables us to do the processing of the data without giving access to the data, and this is technically done by computing on encrypted data. The technology promises to generally transform and disrupt how business is currently done in many industries such as, but not limited to, healthcare, medical sciences, and finance.

His explanation recalled an interview we had in May 2019 with Michael Osborne, a Security Researcher at IBM's Zurich Research Laboratory, one year after GDPR was passed in Europe. Back then, Michael agreed that banks are left with a dilemma: on the one hand, if they do not have sufficient technologies for fraud detection, they can be fined; on the other hand, if they do it in such a way that there is a breach and a risk to data subjects, they can again be fined under GDPR. So, at the end of the day, it's all about how you do it - and IBM's researchers solved this puzzle, as homomorphic encryption (HE) resolves the paradox of "need to know" vs. "need to share".

The beginnings of homomorphic encryption

The first fully homomorphic encryption scheme was invented in 2009 by Craig Gentry. Going through the chronology of HE, Flavio explained that Gentry's invention described an encryption scheme that supports both multiplication and addition operations, which together can be used to perform arbitrary computation. Before this technology was developed, one could do either one or the other, but not both. But how long did it take to multiply one bit back in 2009? Flavio's reply: the performance predictions were poor, to the point that the technology was branded "not in my lifetime". However, 10 years and many algorithmic improvements later, the performance today is very adequate for many use cases where the privacy and confidentiality of the data are paramount.
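To make the idea concrete, here is a minimal sketch, in Python, of computing on encrypted numbers. It uses the open-source python-paillier library rather than IBM's FHE toolkit, and Paillier supports only one of the two operations that Gentry's scheme combined:

    # A minimal sketch of computing on ciphertexts with python-paillier.
    # Paillier is only *partially* homomorphic - it adds ciphertexts and
    # multiplies them by plaintext constants, exactly the pre-Gentry
    # limitation described above - and is not IBM's FHE toolkit.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    enc_a = public_key.encrypt(3.5)
    enc_b = public_key.encrypt(1.5)

    enc_sum = enc_a + enc_b   # addition of two encrypted values
    enc_scaled = enc_a * 2    # multiplication by a plaintext constant
    # enc_a * enc_b would raise an error: multiplying two ciphertexts
    # together is what Gentry's 2009 scheme added.

    print(private_key.decrypt(enc_sum))     # 5.0
    print(private_key.decrypt(enc_scaled))  # 7.0

Only the holder of the private key can read the results; whoever runs the computation sees nothing but ciphertexts.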

When it comes to real-life applications, the engineering team started developing use cases for genomics (finding similarities between two genomic sequences, predicting a genetic predisposition to a specific condition or disease), oblivious queries (performing queries without revealing the query data), private set intersections (finding intersections of data without revealing anything more than the intersection), and prediction models for finance (investments, risk score determination).

In 2019, IBM managed to speed up homomorphic computation, making it orders of magnitude faster than previously believed possible.

How computing is done today

To stress the breakthrough of the research, Flavio demonstrated how computing is done today using a diagram that involves data exchange between two entities, Alice and Bob, plus Eve trying to eavesdrop on the communication.

When Alice needs some service from an entity we call Bob, she encrypts the data while it is in storage or in transmission, to prevent Eve from grabbing unprotected data. Even if Eve steals that data, she gets it only in encrypted form. But Bob needs to decrypt the data in order to do anything with it.

I guess I seemed a bit puzzled by his diagram, so Flavio came up with a real-life example: when you buy something from an online shopping site, you send your credit card details, and most of the time those details travel to the site through an encrypted channel. But when they arrive, the service needs to decrypt that information to process your order. This is the "honest but curious" threat model: the service is honest about what it proposes to do for you, i.e. process a payment/transaction, but curious in that it wants to learn or extract information from your data.

With homomorphic encryption this model changes, because now the entity providing the service, Bob, not only cannot see the data, he doesn't even have the ability to decrypt it, because he doesn't have the key. Nevertheless, he can still compute on that data and provide the service he proposed to provide.

Shift in the security paradigm

Both Flavio and I agreed that security is crucial, that protecting data privacy has become a major concern for users, and that companies need to be careful when handling data.

"Before homomorphic encryption was discovered, you would first implement the business logic of the application, and then the security team would build walls around it to protect it. Data would be encrypted for storage on disk or in transmission, but would have to be decrypted whenever you needed to do something with it," he added.

Homomorphic encryption changed the picture because now the cryptography is entangled with the business logic, and the data can stay encrypted at rest/in storage, in transmission, and even while we are computing on it.

The finance opportunity

Financial organisations have many different departments: a bank could have retail banking, loans, investments, insurance, health insurance, etc. This translates into a lot of information which, due to privacy legislation such as GDPR or antitrust and anti-competitive business legislation, may not be combined by analysts in the clear, as there is too much risk of data exfiltration and leaks. If all this data is encrypted and computation can still be performed without accessing it, a leak matters far less, because what leaks is encrypted. The machine, without ever seeing the data in the clear, can run models to analyse and predict data for marketing, fraud detection, loans and the financial health of the account holder, and so offer services.

By using HE-encrypted models and data, the IBM team demonstrated the feasibility of predicting whether a customer might soon need a personal loan, enabling targeted marketing. "Typically, this is done behind a firewall in a segregated environment," Flavio explained, limiting a bank to machine learning tools and resources built or installed in-house. Homomorphic encryption can protect the privacy and confidentiality of data used both in creating predictive models and in running predictions, theoretically freeing the bank to safely outsource sensitive data to a hybrid and/or public cloud for analysis with peace of mind.

Finally, I was fully convinced. Let's say you are looking to make an investment with a bank, and you don't want to reveal to your bank what sort of volumes you might invest. In this case, the bank could deploy machine learning models on your encrypted data to predict the investment risk or returns, and make you an offer, which you might or might not accept.

I would like to thank Flavio and the whole IBM team for an insightful presentation on homomorphic encryption, and what better way to conclude than to quote him: "Imagine what you could do if you could compute on encrypted data without ever decrypting it." Feel free to share your thoughts with us at mirelac@thepaypers.com.

About Flavio Bergamaschi

Flavio Bergamaschi is a Senior Research Scientist and currently leads the group developing IBM's Fully Homomorphic Encryption (FHE) technology for robustness, serviceability and usability, and designing and developing real-world FHE applications. He also represents IBM in industry-wide homomorphic encryption standards efforts.

His areas of expertise include cryptography, distributed systems (MIMD & SIMD), signal processing and machine learning.

About Mirela Ciobanu

Mirela Ciobanu is a Senior Editor at The Paypers and has been actively involved in covering digital payments and related topics, especially in the cryptocurrency, online security and fraud prevention space. She is passionate about finding the latest news on data breaches, machine learning, digital identity and blockchain, and she is an active advocate of the need to keep our online data and presence protected. Mirela has a bachelor's degree in English language and holds a master's degree in Marketing.

Excerpt from:

Taking UX and finance security to the next level with IBM's machine learning - The Paypers

Government invests €49m in data analytics, machine learning and AI – Business World

Minister for Business, Enterprise and Innovation, Heather Humphreys and Minister for Training, Skills, Innovation, Research and Development, John Halligan today announced a Government investment of €49 million through Science Foundation Ireland in the Insight SFI Research Centre for Data Analytics.

This Government investment will secure a further €100 million from industry and other international sources, such as the European Union, over the next six years to further harness the power of data analytics, machine learning and artificial intelligence (AI).

Through this new investment, Insight will continue its research via a set of three demonstrator projects under the themes of Augmented Human, Smart Enterprise and Sustainable Societies. In addition, it will significantly expand its Education and Outreach Programme, including a new Citizen Science initiative.

Insight was established in 2013 through an initial SFI investment of €43 million and has delivered an economic impact of €593m to the Irish economy. For every €1 of state investment, €5.54 is returned to the economy on an overall leveraged basis.

This funding was supplemented by €63 million from EU sources and industry. That means for every €1 of SFI funding, another €1.46 in additional investment has come from those other sources. During this period Insight has produced over 2,000 publications, trained 184 postdoctoral graduates and established 11 spin-out companies, and with this new funding it will continue to develop these outputs.

Insight is hosted at four higher education institutions - Dublin City University, National University of Ireland Galway, University College Cork and University College Dublin - and works in partnership with Maynooth University, Trinity College Dublin, Tyndall National Institute and the University of Limerick.

Commenting on the announcement, Science Foundation Ireland's Director General and Chief Scientific Adviser to the Government of Ireland, Professor Mark Ferguson, said: "Insight's research is equipping indigenous Irish companies to harness the power of data analytics, machine learning and AI to become more competitive and open new markets. The SFI Research Centres continue to attract and retain multinational organisations who want to conduct high-value research in Ireland. Centres like Insight are seeding the next generation of world-class innovators in our universities."

Minister Humphreys added: "Many traditional job roles are changing, and with Brexit and other international challenges on the horizon, we must continue to plan ahead, focus on what is within our control domestically and be the masters of our own destiny. Insight is playing an important role in our plans to prepare now for tomorrow's world by keeping Ireland at the cutting edge of innovation in this important sector."

Source: http://www.businessworld.ie

Read the original here:

Government invests €49m in data analytics, machine learning and AI - Business World

10 Machine Learning Techniques and their Definitions – AiThority

When one technology replaces another, it's not easy to accurately ascertain how the new technology will impact our lives. With so much buzz around the modern applications of Artificial Intelligence, Machine Learning, and Data Science, it becomes difficult to track the developments of these technologies. Machine Learning, in particular, has undergone a remarkable evolution in recent years. Many Machine Learning (ML) techniques have come to the foreground recently, most of which go beyond the traditional, simple classifications of this highly scientific Data Science specialization.


Let's look at the top ML techniques that industry leaders and investors are keenly following, along with their definitions and commercial applications.

Perceptual Learning is the scientific technique of enabling AI ML algorithms with better perception abilities to categorize and differentiate spatial and temporal patterns in the physical world.

For humans, Perceptual Learning is mostly instinctive and condition-driven. It means humans learn perceptual skills without actual awareness. In the case of machines, these learning skills are mapped implicitly using sensors, mechanoreceptors, and connected intelligent machines.

Most AI ML engineering companies boast of developing and delivering AI ML models that run on an automated platform, openly challenging the need for a Data Scientist in the engineering process.

Automated Machine Learning (AutoML) is defined as fully automating the entire process of Machine Learning model development, right up to its application.

AutoML enables companies to leverage AI ML models in an automated environment without truly seeking the involvement and supervision of Data Scientists, AI Engineers or Analysts.

Google, Baidu, IBM, Amazon, H2O, and a number of other technology-innovation companies already offer AutoML environments for many commercial applications. These applications have swept into every possible business in every industry, including Healthcare, Manufacturing, FinTech, Marketing and Sales, Retail, Sports and more.
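As a rough illustration of what such an environment looks like in code, here is a minimal sketch using the open-source H2O AutoML Python API (H2O being one of the vendors named above); the dataset path and target column are hypothetical:

    # A rough sketch with H2O's open-source AutoML Python API; the dataset
    # path and target column name below are hypothetical placeholders.
    import h2o
    from h2o.automl import H2OAutoML

    h2o.init()
    train = h2o.import_file("train.csv")      # hypothetical dataset
    y = "label"                               # hypothetical target column
    x = [c for c in train.columns if c != y]
    train[y] = train[y].asfactor()            # treat the target as categorical

    # AutoML searches model families and hyperparameters automatically.
    aml = H2OAutoML(max_models=10, seed=1)
    aml.train(x=x, y=y, training_frame=train)
    print(aml.leaderboard.head())             # ranked candidate models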

Bayesian Machine Learning is a unique specialization within AI ML that leverages statistical models along with Data Science techniques. Any ML technique that uses Bayes' Theorem and the Bayesian statistical modeling approach falls under the purview of Bayesian Machine Learning.
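At the heart of every such technique is the Bayesian update itself; a minimal worked example in Python, with hypothetical diagnostic-test numbers:

    # A minimal worked example of Bayes' Theorem: update a prior belief
    # after seeing evidence. All numbers here are hypothetical.
    prior = 0.01               # P(disease)
    sensitivity = 0.95         # P(positive | disease)
    false_positive = 0.05      # P(positive | no disease)

    evidence = sensitivity * prior + false_positive * (1 - prior)  # P(positive)
    posterior = sensitivity * prior / evidence                     # Bayes' rule
    print(f"P(disease | positive) = {posterior:.3f}")              # ~0.161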

Contemporary applications of Bayesian ML typically involve the open-source programming language Python. Unique applications include

A good ML program would be expected to perpetually learn to perform a set of complex tasks. This learning mechanism is the subject of a specialized branch of AI ML techniques called Meta-Learning.

The industry-wide definition of Meta-Learning is the ability to learn, and to generalize AI to different real-world scenarios beyond those encountered during ML training, using a specific volume and variety of data.

Meta-Learning techniques can be further differentiated into three categories

In each of these categories there is a learner, a meta-learner, and labeled vectors that map data, time and spatial features into a set of network processes, weighing real-world scenarios labeled with context and inferences.

Most recent Image Processing and Voice Search techniques rely on Meta-Learning for their outcomes.

Adversarial ML is one of the fastest-growing and most sophisticated of all ML techniques. It is defined as the ML technique adopted to test and validate the effectiveness of any Machine Learning program in an adverse situation.

As the name suggests, it is the antagonistic counterpart of genuine AI, used nonetheless to test the robustness of any ML technique when it encounters a unique, adverse situation. It is mostly used to fool an ML model into doubting its own results, thereby inducing a malfunction.
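One widely cited way of crafting such adverse inputs is the Fast Gradient Sign Method (FGSM), just one of many adversarial techniques; a minimal PyTorch sketch, where the model, image and label are placeholders:

    # A minimal sketch of the Fast Gradient Sign Method (FGSM), one common
    # adversarial technique; `model`, `image` and `label` are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Nudge each input value in the direction that most increases the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range

A perturbation this small is often invisible to a human, yet it can flip the model's prediction, which is exactly the malfunction adversarial testing looks for.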

Most ML models can generate an answer for one single parameter. But can they answer for an unknown or variable parameter x? That's where Causal Inference ML techniques come into play.

Most online AI ML courses teach causal inference as a core ML modeling technique. The Causal Inference ML technique is defined as the causal reasoning process of drawing a unique conclusion based on the impact that variables and conditions have on the outcome. This technique is further categorized into Observational ML and Interventional ML, depending on what is driving the Causal Inference algorithm.
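As a small observational-ML illustration, with entirely synthetic data, here is the gap between a naive correlation and an adjusted causal estimate when a confounder drives both treatment and outcome:

    # Synthetic illustration: a confounder Z drives both treatment T and
    # outcome Y, so the naive difference is biased, while adjusting for Z
    # (the backdoor criterion) recovers the true effect of 2.0.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.integers(0, 2, n)                     # confounder
    t = rng.binomial(1, 0.2 + 0.6 * z)            # treatment depends on Z
    y = 2.0 * t + 3.0 * z + rng.normal(0, 1, n)   # true causal effect of T: 2.0

    naive = y[t == 1].mean() - y[t == 0].mean()   # biased upward by Z

    # Average the within-stratum effect over the distribution of Z.
    adjusted = sum(
        (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean())
        * (z == v).mean()
        for v in (0, 1)
    )
    print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # adjusted is ~2.0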

Also commercially popularized as Explainable AI (XAI), this technique involves the use of neural networks and interpretation models to make ML structures more easily understood by humans.

Deep Learning Interpretability is defined as the ML specialization that removes the black box from AI models, enabling decision-makers and data officers to understand data modeling structures and to legally permit the use of AI ML for general purposes.

The ML technique may use one or more of these techniques for Deep Learning Interpretation.
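One commonly used open-source interpretability tool is SHAP; a minimal sketch follows, with a stand-in XGBoost model trained on a public scikit-learn dataset (not tied to any one method listed above):

    # A minimal sketch with the open-source SHAP library; the XGBoost model
    # and the public scikit-learn dataset are stand-ins.
    import shap
    import xgboost
    from sklearn.datasets import load_breast_cancer

    data = load_breast_cancer()
    model = xgboost.XGBClassifier().fit(data.data, data.target)

    explainer = shap.Explainer(model)    # picks an explanation algorithm
    shap_values = explainer(data.data)   # per-feature contribution per prediction
    shap.plots.beeswarm(shap_values)     # shows which features drive the output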

Much data can be accurately represented using graphs. In Machine Learning, a graph is a data structure consisting of two components: Vertices (or nodes) and Edges.

Graph ML is a specialized technique for modelling problems in terms of vertices and edges. Graph Neural Networks (GNNs) build on this representation, alongside related families such as convolutional NNs (CNNs) and other artificial NNs (ANNs).
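For concreteness, a minimal Python sketch of that data structure:

    # An undirected graph stored as an adjacency list: each vertex maps to
    # the set of its neighbors.
    from collections import defaultdict

    class Graph:
        def __init__(self):
            self.adjacency = defaultdict(set)  # vertex -> neighboring vertices

        def add_edge(self, u, v):
            self.adjacency[u].add(v)
            self.adjacency[v].add(u)

    g = Graph()
    g.add_edge("A", "B")
    g.add_edge("B", "C")
    print(sorted(g.adjacency["B"]))  # ['A', 'C']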

There are at least 50 more ML techniques that could be learned and deployed using various NN models and systems, and the leading ML companies are constantly transforming Data Science applications with them.

(To share your insights about ML techniques and commercial applications, please write to us at info@aithority.com)

More:

10 Machine Learning Techniques and their Definitions - AiThority

Appearance of proteins used to predict function with machine learning – Drug Target Review

Researchers have used a machine-learning algorithm to study protein appearance and discover common features that influence function, which could be used to design artificial cells.

Researchers at EPFL have developed a new way to predict a protein's interactions with other proteins and biomolecules and its biochemical activity, merely by observing its surface (credit: Laura Persat / 2019 EPFL).

A new machine learning-driven technique has been able to predict the interactions between proteins and describe biochemical activity based on surface appearance.

The study was conducted at the Laboratory of Protein Design & Immunoengineering (LPDI), Switzerland, in collaboration with other researchers.

According to the team, the method, known as MaSIF, could support the development of protein-based components for artificial cells in novel therapeutics.


The researchers took a vast set of protein surface data and fed the chemical and geometric properties into a machine-learning algorithm, training it to match these properties with particular behaviour patterns and activity. They used the remaining data to test the algorithm.
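MaSIF's actual pipeline is far more involved, but the train/test pattern the team describes looks roughly like this generic scikit-learn sketch, with synthetic stand-in features and labels:

    # A generic sketch of the train/test pattern described (not MaSIF itself);
    # the surface features and interaction labels are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))           # chemical + geometric surface features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for an interaction label

    # Hold out part of the data to test the trained model, as the study describes.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier().fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")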

"By scanning the surface of a protein, our method can define a fingerprint, which can then be compared across proteins," says Pablo Gainza, the first author of the study.

The team found that proteins performing similar interactions share common features.

"The algorithm can analyse billions of protein surfaces per second," says LPDI director Bruno Correia. "Our research has significant implications for artificial protein design, allowing us to program a protein to behave a certain way merely by altering its surface chemical and geometric properties."

The method could also be used to analyse the surface structure of other types of molecules, say the researchers.

The findings were published in Nature Methods.

Read the rest here:

Appearance of proteins used to predict function with machine learning - Drug Target Review

Google is using machine learning to make alarm tones based on the time and weather – The Verge

Google has an update that might make you hate your alarm a little bit less: a new feature lets it automatically change what your alarm plays based on the time of day and the weather, theoretically playing something slightly more appropriate than the same awful song you hear day in and day out. At least, it'll be nice as long as you're okay with waking up to AI-generated piano.

The feature is confined to a single device for now: Lenovo's Smart Clock, a small smart display that basically has the functionality of a Google Home Mini paired with a screen that can show the time and weather. Google says this feature - which it calls Impromptu - is part of Google Assistant, though, which suggests it should reach other smart displays, and perhaps even phones, in the future. The announcement doesn't say when or whether it'll expand, however.

Google says all of the music is created and chosen by Magenta, an open-source music tool built around machine learning that Google has been developing. In a blog post, Google says the system might select this song if the weather is below 50 degrees (I'm assuming Fahrenheit) early in the morning. I don't know exactly what about this song says "cool and pre-dawn," but I'd be down to listen to anything other than the default alarm tone I've heard every day for years.
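Google hasn't published Impromptu's selection logic, but purely as a toy illustration of conditioning an alarm on time and weather (every threshold and style name below is hypothetical):

    # Purely illustrative - Google hasn't published Impromptu's logic. A toy
    # version of conditioning an alarm style on time and weather; every
    # threshold and style name here is hypothetical.
    def pick_alarm_style(hour: int, temp_f: float) -> str:
        if hour < 6 and temp_f < 50:
            return "slow ambient piano"     # cool and pre-dawn
        if temp_f >= 75:
            return "bright upbeat melody"   # warm day
        return "neutral mid-tempo piano"

    print(pick_alarm_style(hour=5, temp_f=42))  # slow ambient piano

The real system presumably feeds signals like these into Magenta's generative models rather than picking from a fixed list.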

The feature is rolling out globally today to Lenovo's device. The smart clock, which used to retail for $80, now appears to be down to $50, making it a lot more competitive with Amazon's $60 Echo Show 5.

Excerpt from:

Google is using machine learning to make alarm tones based on the time and weather - The Verge

The NFL And Amazon Want To Transform Player Health Through Machine Learning – Forbes

The NFL and Amazon announced an expansion of their partnership at the annual AWS re:Invent conference in Las Vegas that will use artificial intelligence and machine learning to combat player injuries. (Photo by Michael Zagaris/San Francisco 49ers/Getty Images)

Injury prevention in sports is one of the most important issues facing a number of leagues. This is particularly true in the NFL, due to the brutal nature of that punishing sport, which leaves many players sidelined at some point during the season. A number of startups are utilizing technology to address football injury issues, specifically limiting the incidence of concussions. Now, one of the largest companies in the world is working with the league in these efforts.

A week after partnering with the Seattle Seahawks on its machine learning/artificial intelligence offerings, Amazon announced a partnership Thursday in which the technology giant will use those same tools to combat football injuries. Amazon has been involved with the league through its Next Gen Stats partnership, and now the two will work to advance player health and safety as the sport moves forward after its 100th season this year. Amazon's AWS cloud services will use its software to analyze the large volumes of player health data the league is already collecting. It will also scan video images with the objective of helping teams treat injuries and rehabilitate players more effectively. The larger goal is to create a new Digital Athlete platform to anticipate injury before it even takes place.

This partnership expands the quickly growing relationship between the NFL and Amazon/AWS, as the two have already teamed up for two years with the league's Thursday Night Football games streamed on the company's Amazon Prime Video platform. Amazon paid $130 million for rights that run through next season. The league also uses AWS's ML Solutions Lab, as well as Amazon's SageMaker platform, which enables data scientists and developers to build and deploy machine learning models that can also serve the league's ultimate goal of predicting and limiting player injury.

"The NFL is committed to re-imagining the future of football," said NFL Commissioner Roger Goodell. "When we apply next-generation technology to advance player health and safety, everyone wins - from players to clubs to fans. The outcomes of our collaboration with AWS and what we will learn about the human body and how injuries happen could reach far beyond football. As we look ahead to our next 100 seasons, we're proud to partner with AWS in that endeavor."

The new initiative was announced as part of Amazon's AWS re:Invent conference in Las Vegas on Thursday. Among the technologies that AWS and the league announced for the Digital Athlete platform is a computer-simulated model of an NFL player that will model infinite scenarios within NFL gameplay in order to identify a game environment that limits the risk to a player. Digital Athlete draws on Amazon's full arsenal of technologies - including the AI, ML and computer vision technology behind Amazon's Rekognition tool - and uses enormous data sets of historical and more recent video to identify a wide variety of solutions, including the prediction of player injury.

"By leveraging the breadth and depth of AWS services, the NFL is growing its leadership position in driving innovation and improvements in health and player safety, which is good news not only for NFL players but also for athletes everywhere," said Andy Jassy, CEO of AWS. "This partnership represents an opportunity for the NFL and AWS to develop new approaches and advanced tools to prevent injury, both in and potentially beyond football."

These announcements come at a time when more NFL players are utilizing their large platforms to bring awareness to injuries and the enormous impact those injuries have on their bodies. Former New England Patriots tight end Rob Gronkowski has been one of the most productive NFL players at his position in league history, but he had to retire from the league this year, at the age of 29, due to a rash of injuries.

The future Hall of Fame player estimated that he suffered "probably 20" concussions in his football career. Such admissions have significant consequences for youth participation rates in the sport. Partnerships like the one announced yesterday will need to be successful in order for the sport to remain on solid footing heading into the new decade.

View post:

The NFL And Amazon Want To Transform Player Health Through Machine Learning - Forbes

Scientists are using machine learning algos to draw maps of 10 billion cells from the human body to fight cancer – The Register

AI algorithms are helping scientists map ten billion cells from the human body in an attempt to unlock the mysteries of how life emerges from the embryo, or how diseases like cancer manifest.

Dana Pe'er, the current chair and professor in computational and systems biology at the Memorial Sloan Kettering Cancer Center, a research lab in New York focused on cancer treatment, described machine learning as a toolbox for building the Human Cell Atlas. The project aims to turn data from billions of tissue-sample cells into 3D maps so scientists can visualize our bodies down at the smallest units.

Life develops from a single embryo. How does that initial cell go on to produce nearly 40 trillion cells to form a human body that is able to move and think?

The process can be largely described by cell differentiation, where a stem cell morphs into a specialized unit that carries out a vital function for a particular organ. Genetic information stored in DNA and encoded into every cell carries instructions on how to build different cell types for the body.

Although scientists broadly understand the process, they're still perplexed at how it works at the cellular level.

"Cells are like tiny computers that get input from their environment, signals [and nutrients] from other cells," Pe'er said on stage during the Conference on Neural Information Processing Systems in Vancouver on Tuesday.

"They have all sorts of proteins that are their processing devices. They make decisions, they interact with each other due to their biochemistry and molecular biology, and decide whether they're going to proliferate, make more copies of themselves, differentiate, enter a new cell type, activate, or release some molecule to talk with another cell. They're really like little computers and we want to know how they work."

The problem with studying cells, however, is the sheer amount of data they produce. The genetic code describing the RNA in cells from one tissue sample is represented as a series of numbers in a giant matrix. At first glance these matrices don't make much sense, but they can be turned into 3D maps with the help of AI algorithms.

Common machine learning techniques and models like t-SNE, k-nearest neighbors, Markov chains, or even deep learning have allowed biologists to visualize the behavior of cells. The jumbled stream of numbers describing a cell can now be represented as a clear graph that clusters the data by cell type and function.
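As a minimal sketch of that dimensionality-reduction step, here is scikit-learn's t-SNE applied to a synthetic stand-in for an expression matrix; real single-cell pipelines add heavy preprocessing:

    # A minimal sketch of the dimensionality-reduction step, using
    # scikit-learn's t-SNE on a synthetic stand-in for an expression matrix.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)
    # Hypothetical matrix: 300 cells x 50 genes, with two cell populations.
    expression = np.vstack([
        rng.normal(0.0, 1.0, size=(150, 50)),
        rng.normal(1.5, 1.0, size=(150, 50)),
    ])

    embedding = TSNE(n_components=2, random_state=0).fit_transform(expression)
    print(embedding.shape)  # (300, 2): coordinates that cluster cells by type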

Scientists have managed to trace the source of acute lymphoblastic leukemia, the most common cancer in children, to a rare cell type that crops up in only seven out of 10,000 cells. Pe'er described how data visualization has also allowed researchers to discover how a single mutation in a pancreatic cell can lead to cancer.

The mutation tricks the immune system and it can no longer defend our bodies against cancer. All the knowledge gleaned from these visualizations can help scientists develop new drugs and methods that target diseases like cancer and speed up the process of clinical trials.

Pe'er hopes that the Human Cell Atlas will serve as a healthy reference for mapping disease. She called it a "candy land playground" for biologists. But although machine learning algorithms have already had a huge impact, the techniques are more successful at modelling common patterns in data than at highlighting anomalous behaviors.

"Our goal is not to predict but to understand, and in biology, the outlier is often the most important."


The rest is here:

Scientists are using machine learning algos to draw maps of 10 billion cells from the human body to fight cancer - The Register

Measuring Employee Engagement with A.I. and Machine Learning – Dice Insights

A small number of companies have begun developing new tools to measure employee engagement without requiring workers to fill out surveys or sit through focus groups. HR professionals and engagement experts are watching to see if these tools gain traction and lead to more effective cultural and retention strategies.

Two of these companies - Netherlands-based KeenCorp and San Francisco's Cultivate - glean data from day-to-day internal communications. KeenCorp analyzes patterns in an organization's (anonymized) email traffic to gauge changes in the level of tension experienced by a team, department or entire organization. Meanwhile, Cultivate analyzes manager email (and other digital communications) to provide leadership coaching.

These companies are likely to pitch to a ready audience of employers, especially in the technology space. With IT unemployment hovering around 2 percent, corporate and HR leaders can't help but be nervous about hiring and retention. When competition for talent is fierce, companies are likely to add more and more sweeteners to each offer until they reel in the candidates they want. Then there's the matter of retaining those employees in the face of equally sweet counteroffers.

That's why businesses devote a lot of effort and money to keeping their workers engaged. Companies spend more than $720 million annually on engagement, according to the Harvard Business Review. Yet their efforts have managed to engage just 13 percent of the workforce.

Given the competitive advantage tech organizations enjoy when their teams are happy and productive - not to mention the money they save by keeping employees in place - engagement and retention are critical. But HR can't create and maintain an engagement strategy if it doesn't know the workforce's mindset. So companies have to measure, and they measure primarily through surveys.

Today, many experts believe surveys don't provide the information employers need to understand their workforce's attitudes. Traditional surveys have their place, they say, but more effective methods are needed. They see the answer, of course, in artificial intelligence (A.I.) and machine learning (ML).

"One issue with surveys is they only capture a part of the information, and that's the part that the employee is willing to release," said KeenCorp co-founder Viktor Mirovic. When surveyed, respondents often hold back information, he explained, leaving unsaid data that has an effect similar to unheard data.

"I could try to raise an issue that you may not be open to because you have a prejudice," Mirovic added. If tools don't account for what's left unsaid and unheard, he argued, they provide an incomplete picture.

As an analogy, Mirovic described studies of combat aircraft damaged in World War II. By identifying where the most harm occurred, designers thought they could build safer planes. However, the study relied on the wrong data, Mirovic said. Why? Because they only looked at the planes that came back. The aircraft that presumably suffered the most grievous damage - those that were shot down - weren't included in the research.

None of this means traditional surveys don't provide value. "I think the traditional methods are still useful," said Alex Kracov, head of marketing for Lattice, a San Francisco-based workforce management platform that focuses on small and mid-market employers. "Sometimes just the idea of starting to track engagement in the first place, just to get a baseline, is really useful and can be powerful."

For example, Lattice itself recently surveyed its 60 employees for the first time. "It was really interesting to see all of the data available and how people were feeling about specific themes and questions," he said. Similarly, Kracov believes that newer methods such as pulse surveys - brief studies conducted at regular intervals - can prove useful in monitoring employee satisfaction, productivity and overall attitude.

Whereas surveys require an employee's active participation, the up-and-coming tools don't ask them to do anything more than their work. When KeenCorp's technology analyzes a company's email traffic, it's looking for changes in the patterns of word use and compositional style. Fluctuations in the product's index signify changes in collective levels of tension. When a change is flagged, HR can investigate to determine why attitudes are in flux and then proceed accordingly, either solving a problem or learning a lesson.
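KeenCorp's index is proprietary, but the general pattern - flagging sharp changes in a text-derived signal over time - can be illustrated with a simple rolling z-score in Python, with all details hypothetical:

    # KeenCorp's method is proprietary; this only illustrates the generic
    # pattern of flagging sharp shifts in a text-derived daily signal.
    import numpy as np

    def flag_changes(daily_index, window=30, threshold=3.0):
        """Yield days where the index deviates sharply from its recent history."""
        series = np.asarray(daily_index, dtype=float)
        for day in range(window, len(series)):
            history = series[day - window:day]
            mu, sigma = history.mean(), history.std()
            if sigma > 0 and abs(series[day] - mu) / sigma > threshold:
                yield day  # a point HR might investigate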

"When I ask you a question, you have to think about the answer," Mirovic said. "Once you think about the answer, you start to include all kinds of other attributes. You know, you're my boss, or you've just given me a raise, or you're married to my sister. Those could all affect my response. What we try to do is go in as objectively as possible, without disturbing people as we observe them in their natural habitats."

See the original post here:

Measuring Employee Engagement with A.I. and Machine Learning - Dice Insights