US arrests three in alleged USD 722 mn cryptocurrency fraud – Business Standard

US authorities arrested three men in an alleged fraud that raised USD 722 million from investors lured by fake bitcoin mining earnings, the Justice Department announced Tuesday.

Prosecutors described the scam as a "high-tech Ponzi scheme" run by the "BitClub Network," which took money from investors and rewarded them for recruiting new shareholders.

From April 2014 through December 2019, the group attracted unsuspecting investors with fraudulent earnings figures purported to come from the network's mining pool, according to the statement.

The scheme was orchestrated from Passaic, New Jersey and constituted a "worldwide fraudulent scheme," according to an indictment signed by US Attorney Craig Carpenito of New Jersey.

In messages with his co-conspirators, defendant Matthew Brent Goettsche referred to investors as "dumb" and said he was "building the whole model on the backs of idiots" as he directed others to manipulate the figures, the Justice Department said.

Defendants "are accused of deploying elaborate tactics to lure thousands of victims with promises of large returns on their investments in a bitcoin mining pool," said Paul Delacourt, assistant director with the FBI's Los Angeles office.

"The defendants allegedly made hundreds of millions of dollars by continuing to recruit new investors over several years while spending victims' money lavishly."

The Justice Department charged Goettsche and Jobadiah Sinclair Weeks, both of Colorado, with conspiracy to commit wire fraud.

The two men were also charged with conspiracy to offer and sell unregistered securities, along with the third defendant, Joseph Frank Abel of California.

Justice Department officials said two other defendants remained at large and their identities are being withheld.

(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)


FinCEN Director Notes Improved Oversight of Cryptocurrency Industry – Cointelegraph

The director of the Financial Crimes Enforcement Network (FinCEN) says the cryptocurrency industry has begun to fall in line with the agency's regulations on money transmission services.

In a speech delivered at the American Bankers Association/American Bar Association Financial Crimes Enforcement Conference on Dec. 10, Kenneth A. Blanco claimed that FinCEN's May 2019 guidance was having a marked and positive impact on its oversight of the crypto space.

In May, FinCEN published guidance for crypto businesses that clarified how its regulations relating to money services businesses (MSBs) apply to certain business models in the crypto industry and carry specific obligations under the United States Bank Secrecy Act.

In his remarks, Blanco noted that since its publication, the agency has seen a significant increase in Suspicious Activity Reports (SARs): a total of 11,000, of which roughly two thirds (7,100) are from crypto businesses, including kiosks, exchanges, and peer-to-peer exchangers.

Ahead of May, he noted, filings from entities in the crypto space had accounted for markedly less: around half of the SARs the agency received.

Moreover, he observed that crypto businesses are increasingly internalizing the agency's key advisory terms and using them in their filings directly. He said he considers this to be an encouraging trend and a sign that the industry is making use of FinCEN's red flags and duly reporting suspicious activity.

As regards the content of the reports, Blanco said that the agency has observed an increase in filings from exchanges that identify possibly unregistered, overseas MSBs, specifically Venezuela-based peer-to-peer exchangers.

There has also been an increase in reporting of customers conducting crypto transactions with wallets linked to darknet marketplaces, as well as on activity that appears characteristic of scam victims, particularly novice crypto users, including the elderly.

Blanco closed his remarks with an appeal to businesses that are yet to abide by the agency's guidance:

"I think it is important for all financial institutions to ask themselves whether they are reporting such suspicious activity. If the answer is no, they need to reevaluate whether their institutions are exposed to cryptocurrency."

Blanco's speech confirms a persistent trend he had noted during a speech this August, when he revealed FinCEN was seeing a surge in SARs, with filings at the time exceeding 1,500 per month.

That same month, he directly appealed to casinos handling crypto payments to consider how they will conduct due diligence and comply with their reporting obligations.

This fall, the U.S. House of Representatives passed a bill requiring the Director of FinCEN to conduct a study on the use of emerging technologies, including blockchain, within the agency.


Five Men Charged With Running A $722 Million Cryptocurrency Fraud Scheme Built On The Backs Of Idiots – BroBible

Things have been pretty quiet on the cryptocurrency front in 2019.

After dominating financial news in 2017 and 2018, mostly with numerous warnings and stories of scams being perpetrated, the buzz has tapered off with barely a scandal to report.

That all changed this week when U.S. Attorney Craig Carpenito filed a 27-page indictment with the U.S. District Court in Newark, New Jersey.

In the indictment, five men were charged with conspiracy to commit wire fraud and conspiracy to offer and sell unregistered securities in connection with a cryptocurrency scam that bilked investors out of an eye-popping $722 million.

From April 2014 to December 2019, these five men allegedly ran a business called BitClub Network that, according to court documents, was described as "built on the backs of idiots" by one of the defendants.

"The indictment describes the defendants' use of the complex world of cryptocurrency to take advantage of unsuspecting investors," U.S. Attorney Carpenito said. "What they allegedly did amounts to little more than a modern, high-tech Ponzi scheme that defrauded victims of hundreds of millions of dollars. Working with our law enforcement partners here and across the country, we will ensure that these scammers are held to account for their crimes."

"Those arrested today are accused of deploying elaborate tactics to lure thousands of victims with promises of large returns on their investments in a bitcoin mining pool, an advanced method of profiting on cryptocurrency," said Paul Delacourt, the Assistant Director in Charge of the FBI's Los Angeles Field Office. "The defendants allegedly made hundreds of millions of dollars by continuing to recruit new investors over several years while spending victims' money lavishly."

"Today's indictment alleges the defendants were involved in a sophisticated Ponzi scheme involving hundreds of millions of dollars that preyed upon investors all over the world," said John R. Tafur, Special Agent in Charge, IRS Criminal Investigation, Newark Field Office. "This was a classic con game with a virtual twist: false promises of large returns for investing in the mining of Bitcoin. IRS Criminal Investigation will continue to work with our law enforcement partners, including the Joint Chiefs of Global Tax Enforcement, to investigate and bring to justice cyber criminals."

According to documents and statements made in court, one of the defendants, Matthew Brent Goettsche, 37, of Lafayette, Colorado, said that he and his co-conspirators' target audience would be "dumb" investors, referred to them as "sheep," and said he was "building this whole model on the backs of idiots."

Another defendant, Joseph Frank Abel, 49, of Camarillo, California, assured investors that BitClub Network was "too big to fail."

That'll be a cool story for them to tell their new roommates, as the wire fraud conspiracy charge carries a maximum sentence of 20 years in prison, and the conspiracy to sell unregistered securities charge carries a maximum sentence of another five years in the slammer. Each charge also carries a fine of up to $250,000 if the defendants are found guilty.

[NBC News]


Machine learning results: pay attention to what you don’t see – STAT

Even as machine learning and artificial intelligence are drawing substantial attention in health care, overzealousness for these technologies has created an environment in which other critical aspects of the research are often overlooked.

There's no question that the increasing availability of large data sources and off-the-shelf machine learning tools offers tremendous resources to researchers. Yet a lack of understanding about the limitations of both the data and the algorithms can lead to erroneous or unsupported conclusions.

Given that machine learning in the health domain can have a direct impact on people's lives, broad claims emerging from this kind of research should not be embraced without serious vetting. Whether conducting health care research or reading about it, make sure to consider what you don't see in the data and analyses.


One key question to ask is: Whose information is in the data and what do these data reflect?

Common forms of electronic health data, such as billing claims and clinical records, contain information only on individuals who have encounters with the health care system. But many individuals who are sick don't or can't see a doctor or other health care provider and so are invisible in these databases. This may be true for individuals with lower incomes or those who live in rural communities with rising hospital closures, a point University of Toronto machine learning professor Marzyeh Ghassemi made earlier this year.

Even among patients who do visit their doctors, health conditions are not consistently recorded. Health data also reflect structural racism, which has devastating consequences.

Data from randomized trials are not immune to these issues. As a ProPublica report demonstrated, black and Native American patients are drastically underrepresented in cancer clinical trials. This is important to underscore given that randomized trials are frequently highlighted as superior in discussions about machine learning work that leverages nonrandomized electronic health data.

In interpreting results from machine learning research, it's important to be aware that the patients in a study often do not represent the population we wish to draw conclusions about and that the information collected is far from complete.

It has become commonplace to evaluate machine learning algorithms based on overall measures like accuracy or area under the curve. However, one evaluation metric cannot capture the complexity of performance. Be wary of research that claims to be ready for translation into clinical practice but only presents a leaderboard of tools ranked on a single metric.

As an extreme illustration, an algorithm designed to predict a rare condition found in only 1% of the population can be extremely accurate by labeling all individuals as not having the condition. This tool is 99% accurate, but completely useless. Yet, it may outperform other algorithms if accuracy is considered in isolation.
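To make that arithmetic concrete, here is a minimal sketch in Python with scikit-learn (the 1% prevalence and the data are simulated assumptions, not from any study discussed here) showing how an always-negative "classifier" scores roughly 99% accuracy while recall and balanced accuracy expose that it never finds a single case.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, recall_score

rng = np.random.default_rng(0)

# Simulated population: the condition occurs in only ~1% of individuals.
y_true = (rng.random(100_000) < 0.01).astype(int)

# A "model" that labels every individual as not having the condition.
y_pred = np.zeros_like(y_true)

print(f"accuracy:             {accuracy_score(y_true, y_pred):.3f}")           # ~0.99
print(f"recall (sensitivity): {recall_score(y_true, y_pred):.3f}")             # 0.00 -- misses every case
print(f"balanced accuracy:    {balanced_accuracy_score(y_true, y_pred):.3f}")  # 0.50, i.e. chance
```

Reporting at least one prevalence-aware metric alongside accuracy makes this failure mode visible.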

What's more, algorithms are frequently not evaluated on multiple hold-out samples through cross-validation. Using only a single hold-out sample, as many published papers do, often leads to higher variance and misleading performance estimates.
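The variance point can be illustrated with a rough sketch on synthetic data (the dataset, model, and split sizes below are assumptions chosen for illustration): the score from a single random hold-out split bounces around far more than the average over repeated k-fold cross-validation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RepeatedKFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

# Twenty different single hold-out splits: each gives a different "result".
single_split_aucs = []
for seed in range(20):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    model.fit(X_tr, y_tr)
    single_split_aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Repeated 5-fold cross-validation averages the estimate over many folds.
cv = RepeatedKFold(n_splits=5, n_repeats=4, random_state=0)
cv_aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")

print(f"single hold-out AUCs range from {min(single_split_aucs):.3f} to {max(single_split_aucs):.3f}")
print(f"repeated 5-fold CV AUC: {cv_aucs.mean():.3f} +/- {cv_aucs.std():.3f}")
```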

Beyond examining multiple overall metrics of performance for machine learning, we should also assess how tools perform in subgroups as a step toward avoiding bias and discrimination. For example, artificial intelligence-based facial recognition software performed poorly when analyzing darker-skinned women. Many measures of algorithmic fairness center on performance in subgroups.
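One straightforward way to run such subgroup checks is to compute the same metric within each group. The sketch below uses entirely hypothetical group labels and a simulated model whose errors are concentrated in the smaller group; it is meant only to show the mechanics.

```python
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

# Hypothetical predictions for 1,000 patients, each with a recorded subgroup label.
group = rng.choice(["A", "B"], size=1_000, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=1_000)

# Simulate a model whose predictions are right less often in the smaller group B.
p_correct = np.where(group == "A", 0.85, 0.60)
y_pred = np.where(rng.random(1_000) < p_correct, y_true, 1 - y_true)

print(f"overall recall: {recall_score(y_true, y_pred):.2f}")
for g in ("A", "B"):
    mask = group == g
    print(f"recall in group {g}: {recall_score(y_true[mask], y_pred[mask]):.2f}")
```

An overall number can look acceptable while one subgroup is served far worse, which is exactly what single-metric leaderboards hide.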

Bias in algorithms has largely not been a focus in health care research. That needs to change. A new study found substantial racial bias against black patients in a commercial algorithm used by many hospitals and other health care systems. Other work developed algorithms to improve fairness for subgroups in health care spending formulas.

Subjective decision-making pervades research. Who decides what the research question will be, which methods will be applied to answering it, and how the techniques will be assessed all matter. Diverse teams are needed, and not just because they yield better results. As Rediet Abebe, a junior fellow of Harvard's Society of Fellows, has written, "In both private enterprise and the public sector, research must be reflective of the society we're serving."

The influx of so-called digital data that's available through search engines and social media may be one resource for understanding the health of individuals who do not have encounters with the health care system. There have, however, been notable failures with these data. But there are also promising advances using online search queries at scale where traditional approaches like conducting surveys would be infeasible.

Increasingly granular data are now becoming available thanks to wearable technologies such as Fitbit trackers and Apple Watches. Researchers are actively developing and applying techniques to summarize the information gleaned from these devices for prevention efforts.

Much of the published clinical machine learning research, however, focuses on predicting outcomes or discovering patterns. Although machine learning for causal questions in health and biomedicine is a rapidly growing area, we don't see a lot of this work yet because it is new. Recent examples of it include the comparative effectiveness of feeding interventions in a pediatric intensive care unit and the effectiveness of different types of drug-eluting coronary artery stents.

Understanding how the data were collected and using appropriate evaluation metrics will also be crucial for studies that incorporate novel data sources and those attempting to establish causality.

In our drive to improve health with (and without) machine learning, we must not forget to look for what is missing: What information do we not have about the underlying health care system? Why might an individual or a code be unobserved? What subgroups have not been prioritized? Who is on the research team?

Giving these questions a place at the table will be the only way to see the whole picture.

Sherri Rose, Ph.D., is associate professor of health care policy at Harvard Medical School and co-author of the first book on machine learning for causal inference, Targeted Learning (Springer, 2011).


Automation And Machine Learning: Transforming The Office Of The CFO – Forbes

By Steve Dunne, Staff Writer, Workday

In a recent McKinsey survey, only 13 percent of CFOs and other senior business executives polled said their finance organizations use automation technologies, such as robotic process automation (RPA) and machine learning. What's more, when asked how much return on investment the finance organization has generated from digitization and automation in the past 12 months, only 5 percent said it was a substantial return; the more common response was modest or minimal returns.

While that number may seem low right now, automation is coming to the finance function, and it will play a crucial role in furthering the CFOs position in the C-suite. Research suggests corporate finance teams spend about 80 percent of their time manually gathering, verifying, and consolidating data, leaving only about 20 percent for higher-level tasks, such as analysis and decision-making.

In its truest form, RPA will unleash a new wave of digital transformation in corporate finance. Instead of programming software to perform certain tasks automatically, RPA uses software robots to process transactions, monitor compliance, and audit processes automatically. This could slash the number of required manual tasks, helping to drive out errors and increase the efficiency of finance processes, handing back time to the CFO function to be more strategic.

According to the report "Companies Using AI Will Add More Jobs Than They Cut," companies that had automated at least 70 percent of their business processes saw stronger revenue performance than those that had automated less than 30 percent; in other words, more automation translated into more revenue. In fact, the highly automated group was six times more likely to have revenue growth of 15 percent per year or more.

In the right hands, automation and machine learning can be a fantastic combination for CFOs to transform the finance function, yet success will depend on automating the right tasks. The first goal for a finance team should be to automate the repetitive and transactional tasks that consume the majority of its time. Doing this will free finance up to be more of a strategic advisor to the business. An Adaptive Insights survey found that over 40 percent of finance leaders say that the biggest driver behind automation within their organizations is the demand for faster, higher-quality insights from executives and operational stakeholders.

Accenture's global talent and organization lead for financial services, Andrew Woolf, says the challenge for businesses is to pivot their workforce to enter an entirely new world where human ingenuity meets intelligent technology to unlock new forms of growth.

Transaction processing is one of the major barriers preventing finance from achieving transformation and the ultimate goal of delivering a better business partnership. It's not surprising that it's the first port of call for CFOs looking toward automation.

"RPA combined with machine learning provides finance leaders with a great way of optimising the way they manage their accounting processes. This has been a painful area of finance for such a long time and can have a direct impact on an organization's cash flow," says Tim Wakeford, vice president, financials product strategy, EMEA at Workday. "Finance spends a huge amount of time sifting through invoices and other documentation to manually correct errors in the general ledger, while machine learning could automate this, helping to intelligently match payments with invoices."

Machine learning can also mitigate financial risk by flagging suspect payments to vendors in real time. Internal and external fraud costs businesses billions of dollars each year. The current mechanism for mitigating such instances of fraud is to rely on manual audits on a sample of invoices. This means looking at just a fraction of total payments, and is the proverbial needle in the haystack approach to identifying fraud and mistakes. Machine learning can vastly increase the volume of invoices which can be checked and analyzed to ensure that organizations are not making duplicate or fraudulent payments.
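As a hedged illustration of the kind of check that can run over every payment rather than a sampled few, the sketch below flags exact duplicate vendor/invoice/amount rows with a simple rule and uses an isolation forest to surface anomalous amounts. The fields, thresholds, and data are invented for this example and do not describe any particular vendor's product.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical payment ledger; a real one would have many more rows and fields.
payments = pd.DataFrame({
    "vendor":  ["Acme", "Acme", "Globex", "Globex", "Initech"],
    "invoice": ["A-100", "A-100", "G-207", "G-208", "I-319"],
    "amount":  [1200.00, 1200.00, 540.00, 98000.00, 615.00],
})

# Rule-style check: the same vendor/invoice/amount appearing twice is a
# candidate duplicate payment.
dupes = payments[payments.duplicated(["vendor", "invoice", "amount"], keep=False)]
print("possible duplicate payments:\n", dupes, "\n")

# Model-style check: flag amounts that look anomalous for this ledger.
iso = IsolationForest(contamination=0.2, random_state=0)
payments["flagged"] = iso.fit_predict(payments[["amount"]]) == -1
print("flagged for manual review:\n", payments[payments["flagged"]])
```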

"Ensuring compliance with federal and international regulations is a critical issue for financial institutions, especially given the increasingly strict laws targeting money laundering and the funding of terrorist activities," explains David Axson, CFO strategies global lead, Accenture Strategy. "At one large global bank, up to 10,000 staffers were responsible for identifying suspicious transactions and accounts that might indicate such illegal activities. To help in those efforts, the bank implemented an AI system that deploys machine-learning algorithms to segment the transactions and accounts and set the optimal thresholds for alerting people to potential cases that might require further investigation."

Read the second part of this story, "How Automation and Machine Learning Are Reshaping the Finance Function," which takes a closer look at how automation and machine learning can drive change.

This story was originally published on the Workday blog.



Qualitest Acquires AI and Machine Learning Company AlgoTrace to Expand Its Offering – PRNewswire

LONDON, Dec. 12, 2019 /PRNewswire/ --Qualitest, the world's largest software testing and quality assurance company, has acquired AI and machine learning company AlgoTrace for an undisclosed amount. This acquisition marks the first step of Qualitest's growth strategy following an investment from Bridgepoint earlier this year.

The acquisition will allow Qualitest to radically expand the number of AI-powered testing solutions available to clients, as well as develop its capabilities in assisting companies test and launch new AI-powered solutions with greater confidence and speed. As software grows in complexity and the pressure to launch faster and more frequently increases, according to Gartner, companies that do not use AI to enhance their Quality Assurance will be at a significant disadvantage.

AlgoTrace's machine learning tools help brands answer business critical questions as they launch new software: what, where, when, and how to test and in what order to ensure consistently high quality. With multiple clients already using Qualitest's suite of AI-testing tools, this expansion of capabilities creates opportunities not only for new Qualitest clients, but also allows for the growth of existing relationships with current customers around the world.

Qualitest began working with the AlgoTrace team more than a year ago, with AlgoTrace's AI platform powering Qualitest's market-leading test predictor tool, which applies pioneering autonomous AI capabilities and predictive modeling to unstructured data without the need for code or complex interfaces. Following multiple successful joint projects, the teams saw that, together, they would be able to apply AlgoTrace's powerful prediction engine in a variety of ways across the software development lifecycle to improve quality and speed to market.

Qualitest's AI-testing solutions have two main features focused on increasing confidence and assurance. The first is assisting and enhancing quality assurance efforts, giving brands high levels of confidence, more rapidly, that software releases will go smoothly. The second is helping companies that use AI in their own offerings gain a higher level of confidence that their AI algorithms are generating correct, unbiased results.

Ron Ritter, CEO at AlgoTrace, said: "We are thrilled to be joining with Qualitest. Following successful implementations with the company in the past, we have complete faith that we will help Qualitest change the testing paradigm forever, enhancing their quality engineering with machine learning. While there is a lot of hype surrounding AI, we're deploying real, hard-nosed and practical tools that significantly change the rules."

Norm Merritt, CEO of Qualitest, said: "Applying AI to quality engineering is a perfect fit. Just as software becomes increasingly complex, the companies producing it are under competitive pressure to increase the speed and frequency of their rollouts. AI is the only way companies can scale software testing and quality engineering, and the AlgoTrace team have shown that they understand this. In our view, companies that do not use AI to improve quality will be at a significant disadvantage."

Aviram Shotten, Chief Knowledge and Innovation Officer at Qualitest, said: "Ron and his team are just the kind of innovators we love: smart, customer-obsessed and attacking a big market problem with cutting edge technology. This acquisition will not only help us accelerate AI adoption within quality engineering by providing a holistic solution to our clients, it provides an avenue for our teams to access AlgoTrace's unique expertise to build new models, tools and solutions to improve how technology is developed, tested and deployed."

About Qualitest

Qualitest is the world's largest independent managed services provider of quality assurance and testing solutions. As a strategic partner, Qualitest helps brands move beyond functional testing to adopt new innovations such as automation, AI, and crowd-sourced UX testing. It leverages its domain expertise across industries, including financial services, media and entertainment, retail, consumer goods, technology, gaming, telecom, among others. Qualitest's global service delivery platform includes the United States, Israel, UK, India and Romania. To learn more about Qualitest, visit http://www.qualitestgroup.com.

About AlgoTrace

AlgoTrace was founded in 2016 as a data science company focused on building automated machine learning tools. It builds the tools data analysts and data scientists need to simplify and accelerate prediction modelling processes. Its software helps organizations make the right decisions based on clear inputs grounded in facts and patterns discovered by its prediction engine. Its mission is to empower data scientists and analysts to create accurate and stable prediction models faster than ever.

SOURCE Qualitest



Industry Call to Define Universal Open Standards for Machine Learning Operations and Governance – MarTech Series

Defining open standards is essential for deploying and governing machine learning models at scale for enterprise businesses

Cloudera, the enterprise data cloud company, asks for industry participation in defining universal open standards for machine learning operations (MLOps) and machine learning model governance. By contributing to these standards, the community can help companies make the most of their machine learning platforms and pave the way for the future of MLOps.

"Machine learning models are already part of almost every aspect of our lives, from automating internal processes to optimizing the design, creation, and marketing behind virtually every product consumed," said Nick Patience, founder and research vice president, software at 451 Research. "As ML proliferates, the management of those models becomes challenging, as they have to deal with issues such as model drift and repeatability that affect productivity, security and governance. The solution is to create a set of universal, open standards so that machine learning metadata definitions, monitoring, and operations become normalized, the way metadata and data governance are standardized for data pipelines."


"At Cloudera, we don't want to solve the challenge of deploying and governing machine learning models at scale only for our customers; we agree it needs to be addressed at the industry level. Apache Atlas is the best positioned framework to integrate data management and explainable, interoperable, and reproducible MLOps workflows," said Doug Cutting, chief architect at Cloudera. "The Apache Atlas project fits all the needs for defining ML metadata objects and governance standards. It is open-source, extensible, and has pre-built governance features."
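To give a feel for what "ML metadata definitions" might cover, here is a purely illustrative sketch of a model governance record. The field names and structure are assumptions made for this example; they are not Apache Atlas or Cloudera type definitions.

```python
from dataclasses import asdict, dataclass, field
from datetime import date
import json

@dataclass
class ModelRecord:
    """Illustrative governance metadata for one deployed model version."""
    name: str
    version: str
    training_dataset: str                 # lineage pointer back to the data pipeline
    owner: str
    deployed_on: date
    metrics: dict = field(default_factory=dict)
    monitoring: dict = field(default_factory=dict)  # e.g. drift thresholds, review cadence

record = ModelRecord(
    name="churn-risk",
    version="1.4.0",
    training_dataset="warehouse.customers_2019q3",
    owner="data-science@example.com",
    deployed_on=date(2019, 12, 1),
    metrics={"auc": 0.81},
    monitoring={"max_feature_drift_psi": 0.2, "review_cadence_days": 30},
)
print(json.dumps(asdict(record), default=str, indent=2))
```

Standardizing records like this across tools is the kind of normalization the proposed open standards aim at: the same model lineage, metrics, and monitoring thresholds readable by any platform.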

Industry Call for Standards

"Open source and open APIs have powered the growth of data science in business. But deploying and managing models in production is often difficult because of technology sprawl and siloing," said Peter Wang, CEO of Anaconda. "Open standards for ML operations can reduce the clutter of proprietary technologies and give businesses the agility to focus on innovation. We are very pleased to see Cloudera lead the charge for this important next step."

"As leaders in creating a machine learning oriented data strategy across our organization, we know what is required to address the challenges with deploying ML models into production at scale and building an ML-driven business," said Daniel Stahl, SVP model platforms at Regions Financial Corporation. "A fundamental set of model design principles enables the repeatable, transparent, and governed approaches necessary for scaling model development and deployment. We join Cloudera in calling for open industry standards for machine learning operations."


"At Santander, we focus on using machine learning to preemptively fight fraud and protect our customers," said Luan Vasconcelos Corumba, data science leader for fraud prevention at Santander Bank. "Because there are many different types of fraud across many channels, scaling and maintaining this effort requires dynamic approaches to monitoring and governing models, with sometimes hundreds of features to check on an ongoing weekly basis. We endorse these standards because establishing and implementing open universal standards for our production ML workflows can not only help us better protect our customers but will also enable our teams to drive adoption and deliver cost-effective, accurate predictions continuously."



Schneider Electric Wins ‘AI/ Machine Learning Innovation’ and ‘Edge Project of the Year’ at the 2019 SDC Awards – PRNewswire

LONDON, Dec. 12, 2019 /PRNewswire/ --Schneider Electric,the leader in digital transformation of energy management and automation, has today announced that it has won two categories at the 2019 SDC Awards for 'AI/Machine Learning Innovation of the Year' and 'edge project of the year.'

"I'm delighted to accept these prestigious awards on behalf of Schneider Electric," said Marc Garner Vice President, Secure Power Division UK&I. "As the industry's next-generation data centre infrastructure management (DCIM) platform, EcoStruxure IT leverages AI and ML technologies to proactively prevent downtime in data centre and edge computing environments. The software also provides end-users and partners with increased visibility that streamlines servicing and improves both operational and energy efficiency, something, which was instrumental for the Wellcome Sanger Institute."

The award for 'AI/Machine Learning Innovation of the Year' was presented to Schneider Electric for their next-generation DCIM platform EcoStruxure IT, which brings secure, vendor agnostic, wherever-you-go monitoring for all IoT-enabled physical infrastructure assets. With the ability to integrate securely with other manufacturer applications, the software delivers complete visibility into today's data centre and edge environments, from anywhere, at any time and on any device via the cloud.

In collaboration with Elite Channel Partner EfficiencyIT (EIT), Schneider Electric was awarded a second accolade for 'edge project of the year' for work completed for prestigious customer, the Wellcome Sanger Institute. The Wellcome Sanger Institute is one of the world leaders in genomic research and its research deals with some of the biggest medical research questions across the biggest challenges in human diseases - from cancer and malaria to measles and cholera.

Essential to the research function are the Institute's DNA sequencing machines, which produce terabytes of raw information each day. Due to the vast quantity of data, the criticality of local applications and the need for ultra-low latency, cloud hosting would present them with a number of complications and incur significant connectivity costs. The Institute, therefore, hosts Europe's largest, on-premise genomic data centre and uses its high-performance processing capabilities to store and analyse data in real-time.

Under the guidance of EIT, Sanger has deployed Schneider Electric's EcoStruxure IT to proactively manage the data centre, and to improve energy efficiency and resiliency. The campus has issues with power reliability, and any outage could result in loss of important genomic data and costly replacement of sequencing chemicals. Therefore, to protect the laboratory processes from downtime, the Institute has installed individual Schneider Electric Smart-UPS uninterruptible power supplies on each of its sequencers.

"EcoStruxure IT was selected due to its open-based architecture, which allows us to integrate with the technology already in place on campus, and because we considered it best-in-class for the Institute's requirements," said Simon Binley, Data Centre Manager, Wellcome Sanger Institute. "The platform provides us with increased visibility into the entire data centre environment and enables us to improve energy efficiency, meaning in time, more funding will be available for critical research that will benefit all of humankind."

To find out more about Schneider Electric's next generation DCIM platform EcoStruxure IT, please click here.

About Schneider Electric

At Schneider, we believe access to energy and digital is a basic human right. We empower all to make the most of their energy and resources, ensuring Life Is On everywhere, for everyone, at every moment.

We provide energy and automation digital solutions for efficiency and sustainability. We combine world-leading energy technologies, real-time automation, software and services into integrated solutions for Homes, Buildings, Data Centers, Infrastructure and Industries.

We are committed to unleash the infinite possibilities of an open, global, innovative community that is passionate about our Meaningful Purpose, Inclusive and Empowered values.

https://www.se.com/uk/en/


SOURCE Schneider Electric



IQVIA on the adoption of AI and machine learning – OutSourcing-Pharma.com

Artificial intelligence (AI) and machine learning (ML) have become central topics in the pharma industry in 2019. Greater levels of investment are being funneled in this direction and a greater number of partnerships have sprung up around the areas.

The potential in relation to the pharma industry has often centered on drug discovery. The potential is there for the technology to reduce the cost of developing a new drug, which has been estimated to be approximately $2-3bn (€1.8-2.7bn).

As a result, a number of large pharma companies have signed partnership deals to unlock the promise of faster drug discovery, such as Pfizer's deal with CytoReason and Novo Nordisk's with e-therapeutics.

Wider than this, there is the potential to improve patient recruitment to clinical trials, another notorious stumbling block in drug development.

Outsourcing-Pharma (OSP) asked Yilian Yuan (YY), SVP of data science and advanced analytics at Iqvia, for analysis on how the pharma industry is approaching the opportunity provided by AI and ML so far, and how this is likely to develop over the coming years.

OSP: How would you characterize the pharma industry's adoption of AI so far?

YY: In general, the pharma industry recognizes the value of AI/ML, and some pharma companies have made significant investments to build the infrastructure and talent pool necessary to bring AI/ML capabilities into the R&D and commercial sectors. However, implementation can be challenging. To overcome these challenges pharma companies should undertake the following steps:

Because of the many challenges, pharma has been slow to take up AI/ML. Extra effort will be needed for the industry to fully leverage AI/ML and to realize the positive impact on business and improved patient care.

OSP: How do you see further adoption of AI/ML for pharma in 2020 and beyond?

YY: I see more and more pharma companies taking various approaches to realize the value of AI/ML, and they fall into two categories:

We also see some companies taking a combination approach to get the benefits of both.

OSP: The industry generally has a reputation of being cautious when it comes to the adoption of new technologies. What are the dangers of this when applied to AI/ML?

YY: The development and commercialization of innovative treatment options for the market is costly and competitive. AI/ML can leverage real-world data to innovate clinical trial design and execution, e.g., smart patient recruitment, and select sites that can quickly enroll patients. On the commercialization side, AI/ML enables proactive and precise engagements with health care providers and patients and the ability to identify patients with high risk of exacerbation or noncompliance with the trial regimen, which can trigger interventions by nurse educators.
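As a rough sketch of the risk-flagging idea described above (synthetic data and features, not IQVIA's methodology), a classifier trained on historical trial records can rank currently enrolled patients by predicted noncompliance risk so that the highest-risk few are routed to nurse educators.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Entirely synthetic trial data: a few features per patient plus a historical
# label for whether they missed doses or dropped out.
n = 2_000
X = np.column_stack([
    rng.integers(18, 85, n),   # age
    rng.integers(0, 40, n),    # distance to trial site (km)
    rng.integers(0, 6, n),     # prior missed visits
])
logits = -3 + 0.05 * X[:, 1] + 0.6 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Rank current patients by predicted risk and flag the top 5% for outreach.
risk = model.predict_proba(X_te)[:, 1]
flagged = np.argsort(risk)[::-1][: int(0.05 * len(risk))]
print(f"{len(flagged)} patients flagged for nurse-educator follow-up")
```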

Pharma companies that are slow to adopt AI/ML will be left behind in the race to bring new products to market and the right products to the right patients at the right time.

OSP: Are there any particular areas of drug discovery where the technology can have the most impact?

YY: There are many areas where AI/ML will have a positive impact on drug discovery:

OSP: Are there any noteworthy industries that are leading the way in using this technology? What can the pharma industry learn from them?

YY: The automotive industry faces fierce competition and has leveraged AI/ML to do precision marketing on its websites, with tailored messages and select products for potential buyers. Perhaps pharma can learn from them and use AI/ML to develop personalized medicine to improve patient care.

Many industries use robotic process automation to automate processes like finance systems, which pharma could do as well. Pharma is a heavily regulated industry with many reporting documents generated for clinical trials and for product usage and adverse events. These documents must be translated into many languages. Tech companies have developed auto translation services with the help of AI/ML. Existing auto translation with AI/ML can be enhanced with pharma vocabulary to cut down on the cost of translating documents into multiple languages and increase the speed of this type of work.

Yilian Yuan leads a team of data scientists, statisticians and research experts to help clients address a broad range of business and industry issues. Dr. Yuan has an extensive background in applying econometric and statistical modeling, predictive modeling and machine learning, discrete choice modeling and quantitative market research, combined with patient-level longitudinal data to provide actionable insights for pharma clients to improve business performance.


Bing: To Use Machine Learning; You Have To Be Okay With It Not Being Perfect – Search Engine Roundtable

Frédéric Dubut of Bing said on Twitter that if you use machine learning in production, like Bing does, you have to be okay with the results not being perfect. Not all applications are okay with not being perfect, but I assume search can be okay with it.

Frédéric Dubut wrote, "To use ML in production you also need to be comfortable with a model that *will* get it wrong occasionally." "There are many applications where the required precision is just 100% - and unfortunately quite a few of these are still using ML," he added. These applications that have to be perfect are often in the financial, health and other critical areas. Think about the software that helps an airplane fly or software that helps you transfer money from one place to another.
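A small, hypothetical sketch of that trade-off (synthetic scores, nothing to do with Bing's systems): raising a model's decision threshold buys precision at the cost of coverage, and even at an aggressive threshold the occasional error remains.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(2)

# Synthetic relevance labels and model scores: true positives tend to score higher.
y_true = rng.integers(0, 2, size=10_000)
scores = np.clip(rng.normal(loc=0.35 + 0.3 * y_true, scale=0.2), 0, 1)

for threshold in (0.5, 0.7, 0.9):
    y_pred = (scores >= threshold).astype(int)
    p = precision_score(y_true, y_pred, zero_division=0)
    r = recall_score(y_true, y_pred)
    print(f"threshold {threshold:.1f}: precision={p:.3f}  recall={r:.3f}")
```

Even the strictest threshold here tops out short of 100% precision, which is the point: some applications can live with that, and some cannot.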


As you know, Bing uses a lot of machine learning in search - 90% or more of it.

Forum discussion at Twitter.
