Synopsys and SiMa.ai Collaborate to Bring Machine Learning Inference at Scale to the Embedded Edge – AiThority

Engagement Leverages Synopsys DesignWare IP, Verification Continuum, and Fusion Design Solutions to Accelerate Development of SiMa.ai MLSoC Platform

Synopsys, Inc. announced its collaboration with SiMa.ai to bring machine learning inference at scale to the embedded edge. Through this engagement, SiMa.ai has adopted key products from Synopsys DesignWare IP, Verification Continuum Platform, and Fusion Design Platform for the development of its MLSoC, a purpose-built machine-learning platform targeted at specialized computer vision applications, such as autonomous driving, surveillance, and robotics.


SiMa.ai selected Synopsys due to its expertise in functional safety, complete set of proven solutions and models, and silicon-proven IP portfolio that will help SiMa.ai deliver high-performance computing at the lowest power. With Synopsys automotive-grade solutions, SiMa.ai can accelerate their SoC-level ISO 26262 functional safety assessments and qualification while achieving their target ASILs.

"Working closely with top-tier customers, we have developed a software-centric architecture that delivers high-performance machine learning at the lowest power. Our purpose-built, highly integrated MLSoC supports legacy compute along with industry-leading machine learning to deliver more than 30x better compute-power efficiency compared to industry alternatives," said Krishna Rangasayee, founder and CEO of SiMa.ai. "We are delighted to collaborate with Synopsys towards our common goal to bring high-performance machine learning to the embedded edge. Leveraging Synopsys' industry-leading portfolio of IP, verification, and design platforms enables us to reduce development risk and accelerate the design and verification process."


"We are pleased to support SiMa.ai as it brings its MLSoC chip to market," said Manoj Gandhi, general manager of the Verification Group at Synopsys. "Our collaboration aims to address SiMa.ai's mission to enable customers to build low-power, high-performance machine learning solutions at the embedded edge across a diverse set of industries."

Since SiMa.ai's inception, it has strategically collaborated with Synopsys to support all aspects of its MLSoC architecture design and verification.



Supporting Content Decision Makers With Machine Learning – Machine Learning Times – The Predictive Analytics Times

By: Melody Dye, Chaitanya Ekanadham, Avneesh Saluja, Ashish Rastogi. Originally published in The Netflix Tech Blog, Dec 10, 2020.

Netflix is pioneering content creation at an unprecedented scale. Our catalog of thousands of films and series caters to 195M+ members in over 190 countries who span a broad and diverse range of tastes. Content, marketing, and studio production executives make the key decisions that aspire to maximize each series or film's potential to bring joy to our subscribers as it progresses from pitch to play on our service. Our job is to support them.

The commissioning of a series or film, which we refer to as a title, is a creative decision. Executives consider many factors including narrative quality, relation to the current societal context or zeitgeist, creative talent relationships, and audience composition and size, to name a few. The stakes are high (content is expensive!) as is the uncertainty of the outcome (it is difficult to predict which shows or films will become hits). To mitigate this uncertainty, executives throughout the entertainment industry have always consulted historical data to help characterize the potential audience of a title using comparable titles, if they exist. Two key questions in this endeavor are:

The increasing vastness and diversity of what our members are watching make answering these questions particularly challenging using conventional methods, which draw on a limited set of comparable titles and their respective performance metrics (e.g., box office, Nielsen ratings). This challenge is also an opportunity. In this post we explore how machine learning and statistical modeling can aid creative decision makers in tackling these questions at a global scale. The key advantage of these techniques is twofold. First, they draw on a much wider range of historical titles (spanning global as well as niche audiences). Second, they leverage each historical title more effectively by isolating the components (e.g., thematic elements) that are relevant for the title in question.



GreenFlux, Eneco eMobility and Royal HaskoningDHV implement smart charging based on machine learning – Green Car Congress

Royal HaskoningDHV's office in the city of Amersfoort, the Netherlands, is the first location in the world where electric vehicles are smart-charged using machine learning. The charging stations are managed by the charging point operator Eneco eMobility, with smart charging technology provided by the GreenFlux platform.

With the number of electric vehicles ever increasing, so is the pressure to increase the number of charging stations on office premises. This comes at a cost: electric vehicles require a significant amount of power, which can lead to high investments in the electrical installation. With smart charging, these costs can be significantly reduced by ensuring that not all vehicles charge at the same time.

With the innovation, developed by GreenFlux, deployed by Eneco eMobility and applied at Royal HaskoningDHV's Living Lab Charging Plaza in Amersfoort, the Netherlands, smart charging is now taken to the next level, allowing up to three times more charging stations on a site than with regular smart charging.

The novelty in this solution is that machine learning is used to determine or estimate how charge station sites are physically wired, data that is commonly incomplete and unreliable. At Royal HaskoningDHV, the algorithm determines over time the topology of how the three-phase electricity cables are connected to each individual charge station.

Using this topology, the algorithm can optimize between single-phase and three-phase charging electric vehicles. Though this may seem like a technicality, it allows up to three times as many charging stations to be installed on the same electrical infrastructure.
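The topology-learning step lends itself to a compact illustration. The sketch below is a hedged approximation, not GreenFlux's actual algorithm: it infers which feeder phase each station is wired to by correlating each station's metered current with the current measured on the three feeder phases. All data here is synthetic.

```python
import numpy as np

# Illustrative sketch (not GreenFlux's actual algorithm): infer which of the
# three feeder phases each charge station is wired to by correlating the
# station's own current draw over time with the per-phase feeder current.

rng = np.random.default_rng(0)
n_samples, n_stations = 500, 4
true_phase = [0, 2, 1, 0]                 # ground-truth wiring, unknown in practice

station_current = rng.uniform(0, 16, (n_samples, n_stations))  # per-station amps
feeder = np.zeros((n_samples, 3))
for s, p in enumerate(true_phase):
    feeder[:, p] += station_current[:, s]                      # stations load their phase
feeder += rng.normal(0, 0.5, feeder.shape)                     # noise + other loads

# Assign each station the phase whose feeder current tracks it most closely.
est_phase = [
    int(np.argmax([np.corrcoef(station_current[:, s], feeder[:, p])[0, 1]
                   for p in range(3)]))
    for s in range(n_stations)
]
print(est_phase)  # recovers the wiring from measurements alone
```

With an estimated topology in hand, a scheduler can spread single-phase vehicles across the three phases instead of reserving full three-phase capacity per station, which is where the claimed capacity gain comes from.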

"Now that this part has been tested and proven, there is so much more we can add. We can use the same technology to, for instance, predict a driver's departure time or how much energy they will need. With these kinds of inputs, we can optimize the charging experience even further."

Lennart Verheijen, head of innovation at GreenFlux


Machine Learning Answers: Facebook Stock Is Down 20% In A Month, What Are The Chances It'll Rebound? – Forbes

In this photo illustration a Facebook logo is seen displayed on a smartphone. (Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images)

Facebook stock (NASDAQ: FB) reached an all-time high of almost $305 less than a month ago before a larger sell-off in the technology industry drove the stock price down nearly 20% to its current level of around $250. But will the company's stock continue its downward trajectory over the coming weeks, or is a recovery imminent?

According to the Trefis Machine Learning Engine, which identifies trends in the company's stock price data since its IPO in May 2012, returns for Facebook stock average a little over 3% in the next one-month (21 trading days) period after experiencing a 20% drop over the previous month (21 trading days). Notably, though, the stock is very likely to underperform the S&P500 over the next month (21 trading days), with an expected excess return of -3% compared to the S&P500.

But how would these numbers change if you are interested in holding Facebook stock for a shorter or a longer time period? You can test the answer, and many other combinations, on the Trefis Machine Learning Engine to gauge Facebook stock's chances of a rise after a fall. You can test the chance of recovery over different time intervals of a quarter, month, or even just one day!

MACHINE LEARNING ENGINE – try it yourself:

IF FB stock moved by -5% over 5 trading days, THEN over the next 21 trading days, FB stock moves an average of 3.2 percent, which implies an excess return of 1.7 percent compared to the S&P500.


More importantly, there is a 62% probability of a positive return over the next 21 trading days and a 53.8% probability of a positive excess return after a -5% change over 5 trading days.
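The kind of conditional statistic quoted above (given a -5% move over 5 days, what is the average return over the next 21 days?) is straightforward to compute from a price series. The sketch below uses synthetic prices, so its printed numbers are illustrative only, not Trefis figures:

```python
import numpy as np

# Hedged sketch: conditional forward returns after a trailing drop.
# Synthetic closes stand in for real FB price data.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.02, 3000)))

lookback, horizon, threshold = 5, 21, -0.05
trailing = prices[lookback:] / prices[:-lookback] - 1          # 5-day returns
# days whose trailing 5-day return breached the threshold, with room
# left for a full 21-day forward window
events = np.where(trailing[:-horizon] <= threshold)[0] + lookback

forward = prices[events + horizon] / prices[events] - 1        # next 21-day return
print(f"events: {len(events)}, "
      f"avg forward return: {forward.mean():.1%}, "
      f"P(positive): {(forward > 0).mean():.0%}")
```

Swapping the threshold sign reproduces the "after a rise" scenario discussed later in the article.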

Some Fun Scenarios, FAQs & Making Sense of Facebook Stock Movements:

Question 1: Is the average return for Facebook stock higher after a drop?

Answer:

Consider two situations,

Case 1: Facebook stock drops by -5% or more in a week

Case 2: Facebook stock rises by 5% or more in a week

Is the average return for Facebook stock higher over the subsequent month after Case 1 or Case 2?

FB stock fares better after Case 2, with an average return of 2.4% over the next month (21 trading days) under Case 1 (where the stock has just suffered a 5% loss over the previous week), versus an average return of 5.3% for Case 2.

In comparison, the S&P 500 has an average return of 3.1% over the next 21 trading days under Case 1, and an average return of just 0.5% for Case 2, as detailed in our dashboard that details the average return for the S&P 500 after a fall or rise.

Try the Trefis machine learning engine above to see for yourself how Facebook stock is likely to behave after any specific gain or loss over a period.

Question 2: Does patience pay?

Answer:

If you buy and hold Facebook stock, the expectation is that, over time, the near-term fluctuations will cancel out and the long-term positive trend will favor you, at least if the company is otherwise strong.

Overall, according to data and the Trefis machine learning engine's calculations, patience absolutely pays for most stocks!

For FB stock, the returns over the next N days after a -5% change over the last 5 trading days are detailed in the table below, along with the returns for the S&P500:


Question 3: What about the average return after a rise if you wait for a while?

Answer:

The average return after a rise is understandably lower than after a fall, as detailed in the previous question. Interestingly, though, if a stock has gained over the last few days, you would do better to avoid short-term bets for most stocks, although FB stock appears to be an exception to this general observation.

FB's returns over the next N days after a 5% change over the last 5 trading days are detailed in the table below, along with returns for the S&P 500.


It's pretty powerful to test the trend for yourself for Facebook stock by changing the inputs in the charts above.

What if you're looking for a more balanced portfolio? Here's a high-quality portfolio to beat the market, with over 100% return since 2016, versus 55% for the S&P 500. Comprised of companies with strong revenue growth, healthy profits, lots of cash, and low risk, it has consistently outperformed the broader market year after year.



Amazon unveils its 4th-generation Echo – The Verge

Amazon has announced the fourth-generation version of its main Echo smart speaker, bringing a new spherical design and better sound performance. But the biggest change is a new, on-device speech recognition module that will locally process your audio on the Echo, making your requests faster than ever before.

Another big addition to the new Echo is what Amazon's calling the AZ1 Neural Edge silicon module, which will process the audio of your voice requests using local machine learning speech recognition algorithms before sending the command to the cloud. The process promises to save hundreds of milliseconds in response time for your Echo.

Amazon says that the new Echo will combine the features from both the third-gen Echo and the Echo Plus and will feature a built-in Zigbee smart home hub, in addition to working with Amazon Sidewalk, the companys local networking system.

The Echo will be available in three colors: charcoal, chalk, and steel blue, and it'll ship on October 22nd. Preorders are already open over at Amazon's website.


Comprehensive Analysis On Machine Learning in Education Market Based On Types And Application – Crypto Daily

Dataintelo, one of the world's leading market research firms, has rolled out a new report on the Machine Learning in Education market. The report is integrated with crucial insights that will support clients in making the right business decisions. This research will help both existing players and new entrants in the global Machine Learning in Education market to figure out and study market needs, market size, and competition. The report provides information about the supply and demand situation, the competitive scenario, the challenges for market growth, market opportunities, and the threats faced by key players.

The report also includes the impact of the ongoing global crisis, i.e. COVID-19, on the Machine Learning in Education market and what the future holds for it. The COVID-19 pandemic has landed a major blow to every aspect of life globally, which has led to various changes in market conditions. The swiftly transforming market scenario and initial and future assessments of the impact are covered in the report.

Request a sample Report of Machine Learning in Education Market: https://dataintelo.com/request-sample/?reportId=69421

The report is compiled by tracking the market performance since 2015 and is one of the most detailed reports available. It also covers data varying by region and country. The insights in the report are easy to understand and include pictorial representations. These insights are also applicable in real-time scenarios. Components such as market drivers, restraints, challenges, and opportunities for Machine Learning in Education are explained in detail. Since the research team has been tracking market data since 2015, any additional data requirements can be easily fulfilled.

The scope of the report has a wide spectrum extending from market scenarios to comparative pricing between major players, cost, and profit of the specified market regions. The numerical data is supported by statistical tools such as SWOT analysis, BCG matrix, SCOT analysis, and PESTLE analysis. The statistics are depicted in a graphical format for a clear picture of facts and figures.

The generated report is strongly based on primary research, interviews with top executives, news sources, and information insiders. Secondary research techniques are utilized for better understanding and clarity for data analysis.

The Machine Learning in Education Market is divided into the following segments to have a better understanding:

By Application:

Intelligent Tutoring Systems
Virtual Facilitators
Content Delivery Systems
Interactive Websites
Others

By Type:

Cloud-Based
On-Premise

By Geographical Regions:

Ask for Discount on Machine Learning in Education Market Report at: https://dataintelo.com/ask-for-discount/?reportId=69421

The Machine Learning in Education Market Industry Analysis and Forecast 2019-2026 helps clients with customized and syndicated reports holding key importance for professionals requiring data and market analytics. The report also calls for market-driven results providing feasibility studies for client requirements. Dataintelo promises qualified and verifiable market data for real-time scenarios, and its analytical studies are carried out with a thorough understanding of market capacities and client requirements.

Some of the prominent companies that are covered in this report:

Key players, major collaborations, merger & acquisitions along with trending innovation and business policies are reviewed in the report. Following is the list of key players:

IBM
Microsoft
Google
Amazon
Cognizant
Pearson
Bridge-U
DreamBox Learning
Fishtree
Jellynote
Quantum Adaptive Learning

*Note: Additional companies can be included on request

Reasons you should buy this report:

Dataintelo provides attractive discounts that fit your needs. Customization of the reports as per your requirement is also offered. Get in touch with our sales team, who will guarantee you a report that suits your needs.

Customized Report and Inquiry for the Machine Learning in Education Market Report: https://dataintelo.com/enquiry-before-buying/?reportId=69421

About US:

DATAINTELO has set its benchmark in the market research industry by providing syndicated and customized research reports to its clients. The company's database is updated on a daily basis to provide clients with the latest trends and in-depth analysis of the industry.

Our database covers various industry verticals, including IT & Telecom, Food & Beverage, Automotive, Healthcare, Chemicals and Energy, Consumer Goods, and many more. Each and every report goes through the proper research methodology and is validated by professionals and analysts to ensure high-quality reports.

Contact US:

Name: Alex Mathews
Phone No.: +1 909 545 6473
Email: [emailprotected]
Website: https://dataintelo.com
Address: 500 East E Street, Ontario, CA 91764, United States.


Is Wide-Spread Use of AI & Machine Intelligence in Manufacturing Still Years Away? – Automation World

According to a new report by PMMI Business Intelligence, artificial intelligence (AI) and machine learning make up the area of automation technology with the greatest capacity for expansion. This technology can optimize individual processes and functions of the operation; manage production and maintenance schedules; and expand and improve the functionality of existing technology, such as vision inspection.

While AI is typically aimed at improving operation-wide efficiency, machine learning is directed more toward the actions of individual machines: learning during operation, identifying inefficiencies in areas such as rotation and movement, and then adjusting processes to correct for them.

The advantages to be gained through the use of AI and machine learning are significant. One study released by Accenture and Frontier Economics found that by 2035, AI-empowered technology could increase labor productivity by up to 40%, creating an additional $3.8 trillion in direct value added (DVA) to the manufacturing sector.


However, only 1% of all manufacturers, both large and small, are currently utilizing some form of AI or machine learning in their operations. Most manufacturers interviewed said that they are trying to gain a better understanding of how to utilize this technology in their operations, and 45% of leading CPGs interviewed predict they will incorporate AI and/or machine learning within ten years.

A plant manager at a private-label SME reiterates that AI technology is still being explored, stating: "We are only now talking about how to use AI and predict it will impact nearly half of our lines in the next 10 years."

While CPGs forecast that machine learning will gain momentum in the next decade, the near-future applications are likely to come in vision and inspection systems. Manufacturers can utilize both AI and machine learning in tandem, such as deploying sensors to key areas of the operation to gather continuous, real-time data on efficiency, which can then be analyzed by an AI program to identify potential tweaks and adjustments to improve the overall process.


And, the report states, while these may appear to be expensive investments best left for the future, these technologies are increasingly affordable and offer solutions that can bring measurable efficiencies to smart manufacturing. In the days of COVID-19, gains in labor productivity and operational efficiency may be even more timely.


Source: PMMI Business Intelligence, Automation Timeline: The Drive Toward 4.0 Connectivity in Packaging and Processing



Machine Learning in Medical Imaging Market Incredible Possibilities, Growth Analysis and Forecast To 2025 – The Daily Chronicle

Overview Of Machine Learning in Medical Imaging Industry 2020-2025:

This has brought along several changes in market conditions. This report also covers the impact of COVID-19 on the global market.

The Machine Learning in Medical Imaging Market analysis summary by Reports Insights is a thorough study of the current trends in this vertical across various regions. The research summarizes important details related to market share, market size, applications, statistics, and sales. In addition, the study emphasizes thorough competitive analysis of market prospects, especially the growth strategies that market experts endorse.

Machine Learning in Medical Imaging Market competition by top manufacturers is as follows: Zebra, Arterys, Aidoc, MaxQ AI, Google, Tencent, Alibaba.

Get a Sample PDF copy of the report @ https://reportsinsights.com/sample/13318

The global Machine Learning in Medical Imaging market has been segmented on the basis of technology, product type, application, distribution channel, end-user, and industry vertical, along with the geography, delivering valuable insights.

The Type Coverage in the Market is:

Supervised Learning
Unsupervised Learning
Reinforced Learning

Market Segment by Applications covers:

Breast
Lung
Neurology
Cardiovascular
Liver
Others

Market segment by Regions/Countries, this report covers:

North America
Europe
China
Rest of Asia Pacific
Central & South America
Middle East & Africa

Major factors covered in the report:

To get this report at a discounted rate: https://reportsinsights.com/discount/13318

The analysis objectives of the report are:

Access full Report Description, TOC, Table of Figure, Chart, [emailprotected] https://reportsinsights.com/industry-forecast/Machine-Learning-in-Medical-Imaging-Market-13318

About US:

Reports Insights is a leading research firm that offers contextual and data-centric research services to its customers across the globe. The firm assists its clients in strategizing business policies and accomplishing sustainable growth in their respective market domains. The company provides consulting services, syndicated research reports, and customized research reports.

Contact US:

(US) +1-214-272-0234

(APAC) +91-7972263819

Email:[emailprotected]

Sales:[emailprotected]


Proximity matters: Using machine learning and geospatial analytics to reduce COVID-19 exposure risk – Healthcare IT News

Since the earliest days of the COVID-19 pandemic, one of the biggest challenges for health systems has been to gain an understanding of the community spread of this virus and to determine how likely it is that a person walking through the doors of a facility is at a higher risk of being COVID-19 positive.

Without adequate access to testing data, health systems early on were often forced to rely on individuals to answer questions such as whether they had traveled to certain high-risk regions. Even that unreliable method of assessing risk started becoming meaningless as local community spread took hold.

Parkland Health & Hospital System, the safety net health system for Dallas County, Texas, and PCCI, a Dallas-based non-profit with expertise in the practical applications of advanced data science and social determinants of health, had a better idea.

Community spread of an infectious disease is made possible through physical proximity and density of active carriers and non-infected individuals. Thus, to understand the risk of an individual contracting the disease (exposure risk), it was necessary to assess their proximity to confirmed COVID-19 cases based on their address and population density of those locations.

If an "exposure risk" index could be created, then Parkland could use it to minimize exposure for their patients and health workers and provide targeted educational outreach in highly vulnerable zip codes.

PCCI's data science and clinical team worked diligently in collaboration with the Parkland Informatics team to develop an innovative machine-learning-driven predictive model called Proximity Index. Proximity Index predicts an individual's COVID-19 exposure risk based on their proximity to test-positive cases and the population density of those locations.
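PCCI has not published the Proximity Index internals, but the two ingredients named here, proximity to confirmed cases and population density, suggest the general shape of such a score. The following is a hypothetical sketch; the function, coordinates, and weighting are all illustrative assumptions:

```python
import numpy as np

# Hypothetical exposure-index sketch: distance-decayed pressure from nearby
# confirmed cases, amplified by local population density.

def exposure_index(addr, cases, density, scale_km=1.0):
    """addr: (lat, lon); cases: array of case (lat, lon); density: people/km^2."""
    # equirectangular approximation of distance in km (adequate at city scale)
    dlat = np.radians(cases[:, 0] - addr[0])
    dlon = np.radians(cases[:, 1] - addr[1]) * np.cos(np.radians(addr[0]))
    dist_km = 6371.0 * np.sqrt(dlat**2 + dlon**2)
    pressure = np.exp(-dist_km / scale_km).sum()   # each case decays with distance
    return pressure * np.log1p(density)            # denser areas score higher

cases = np.array([[32.78, -96.80], [32.79, -96.81], [32.70, -96.60]])
near = exposure_index((32.781, -96.801), cases, density=4000)
far = exposure_index((32.60, -96.40), cases, density=4000)
print(near > far)  # True: an address beside the case cluster scores higher
```

An index of this shape can be recomputed as new positive tests arrive and binned into risk tiers per address or zip code, which matches the triage and outreach uses described next.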

This model was put into action at Parkland through PCCI's cloud-based advanced analytics and machine learning platform, called Isthmus. PCCI's machine learning engineering team generated geospatial analysis for the model and, with support from the Parkland IT team, integrated it with the health system's electronic health record system.

Since April 22, Parkland's population health team has utilized the Proximity Index for four key system-wide initiatives to triage more than 100,000 patient encounters and to assess needs proactively:

In the future, PCCI is planning on offering Proximity Index to other organizations in the community (schools, employers, etc.), as well as to individuals, to provide them with a data-driven tool to help in decision making around reopening the economy and society in a safe, thoughtful manner.

Many teams across the Parkland family collaborated on this project, including the IT team led by Brett Moran, MD, Senior Vice President, Associate Chief Medical Officer and Chief Medical Information Officer at Parkland Health and Hospital System.


Riverside Research Welcomes Dr. William Casebeer, Director of Artificial Intelligence and Machine Learning – PRNewswire

Dr. Casebeer's career began with the United States Air Force, from which he retired as a Lieutenant Colonel and intelligence analyst in 2011. He brings two decades of experience leading and growing research programs from within the Department of Defense and as a contractor. Dr. Casebeer held leadership roles at Scientific Systems, Beyond Conflict, Lockheed Martin, and the Defense Advanced Research Projects Agency (DARPA).

"We are so happy to have Dr. Casebeer join our team," said Dr. Steve Omick, President and CEO. "His wealth of knowledge will be extremely valuable to not only the growth of our research and development in AI/ML but also to our other business units."

As a key member of the company's OIC, Dr. Casebeer will lead the advancement of neuromorphic computing, adversarial artificial intelligence, human-machine teaming, virtual reality for training and insight, and object and activity recognition. He will also pursue and grow opportunities with government research organizations and the intelligence community.

About Riverside Research

Riverside Research is a not-for-profit organization chartered to advance scientific research for the benefit of the US government and in the public interest. Through the company's open innovation concept, it invests in multi-disciplinary research and development and encourages collaboration to accelerate innovation and advance science. Riverside Research conducts independent research in machine learning, trusted and resilient systems, optics and photonics, electromagnetics, plasma physics, and acoustics. Learn more at http://www.riversideresearch.org.

SOURCE Riverside Research



How do we know AI is ready to be in the wild? Maybe a critic is needed – ZDNet

Mischief can happen when AI is let loose in the world, just like any technology. The examples of AI gone wrong are numerous, the most vivid in recent memory being the disastrously bad performance of Amazon's facial recognition technology, Rekognition, which had a propensity to erroneously match members of some ethnic groups with criminal mugshots to a disproportionate extent.

Given the risk, how can society know if a technology has been adequately refined to a level where it is safe to deploy?

"This is a really good question, and one we are actively working on," Sergey Levine, assistant professor with the University of California at Berkeley's department of electrical engineering and computer science, told ZDNet by email this week.

Levine and colleagues have been working on an approach to machine learning where the decisions of a software program are subjected to a critique by another algorithm within the same program that acts adversarially. The approach is known as conservative Q-Learning, and it was described in a paper posted on the arXiv preprint server last month.

ZDNet reached out to Levine this week after he posted an essay on Medium describing the problem of how to safely train AI systems to make real-world decisions.

Levine has spent years at Berkeley's robotic artificial intelligence and learning lab developing AI software to direct how a robotic arm moves within carefully designed experiments -- carefully designed because you don't want something to get out of control when a robotic arm can do actual, physical damage.

Robotics often relies on a form of machine learning called reinforcement learning. Reinforcement learning algorithms are trained by testing the effect of decisions and continually revising a policy of action depending on how well the action affects the state of affairs.

But there's the danger: Do you want a self-driving car to be learning on the road, in real traffic?

In his Medium post, Levine proposes developing "offline" versions of RL. In the offline world, RL could be trained using vast amounts of data, like any conventional supervised learning AI system, to refine the system before it is ever sent out into the world to make decisions.

Also: A Berkeley mash-up of AI approaches promises continuous learning

"An autonomous vehicle could be trained on millions of videos depicting real-world driving," he writes. "An HVAC controller could be trained using logged data from every single building in which that HVAC system was ever deployed."

To boost the value of reinforcement learning, Levine proposes moving from the strictly "online" scenario, exemplified by the diagram on the right, to an "offline" period of training, whereby algorithms are fed masses of labeled data, more like traditional supervised machine learning.

Levine uses the analogy of childhood development. Children receive many more signals from the environment than just the immediate results of actions.

"In the first few years of your life, your brain processed a broad array of sights, sounds, smells, and motor commands that rival the size and diversity of the largest datasets used in machine learning," Levine writes.

Which comes back to the original question, to wit, after all that offline development, how does one know when an RL program is sufficiently refined to go "online," to be used in the real world?

That's where conservative Q-learning comes in. Conservative Q-learning builds on the widely studied Q-learning, which is itself a form of reinforcement learning. The idea is to "provide theoretical guarantees on the performance of policies learned via offline RL," Levine explained to ZDNet. Those guarantees will block the RL system from carrying out bad decisions.

Imagine you had a long, long history kept in persistent memory of what actions are good actions that prevent chaos. And imagine your AI algorithm had to develop decisions that didn't violate that long collective memory.


In a typical RL system, a value function is computed based on how much a certain choice of action will contribute to reaching a goal. That informs a policy of actions.

In the conservative version, the value function places a higher value on that past data in persistent memory about what should be done. In technical terms, everything a policy wants to do is discounted, so that there's an extra burden of proof to say that the policy has achieved its optimal state.

A struggle ensues, Levine told ZDNet, making an analogy to generative adversarial networks, or GANs, a type of machine learning.

"The value function (critic) 'fights' the policy (actor), trying to assign the actor low values, but assign the data high values." The interplay of the two functions makes the critic better and better at vetoing bad choices. "The actor tries to maximize the critic," is how Levine puts it.

Through the struggle, a consensus emerges within the program. "The result is that the actor only does those things for which the critic 'can't deny' that they are good (because there is too much data that supports the goodness of those actions)."
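The conservative penalty behind that "can't deny" behavior can be sketched schematically. The snippet below is a simplified, tabular illustration of the idea, not the actual CQL implementation from the paper: it pushes down the values of actions the learner might prefer (via a soft maximum over all Q-values) while crediting the action actually observed in the logged data. All numbers are invented.

```python
import numpy as np

# Schematic conservative penalty; the real CQL algorithm applies this
# idea to deep Q-networks, not a toy Q-vector like this one.

def conservative_penalty(q_values, dataset_action):
    """Penalize the values of actions the learner might prefer (a soft
    maximum over all Q-values) while crediting the action actually
    observed in the logged data."""
    soft_max = np.log(np.sum(np.exp(q_values)))  # pushes down optimistic actions
    return soft_max - q_values[dataset_action]   # pushes up the data's action

q = np.array([5.0, 0.0, 0.0])
low = conservative_penalty(q, dataset_action=0)   # learner agrees with the data
high = conservative_penalty(q, dataset_action=1)  # learner contradicts the data
print(low < high)  # True: unsupported preferences carry a larger penalty
```

The penalty is nearly zero when the highest-valued action is the one the data supports, and grows when the learner prefers something the data has not vetted -- the "extra burden of proof" described above.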

Also: MIT finally gives a name to the sum of all AI fears

There are still some major areas that need refinement, Levine told ZDNet. The program at the moment has some hyperparameters that have to be designed by hand rather than being arrived at from the data, he noted.

"But so far this seems like a promising path for us toward methods with safety and reliability guarantees in offline RL," said Levine.

In fact, conservative Q-learning suggests there are ways to incorporate practical considerations into the design of AI from the start, rather than waiting till after such systems are built and deployed.

Also: To Catch a Fake: Machine learning sniffs out its own machine-written propaganda

The fact that it is Levine carrying out this inquiry should give the approach of conservative Q-learning added significance. With a firm grounding in real-world applications of robotics, Levine and his team are in a position to validate the actor-critic in direct experiments.

Indeed, the conservative Q-learning paper, lead-authored by Aviral Kumar of Berkeley in collaboration with Google Brain, contains numerous examples of robotics tests in which the approach showed improvements over other kinds of offline RL.

There is also a blog post authored by Google if you want to learn more about the effort.

Of course, any system that relies on amassed data offline for its development will be relying on the integrity of that data. A successful critique of the kind Levine envisions will necessarily involve broader questions about where that data comes from, and what parts of it represent good decisions.

Some aspects of what is good and bad may be a discussion society has to have that cannot be automated.

See the article here:
How do we know AI is ready to be in the wild? Maybe a critic is needed - ZDNet

Current and future regulatory landscape for AI and machine learning in the investment management sector – Lexology

On Tuesday this week, Mark Lewis, senior consultant in IT, fintech and outsourcing at Macfarlanes, took part in an event hosted by The Investment Association covering some of the use cases, successes and challenges faced when implementing AI and machine learning (AIML) in the investment management industry.

Mark led the conversation on the current regulatory landscape for AIML and on the future direction of travel for the regulation of AIML in the investment management sector. He identified several challenges posed by the current regulatory framework, including those caused by the lack of a standard definition of AI, both generally and for regulatory purposes. This creates the risk of a fragmented regulatory landscape (an expression used recently by the World Federation of Exchanges in the context of the lack of a standard taxonomy for fintech globally), as different regulators tend to use different definitions of AIML. This results in the risk of over- or under-regulating AIML and is thought to be inhibiting firms from adopting new AI systems. While the UK Financial Conduct Authority (FCA) and the Bank of England seem to have settled, at least for now, on a working definition of AI as the use of a machine to perform tasks normally requiring human intelligence, and of ML as a subset of AI where a machine teaches itself to perform tasks without being explicitly programmed, these working definitions are too generic to be of serious practical use in approaching regulation.

The current raft of legislation and other regulation that can apply to AI systems is uncertain, vast and complex, particularly within the scope of regulated financial services. Part of the challenge is that, for now, there is very little specific regulation directly applicable to AIML (exceptions include GDPR and, for algorithmic high-frequency trading, MiFID II). The lack of understanding of new AIML systems, combined with an uncertain and complex regulatory environment, also has an impact internally within businesses as they attempt to implement these systems. Those responsible for compliance are reluctant to engage where sufficient evidence is not available on how the systems will operate and how great the compliance burden will be. Improvements in explanations from technologists may go some way to assisting in this area. Overall, this means that regulated firms are concerned that their current systems and governance processes for technology, digitisation and related services deployments remain fit-for-purpose when extended to AIML. They are seeking reassurance from their regulators that this is the case. Firms are also looking for informal, discretionary regulatory advice on specific AIML concerns, such as required disclosures to customers about the use of chatbots.

Aside from the sheer volume of regulation that could apply to AIML development and deployment, there is complexity in the sources of regulation. For example, firms must also have regard to AIML ethics and ethical standards and policies. In this context, Mark noted that, this year, the FCA and The Alan Turing Institute launched a collaboration on transparency and explainability of AI in the UK financial services sector, which will lead to the publication of ethical standards and expectations for firms deploying AIML. He also referred to the role of the UK government's Centre for Data Ethics and Innovation (CDEI) in the UK's regulatory framework for AI and, in particular, to the CDEI's AI Barometer Report (June 2020), which has clearly identified several key areas that will most likely require regulatory attention, some with significant urgency. These include:

In the absence of significant guidance, Mark provided a practical, 10-point, governance plan to assist firms in developing and deploying AI in the current regulatory environment, which is set out below. He highlighted the importance of firms keeping watch on regulatory developments, including what regulators and their representatives say about AI, as this may provide an indication of direction in the absence of formal advice. He also advised that firms ignore ethics considerations at their peril, as these will be central to any regulation going forward. In particular, for the reasons given above, he advised keeping up to date with reports from the CDEI. Other topics discussed in the session included lessons learnt for best practice in the fintech industry and how AI has been used to solve business challenges in financial markets.

See the article here:
Current and future regulatory landscape for AI and machine learning in the investment management sector - Lexology

How Amazon Automated Work and Put Its People to Better Use – Harvard Business Review

Executive Summary

Replacing people with AI may seem tempting, but it's also likely a mistake. Amazon's "hands off the wheel" initiative might be a model for how companies can adopt AI to automate repetitive jobs but keep employees on the payroll by transferring them to more creative roles where they can add more value to the company. Amazon's choice to eliminate jobs but retain the workers and move them into new roles allowed the company to be more nimble and find new ways to stay ahead of competitors.

At an automation conference in late 2018, a high-ranking banking official looked up from his buffet plate and stated his objective without hesitation: "I'm here," he told me, "to eliminate full-time employees." I was at the conference because, after spending months researching how Amazon automates work at its headquarters, I was eager to learn how other firms thought about this powerful technology. After one short interaction, it was clear that some have it completely wrong.

For the past decade, Amazon has been pushing to automate office work under a program now known as Hands off the Wheel. The purpose was not to eliminate jobs but to automate tasks so that the company could reassign people to build new products, doing more with the people on staff rather than doing the same with fewer people. The strategy appears to have paid off: At a time when it's possible to start new businesses faster and cheaper than ever before, Hands off the Wheel has kept Amazon operating nimbly, propelled it ahead of its competitors, and shown that automating in order to fire can mean missing big opportunities. As companies look at how to integrate increasingly powerful AI capabilities into their businesses, they'd do well to consider this example.

The animating idea behind Hands off the Wheel originated at Amazon's South Lake Union office towers, where the company began automating work in the mid-2010s under an initiative some called Project Yoda. At the time, employees in Amazon's retail management division spent their days making deals and working out product promotions, as well as determining what items to stock in its warehouses, in what quantities, and at what price. But with two decades' worth of retail data at its disposal, Amazon's leadership decided to use the force (machine learning) to handle the formulaic processes involved in keeping warehouses stocked. "When you have actions that can be predicted over and over again, you don't need people doing them," Neil Ackerman, an ex-Amazon general manager, told me.

The project began in 2012, when Amazon hired Ralf Herbrich as its director of machine learning and made the automation effort one of his launch projects. Getting the software to be good at inventory management and pricing predictions took years, Herbrich told me, because his team had to account for low-volume product orders that befuddled its data-hungry machine-learning algorithms. By 2015, the team's machine-learning predictions were good enough that Amazon's leadership placed them in employees' software tools, turning them into a kind of copilot for human workers. But at that point the humans could override the suggestions, and many did, setting back progress.

Eventually, though, automation took hold. "It took a few years to slowly roll it out, because there was training to be done," Herbrich said. If the system couldn't make its own decisions, he explained, it couldn't learn. Leadership required employees to automate a large number of tasks, though that varied across divisions. "In 2016, my goals for Hands off the Wheel were 80% of all my activity," one ex-employee told me. By 2018 Hands off the Wheel was part of business as usual. Having delivered on his project, Herbrich left the company in 2020.

The transition to Hands off the Wheel wasn't easy. The retail division employees were despondent at first, recognizing that their jobs were transforming. "It was a total change," the former employee mentioned above said. "Something that you were incentivized to do, now you're being disincentivized to do." Yet in time, many saw the logic. "When we heard that ordering was going to be automated by algorithms, on the one hand, it's like, OK, what's happening to my job?" another former employee, Elaine Kwon, told me. "On the other hand, you're also not surprised. You're like, OK, as a business this makes sense."

Although some companies might have seen an opportunity to reduce head count, Amazon assigned the employees new work. The company's retail division workers largely moved into product and program manager jobs, fast-growing roles within Amazon that typically belong to professional inventors. Product managers oversee new product development, while program managers oversee groups of projects. "People who were doing these mundane repeated tasks are now being freed up to do tasks that are about invention," Jeff Wilke, Amazon's departing CEO of Worldwide Consumer, told me. "The things that are harder for machines to do."

Had Amazon eliminated those jobs, it would have made its flagship business more profitable but most likely would have caused itself to miss its next new businesses. Instead of automating to milk a single asset, it set out to build new ones. Consider Amazon Go, the company's checkout-free convenience store. Go was founded, in part, by Dilip Kumar, an executive once in charge of the company's pricing and promotions operations. While Kumar spent two years acting as a technical adviser to CEO Jeff Bezos, Amazon's machine learning engineers began automating work in his old division, so he took a new lead role in a project aimed at eliminating the most annoying part of shopping in real life: checking out. Kumar helped dream up Go, which is now a pillar of Amazon's broader strategy.

If Amazon is any indication, businesses that reassign employees after automating their work will thrive. Those that don't risk falling behind. In shaky economic times, the need for cost-cutting could make it tempting to replace people with machines, but I'll offer a word of warning: Think twice before doing that. It's a message I wish I had shared with the banker.

Read the original here:
How Amazon Automated Work and Put Its People to Better Use - Harvard Business Review

New Optimizely and Amazon Personalize Integration Provides More – AiThority

With experimentation and Amazon Personalize, customers can drive greater customer engagement and revenue

Optimizely, the leader in progressive delivery and experimentation, announced the launch of Optimizely for Amazon Personalize. Amazon Personalize is a machine learning (ML) service from Amazon Web Services (AWS) that makes it easy for companies to create personalized recommendations for their customers at every digital touchpoint. The new integration will enable customers to use experimentation to determine the most effective machine learning algorithms to drive greater customer engagement and revenue.

Recommended AI News: Similarweb Adds New Chief Marketing and Technology Officers

Optimizely for Amazon Personalize enables software teams to A/B test and iterate on different variations of Amazon Personalize models using Optimizely's progressive delivery and experimentation platform. Once a winning model has been determined, users can roll out that model using Optimizely's feature flags without a code deployment. With real-time results and statistical confidence, customers are able to offer more touchpoints powered by Amazon Personalize, and to continually monitor and optimize them to further improve those experiences.

Recommended AI News: Polyrize Announces Inaugural Shadow Identity Report

Until now, developers needed to go through a slow and manual process to analyze each machine learning model. Now, with Optimizely for Amazon Personalize, development teams can easily segment and test different models with their customer base and get automated results and statistical reporting on the best performing models. Using the business KPIs with the new statistical reports, developers can now easily roll out the best performing model. With a faster process, users can test and learn more quickly to improve key business metrics and deliver more personalized experiences to their customers.
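Comparing two models "with statistical reporting" boils down to a significance test on their observed outcomes. The sketch below uses a standard two-proportion z-test on made-up conversion numbers; it illustrates the general idea only and is not Optimizely's actual statistics engine, and the variant names are invented.

```python
import math

# Hypothetical A/B comparison of two recommendation-model variants
# using a two-proportion z-test; all numbers are invented.

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Model B converts 6.0% of 10,000 users vs. model A's 5.0% of 10,000.
z = z_test_two_proportions(conv_a=500, n_a=10_000, conv_b=600, n_b=10_000)
print(abs(z) > 1.96)  # True: significant at the usual 95% level
```

Once a variant clears a threshold like this, the winning model can be rolled out to all traffic.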

"Successful personalization powered by machine learning is now possible," says Byron Jones, VP of Product and Partnerships at Optimizely. "Customers often have multiple Amazon Personalize models they want to use at the same time, and Optimizely can provide the interface to make their API and algorithms come to life. Models need continual tuning and testing. Now, with Optimizely, you can test one Amazon Personalize model against another to iterate and provide optimal real-time personalization and recommendation for users."

Recommended AI News: Suzy Online Shopping Study Says 86% of Consumers Will Shop Online Even Following the Pandemic

Go here to see the original:
New Optimizely and Amazon Personalize Integration Provides More - AiThority

Panalgo Brings the Power of Machine-Learning to the Healthcare Industry Via Its Instant Health Data (IHD) Software – PRNewswire

BOSTON, Sept. 15, 2020 /PRNewswire/ -- Panalgo, a leading healthcare analytics company, today announced the launch of its new Data Science module for Instant Health Data (IHD), which allows data scientists and researchers to leverage machine-learning to uncover novel insights from the growing volume of healthcare data.

Panalgo's flagship IHD Analytics software streamlines the analytics process by removing complex programming from the equation and allows users to focus on what matters most: turning data into insights. IHD Analytics supports the rapid analysis of a wide range of healthcare data sources, including administrative claims, electronic health records, registry data and more. The software, which is purpose-built for healthcare, includes the most extensive library of customizable algorithms and automates documentation and reporting for transparent, easy collaboration.

Panalgo's new IHD Data Science module is fully integrated with IHD Analytics, and allows for analysis of large, complex healthcare datasets using a wide variety of machine-learning techniques. The IHD Data Science module provides an environment to easily train, validate and test models against multiple datasets.

"Healthcare organizations are increasingly using machine-learning techniques as part of their everyday workflow. Developing datasets and applying machine-learning methods can be quite time-consuming," said Jordan Menzin, Chief Technology Officer of Panalgo. "We created the Data Science module as a way for users to leverage IHD for all of the work necessary to apply the latest machine-learning methods, and to do so using a single system."

"Our new IHD Data Science product release is part of our mission to leverage our deep domain knowledge to build flexible, intuitive software for the healthcare industry," said Joseph Menzin, PhD, Chief Executive Officer of Panalgo. "We are excited to empower our customers to answer their most pressing questions faster, more conveniently, and with higher quality."

The IHD Data Science module provides advanced analytics to better predict patient outcomes, uncover reasons for medication non-adherence, identify diseases earlier, and much more. The results from these analyses can be used by healthcare stakeholders to improve patient care.

Research abstracts using Panalgo's IHD Data Science module are being presented at this week's International Conference on Pharmacoepidemiology and Therapeutic Risk Management, including: "Identifying Comorbidity-based Subtypes of Type 2 Diabetes: An Unsupervised Machine Learning Approach," and "Identifying Predictors of a Composite Cardiovascular Outcome Among Diabetes Patients Using Machine Learning."

About Panalgo

Panalgo, formerly BHE, provides software that streamlines healthcare data analytics by removing complex programming from the equation. Our Instant Health Data (IHD) software empowers teams to generate and share trustworthy results faster, enabling more impactful decisions. To learn more, visit us at https://www.panalgo.com. To request a demo of our IHD software, please contact us at [emailprotected].

SOURCE Panalgo


See the original post here:
Panalgo Brings the Power of Machine-Learning to the Healthcare Industry Via Its Instant Health Data (IHD) Software - PRNewswire

Machine Learning as a Service (MLaaS) Market Industry Trends, Size, Competitive Analysis and Forecast 2028 – The Daily Chronicle

The Global Machine Learning as a Service (MLaaS) Market is anticipated to rise at a considerable rate over the forecast period between 2016 and 2028. The Global Machine Learning as a Service (MLaaS) Market Industry Research Report is an exhaustive study and detailed examination of the current state of the global Machine Learning as a Service (MLaaS) industry.

The market study examines the global Machine Learning as a Service (MLaaS) Market by top players/brands, region, type, and end user. The analysis likewise examines various factors impacting market development and discloses insights on key players, a market overview, the most recent trends, size, and types, with regional analysis and forecasts.

Click here to get sample of the premium report: https://www.quincemarketinsights.com/request-sample-50032?utm_source= DC/hp

The Machine Learning as a Service (MLaaS) Market analysis offers an outline with an assessment of the market sizes of different segments and countries. The Machine Learning as a Service (MLaaS) Market study is designed to incorporate both quantitative aspects and qualitative analysis of the industry with respect to countries and regions involved in the study. Furthermore, the Machine Learning as a Service (MLaaS) Market analysis also provides thorough information about drivers and restraining factors and the crucial aspects which will enunciate the future growth of the Machine Learning as a Service (MLaaS) Market.

Machine Learning as a Service (MLaaS) Market

The market analysis covers the current global Machine Learning as a Service (MLaaS) Market and outlines the key players/manufacturers: Microsoft, IBM Corporation, Amazon Web Services, Google, BigML, FICO, Hewlett-Packard Enterprise Development, AT&T, Fuzzy.ai, Yottamine Analytics, Ersatz Labs, Inc., and Sift Science Inc.

The market study also concentrates on the main leading industry players in the Global Machine Learning as a Service (MLaaS) Market, offering information such as product picture, company profiles, specification, production, capacity, price, revenue, cost, and contact information. This market analysis also focuses on the global Machine Learning as a Service (MLaaS) Market volume, Trend, and value at the regional level, global level, and company level. From a global perspective, this market analysis represents the overall global Machine Learning as a Service (MLaaS) Market Size by analyzing future prospects and historical data.

Get ToC for the overview of the premium report https://www.quincemarketinsights.com/request-toc-50032?utm_source=DC/hp

On the basis of market segmentation, the global Machine Learning as a Service (MLaaS) Market is segmented By Type (Special Services and Management Services), By Organization Size (SMEs and Large Enterprises), By Application (Marketing & Advertising, Fraud Detection & Risk Analytics, Predictive Maintenance, Augmented Reality, Network Analytics, and Automated Traffic Management), and By End User (BFSI, IT & Telecom, Automobile, Healthcare, Defense, Retail, Media & Entertainment, and Communication).

Further, the report provides niche insights for a decision about every possible segment, helping in the strategic decision-making process and market size estimation of the Machine Learning as a Service (MLaaS) market on a regional and global basis. Unique research designed for market size estimation and forecast is used for the identification of major companies operating in the market with related developments. The report has an exhaustive scope to cover all the possible segments, helping every stakeholder in the Machine Learning as a Service (MLaaS) market.

Speak to analyst before buying this report https://www.quincemarketinsights.com/enquiry-before-buying-50032?utm_source=DC/hp

This Machine Learning as a Service (MLaaS) Market Analysis Research Report Comprises Answers to the following Queries

ABOUT US:

QMI has the most comprehensive collection of market research products and services available on the web. We deliver reports from virtually all major publications and refresh our list regularly to provide you with immediate online access to the world's most extensive and up-to-date archive of professional insights into global markets, companies, goods, and patterns.

Contact:

Quince Market Insights

Office No- A109

Pune, Maharashtra 411028

Phone: APAC +91 706 672 4848 / US +1 208 405 2835 / UK +44 1444 39 0986

Email: [emailprotected]

Web: https://www.quincemarketinsights.com

Read the original post:
Machine Learning as a Service (MLaaS) Market Industry Trends, Size, Competitive Analysis and Forecast 2028 - The Daily Chronicle

How A Crazy Idea Changed The Way We Do Machine Learning: Test Of Time Award Winner – Analytics India Magazine

HOGWILD! Wild as it sounds, the paper that goes by the same name was supposed to be an art project by Christopher Re, an associate professor at Stanford AI Lab, and his peers. Little did they know that the paper would change the way we do machine learning. Ten years later, it even bagged the prestigious Test of Time award at the latest NeurIPS conference.

To identify the most impactful paper of the past decade, the conference organisers selected a list of 12 papers published at NeurIPS 2009, NeurIPS 2010, and NeurIPS 2011 with the highest numbers of citations since their publication. They also collected data about the recent citation counts for each of these papers by aggregating the citations they received in the past two years at NeurIPS, ICML and ICLR. The organisers then asked the whole senior program committee, with 64 SACs, to vote on up to three of these papers to help pick the most impactful one.

Much of machine learning is about finding the right kind of variables for converging towards reasonable predictions. Hogwild! is a method that helps find those variables very efficiently. "The reason it had such a crazy name, to begin with, was it was intentionally a crazy idea," said Re in an interview for Stanford AI.

With its small memory footprint, robustness against noise, and rapid learning rates, Stochastic Gradient Descent (SGD) has proved to be well suited to data-intensive machine learning tasks. However, SGD's scalability is limited by its inherently sequential nature; it is difficult to parallelise. A decade ago, when the hardware was still playing catch-up with the algorithms, the key objective for scalable analysis of vast data was to minimise the overhead caused by locking. Back then, when parallelisation of SGD was proposed, there was no way around memory locking, which deteriorated performance; locking was considered essential to coordinate updates between processes.

Re and his colleagues set out to show, using novel theoretical analysis, algorithms, and implementation, that stochastic gradient descent can be implemented without any locking.

In Hogwild!, the authors gave the processors equal access to shared memory, letting each update individual components of memory at will. The risk is that a lock-free scheme can fail because processors could overwrite each other's progress. "However, when the data access is sparse, meaning that individual SGD steps only modify a small part of the decision variable, we show that memory overwrites are rare and that they introduce barely any error into the computation when they do occur," explained the authors.
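The scheme can be sketched in miniature. The toy below (the original Hogwild! implementation was in C on real multicore hardware) runs several threads that update a shared weight vector with no locks, relying on sparse inputs so that collisions between workers are rare; the linear-regression data is synthetic and purely illustrative.

```python
import threading
import numpy as np

# Toy, thread-based sketch of lock-free SGD on synthetic sparse data.
rng = np.random.default_rng(0)
true_w = np.zeros(100)
true_w[:5] = 1.0                                    # only 5 features matter
X = (rng.random((2000, 100)) < 0.05).astype(float)  # sparse binary inputs
y = X @ true_w
w = np.zeros(100)                                   # shared, unlocked weights

def worker(rows, lr=0.1, epochs=3):
    for _ in range(epochs):
        for i in rows:
            nz = np.nonzero(X[i])[0]                # the sparse support
            err = X[i, nz] @ w[nz] - y[i]
            w[nz] -= lr * err * X[i, nz]            # touches only a few coords

threads = [threading.Thread(target=worker, args=(range(k, 2000, 4),))
           for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(np.mean((X @ w - y) ** 2) < np.mean(y ** 2))  # True: loss fell
```

Because each step touches only the few coordinates where the input is nonzero, two workers rarely write to the same memory at the same time, which is exactly the sparsity argument quoted above.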

When asked about the exclamation point at the end of the already weird name: "I thought the phrase going hog-wild was hysterical to describe what we were trying. So I thought an exclamation point would just make it better," quipped Re.

In spite of being honoured as a catalyst behind the ML revolution, Re believes this change would have happened with or without their paper. What really stands out, according to him, is that an odd-ball, goofy-sounding piece of research is recognised even after a decade. It is a testimony to an old adage: there is no such thing as a bad idea!

Find the original paper here.

Here are the test of time award winners in the past:

2017: Random Features for Large-Scale Kernel Machines by Ali Rahimi and Ben Recht

2018: The Trade-Offs of Large Scale Learning by Léon Bottou

2019: Dual Averaging Method for Regularized Stochastic Learning and Online Optimisation by Lin Xiao


Here is the original post:
How A Crazy Idea Changed The Way We Do Machine Learning: Test Of Time Award Winner - Analytics India Magazine

Northwell Health researchers using Facebook data and AI to spot early stages of severe psychiatric illness – FierceHealthcare

After going missing for three days in 2016, Christian Herrera Gaton of Jackson Heights, New York, was diagnosed with bipolar disorder type 1.

His experiences with bipolar disorder include mood swings, depression and manic episodes. During a recent bout with the illness, he was admitted to Zucker Hillside Hospital in August 2020 due to some stress he was feeling from the COVID-19 pandemic.

While he was at Zucker for treatment, the Feinstein Institutes for Medical Research, the research arm of New York's Northwell Health, approached him about joining a study of Facebook data and psychiatric conditions.

The goal of the study was to use machine learning algorithms to predict a patients psychiatric diagnosis more than a year in advance of an official diagnosis and stay in the hospital.

Michael Birnbaum, M.D., assistant professor at the Feinstein Institutes' Institute of Behavioral Science, saw an opportunity to use the social media platforms that are a part of everyday life to gain insights into the early stages of severe psychiatric illness.

"There was an interest in harnessing these ubiquitous, widely used platforms in understanding how we could improve the work that we do," Birnbaum said in an interview. "We wanted to know what we can learn from the digital universe and all of the data that's being created and uploaded by the young folks that we treat. That's what motivated our interest."

RELATED: Brigham and Women's taps mental health startup to use AI to track providers' stress

After Gaton, a former student at John Jay College of Criminal Justice, was discharged from the hospital, he shared almost 10 years of Facebook and Instagram data with the Feinstein Institutes. He uploaded an archive that contained pictures, private messages and basic user information.

"It's been a difficult experience to deal with [COVID] and to go through everything with the hospitals and losing friends because of doing stupid things during manic episodes," Gaton told Fierce Healthcare. "It's not easy, but at least I get to join this research study and help other people."

The study, conducted along with IBM Research, looked at patients with schizophrenia spectrum disorders and mood disorders. Feinstein Institutes researchers handled the participant recruitment and assessments as well as data collection and analysis. Meanwhile, IBM developed the machine learning algorithms that researchers used to analyze Facebook data.

Results of the study, "Identifying signals associated with psychiatric illness utilizing language and images posted to Facebook," were published Dec. 3 in Nature Partner Journals (npj) Schizophrenia.

Feinstein Institutes and IBM researchers studied archives of people in an early treatment program to extract meaning from the data to gain an understanding of how people with mental illness use social media.

"Essentially, at its core, the machine learns to predict which group an individual belongs to, based on data that we feed it," Birnbaum explained. "So, for example, if we show the computer a Facebook post and then we say to the computer, based on what you've learned so far and based on the patterns that you recognize, does this post belong to an individual with schizophrenia or bipolar disorder? Then the computer makes a prediction."

Birnbaum added that the more accurate the predictions become, the more effective the algorithms are at identifying which characteristics belong to which group of people.

Feinstein and IBM took care to anonymize the social media data, according to Birnbaum. They stripped names and addresses out of written posts. "Words, essentially, using language-analytic software, become vectors," Birnbaum said. "The actual content of the sentences, once they're parsed through the software, often becomes meaningless."
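As a rough illustration of what "words become vectors" can mean in practice, here is a minimal, purely hypothetical sketch; it is not the study's actual pipeline (IBM developed those algorithms, and the group labels, posts, and vocabulary below are invented). Posts are mapped to bag-of-words count vectors over a fixed vocabulary, and a simple nearest-centroid rule predicts a group label:

```python
# Hypothetical sketch of text-to-vector classification (not the study's code).
from collections import Counter
import math

def vectorize(post, vocab):
    """Map a post to a term-count vector over a fixed vocabulary."""
    counts = Counter(post.lower().split())
    return [counts.get(word, 0) for word in vocab]

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy labeled posts (invented data, purely illustrative).
train = {
    "group_a": ["angry upset bad day", "so angry and upset"],
    "group_b": ["great walk sunny day", "sunny park great time"],
}
vocab = sorted({w for posts in train.values() for p in posts for w in p.split()})
centroids = {label: centroid([vectorize(p, vocab) for p in posts])
             for label, posts in train.items()}

def predict(post):
    """Assign a new post to the group whose centroid it is closest to."""
    vec = vectorize(post, vocab)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))
```

Real systems of this kind would use far richer vectorizations (embeddings rather than raw counts) and trained classifiers, but the core idea is the same: the text itself is discarded once it has been turned into numbers.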

In addition, the machine learning software does not analyze participants' images closely. Instead, it focuses on shape, size, height, contrast and colors, Birnbaum said.
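A coarse, invented sketch of that kind of non-content image analysis might look like the following: reduce an image to its dimensions, mean channel colors, and a simple contrast measure, without ever inspecting what the image depicts. The function name and feature set here are illustrative assumptions, not the study's implementation.

```python
# Hypothetical sketch: extract only coarse numeric features from an image,
# represented as nested lists of (r, g, b) tuples with values 0-255.
def image_features(pixels):
    """Return width, height, mean RGB, and a brightness-contrast measure."""
    flat = [px for row in pixels for px in row]
    n = len(flat)
    # Mean value per color channel across all pixels.
    mean_rgb = tuple(sum(px[c] for px in flat) / n for c in range(3))
    # Contrast as the standard deviation of per-pixel brightness.
    brightness = [sum(px) / 3 for px in flat]
    mean_b = sum(brightness) / n
    contrast = (sum((b - mean_b) ** 2 for b in brightness) / n) ** 0.5
    return {"width": len(pixels[0]), "height": len(pixels),
            "mean_rgb": mean_rgb, "contrast": contrast}
```

A perfectly uniform image yields a contrast of zero; features like these describe an image's overall appearance while revealing nothing about its subject matter, which is the privacy property the researchers describe.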

"We did our best to ensure that we de-identified the data to the extent possible and ensured the confidentiality of our participants, because that's one of our top priorities, of course," Birnbaum said.

The study analyzed Facebook data from the 18 months prior to hospitalization to help predict a patient's diagnosis or hospitalization a year in advance.

Researchers used machine learning algorithms to study 3.4 million Facebook messages and 142,390 images from 223 participants for up to 18 months before their first psychiatric hospitalization. Study subjects with schizophrenia spectrum disorders and mood disorders were more prone to discuss anger, swearing, sex and negative emotions in their Facebook posts, according to the study.

RELATED: Northwell Health research arm develops AI tool to help hospital patients avoid sleepless nights

Birnbaum sees an opportunity to use the data from social media platforms to gain insights to deliver better healthcare. By using social media, such as analyzing Facebook status updates, researchers can gain insights on personality traits, demographics, political views and substance use.

"Harnessing social media platforms could be a significant step forward for psychiatry, which is limited by its reliance on mostly retrospective, self-reported data," the study stated.

Gaton believes that he could have avoided time in the hospital if he had received an earlier diagnosis. Like other subjects in the study, Gaton can sense the warning signs of an episode when he starts to post differently on Facebook.

From analyzing the data, researchers found that participants used more swear words than healthy volunteers. Some participants used words related to blood, pain or biological processes. As their conditions progressed and patients neared hospitalization, they used more punctuation and negative emotional words in their Facebook posts, according to the study.

Other organizations are also turning to artificial intelligence to monitor mental health. Researchers at Brigham and Women's Hospital are using AI technology from startup Rose to monitor the mental well-being of front-line workers during the COVID-19 pandemic. Meanwhile, the Feinstein Institutes recently developed an AI tool that can help patients get better sleep in the hospital.

Researchers see a use for social media data for patients that could be similar to the vital data they pull from a blood or urine sample, according to Birnbaum. "I could imagine a world where people go see their psychiatrists and provide their archives in the same way they provide a blood test, which is then analyzed much like a blood test and is used to inform clinical decision-making moving forward," he said.

RELATED:The unexpected ways AI is impacting the delivery of care, including for COVID-19

"I think that is where psychiatry is heading, and social media will play a component of a much larger, broader digital representation of behavioral health."

Guillermo Cecchi, principal research staff member, computational psychiatry, at IBM Research, also sees a use for social media data as a common way to evaluate patients.

"Our vision is that this type of technology could one day be used in a non-burdensome way, with patient consent and high privacy standards, to provide clinicians with the most comprehensive and relevant information to make treatment decisions, including regular clinical assessments, biomarkers and a patient's medical history," Cecchi told Fierce Healthcare.

Researchers hope that the Facebook data can inform future studies.

"Ultimately, the language markers we identified with AI in this study could be used to inform future work, shaped with rigorous ethical frameworks, that could help clinicians to monitor the progression of mental health patients considered at-risk for relapse or undergoing treatment," Cecchi said.

Gaton said he would like to see the technology get more accurate. "I just hope that with my contributions to the study, the technology gets more accurate and more responsive and can be something that doctors can use in the near future, with patient consent, of course," he said.

Read the original here:
Northwell Health researchers using Facebook data and AI to spot early stages of severe psychiatric illness - FierceHealthcare

Machine Learning as a Service (MLaaS) Market 2021: Big Things are Happening in Development and Future Assessment by 2031 – Digital Journal

Pune, Maharashtra, India, December 17, 2021 (Wiredrelease) Prudour Pvt. Ltd: High Use of the Machine Learning as a Service (MLaaS) Market | Better Business Growth, a One-Stop Guide for Growing Business in 2021

The Machine Learning as a Service (MLaaS) market has improved over the last few years, with more entrants, continued technological advancement, and a growing rate of expansion driven by measures taken against short-term economic downturns. This report is based on several types of research, with findings obtained from both primary and secondary data-gathering tools. The study blends qualitative and quantitative information, highlighting key market developments and industry challenges, together with a gap analysis of new opportunities that could be trending. A variety of graphical presentation techniques are used to illustrate the findings.

The report provides a comprehensive description of the Machine Learning as a Service (MLaaS) market, presenting an overview of the global market. It includes a forecast (2021-2031), current and future trends and drivers, and opinions from industry professionals on technological advancements and new market entries, as many companies look for economic countermeasures to increase their growth rates. The competitive nature of the industry is forcing key players to pursue merger and acquisition strategies in order to maintain their market share.

Looking for customized insights to grow your business for the future? Ask for a sample report here: https://market.us/report/machine-learning-as-a-service-mlaas-market/request-sample/

The influential players covered in this report are:

Google, IBM Corporation, Microsoft Corporation, Amazon Web Services, BigML, FICO, Yottamine Analytics, Ersatz Labs, Predictron Labs, H2O.ai, AT&T, Sift Science

Topographical segmentation of Machine Learning as a Service (MLaaS) market by top product type, best application, and key region:

Segmentation by Type:

Software Tools, Cloud and Web-based Application Programming Interfaces (APIs), Other

Segmentation by Application:

Manufacturing, Retail, Healthcare and Life Sciences, Telecom, BFSI, Other (Energy and Utilities, Education, Government)

Machine Learning as a Service (MLaaS) Market: Regional Segment Analysis

North America (USA, Canada, and Mexico)

Europe (Russia, France, Germany, UK, and Italy)

Asia-Pacific (China, Korea, India, Japan, and Southeast Asia)

South America (Brazil, Colombia, Argentina, etc.)

The Middle East and Africa (Nigeria, UAE, Saudi Arabia, Egypt, and South Africa)

Place an Inquiry Before Investment (Use Corporate Details Only): https://market.us/report/machine-learning-as-a-service-mlaas-market/#inquiry

The main features of the 2021 Global Machine Learning as a Service (MLaaS) Market report:

The latest technical enhancements and new Machine Learning as a Service (MLaaS) releases, to help consumers make informed business decisions and plan their future growth.

The Machine Learning as a Service (MLaaS) market report focuses on future strategy changes, current business developments, and emerging opportunities for the global market.

Investment-return analysis, SWOT analysis, and feasibility studies are also used to analyze Machine Learning as a Service (MLaaS) market data.

Key Highlights of the Machine Learning as a Service (MLaaS) Market Research Report:

1. The report summarizes the Machine Learning as a Service (MLaaS) market, covering the basic product definition, the range of product applications, product scope, product cost and price, the supply-and-demand ratio, and a market overview.

2. The competitive landscape of all leading key players, along with their business strategies, approaches, and latest Machine Learning as a Service (MLaaS) market moves.

3. It details market investment feasibility, opportunities, growth factors, restraints, market risks, and the driving forces of the Machine Learning as a Service (MLaaS) business.

4. It performs a comprehensive study of emerging players in the Machine Learning as a Service (MLaaS) business along with existing ones.

5. It uses primary and secondary research and resources to estimate top products, market size, and industrial partnerships in the Machine Learning as a Service (MLaaS) business.

6. The global Machine Learning as a Service (MLaaS) market report closes by presenting research findings, data sources, results, and a list of dealers, sales channels, businesses, and distributors, along with an appendix.

Need more information about the Machine Learning as a Service (MLaaS) market: https://market.us/report/machine-learning-as-a-service-mlaas-market/

Key questions include:

1. What growth rates and global Machine Learning as a Service (MLaaS) industry size can be anticipated by 2031?

2. How can investors use the specifics of our research, along with key parameters and forecast periods, to guide their investment decisions?

3. What will happen in existing and emerging markets in the coming years?

4. Which vendors are profitable, and which are not?

5. What is the forecast for Machine Learning as a Service (MLaaS) market behavior, including the trends, challenges, and drivers of development?

6. What industry opportunities and dangers are faced by vendors in the market?

7. Which Machine Learning as a Service (MLaaS) industry opportunities and challenges do most vendors in the market face?

8. What are the variables affecting the machine learning as a service (mlaas) market share?

9. What are the outcomes of the market's SWOT and five-forces analyses?

Our trusted press-release media partner @https://www.taiwannews.com.tw/en/search?keyword=market.us

Get in Touch with Us:

Mr. Lawrence John

Market.us (Powered by Prudour Pvt. Ltd.)

Send Email: inquiry@market.us

Address: 420 Lexington Avenue, Suite 300, New York City, NY 10170, United States

Tel: +1 718 618 4351

Website: https://market.us

Blog: https://techmarketreports.com/

Scrutinize More Reports Here:

Medical Swab Market In-depth Assessment, Crucial Trend, Industry Drivers, Future Projection by 2031

Automotive Aluminum Wheel Market Comprehensive Research Study, Strategic Planning, Competitive Landscape and Forecast to 2031

UV-cured Acrylic Adhesive Tapes Market 2021 Vital Challenges and Forecast Analysis by 2031

Gum Arabic (E414) Market Crucial Aspects of the Industry by Segments to 2031

Melt Pump Market Growth Factors, Regional Overview, Competitive Strategies and Forecast up to 2031

Bi-Metal Band Saw Blade Market PESTEL Analysis, SWOT Analysis, CAGR and Value Chain Study Forecast to 2031

This content has been published by Prudour Pvt. Ltd company. The WiredRelease News Department was not involved in the creation of this content. For press release service enquiry, please reach us at contact@wiredrelease.com.

Link:
Machine Learning as a Service (MLaaS) Market 2021: Big Things are Happening in Development and Future Assessment by 2031 - Digital Journal

Apple’s SVP of Machine Learning & AI John Giannandrea has been assigned to Oversee Apple’s Secretive ‘Project Titan’ – Patently Apple

Patently Apple has been covering the latest Project Titan patents for years, including a granted patent report posted this morning covering a side of LiDAR that had never been covered before. While some in the industry have doubted that Apple will ever do anything with this project, Apple has now reportedly moved its self-driving car unit under the leadership of top artificial intelligence executive John Giannandrea, who will oversee the company's continued work on an autonomous system that could eventually be used in its own car.

Bloomberg's Mark Gurman is reporting today that Project Titan is run day-to-day by Doug Field. His team of hundreds of engineers has moved to Giannandrea's artificial intelligence and machine-learning group, according to people familiar with the change.

Previously, Field reported to Bob Mansfield, Apple's former senior vice president of hardware engineering. Mansfield has now fully retired from Apple, leading to Giannandrea taking over. Mansfield oversaw a shift from the development of a car to just the underlying autonomous system.

In 2017, Patently Apple posted a report titled "Apple's CEO Confirms Project Titan is the 'Mother of all AI Projects' Focused on Self-Driving Vehicles." For more read the full Bloomberg report.

As with all major Apple projects, be it a head-mounted display device, smartglasses, or folding devices, Apple keeps its secrets and prototypes under wraps until it has holistically worked out its roadmap.

That's why following Apple's patents is the best way to keep on top of the technology that Apple's engineers are actually working on in some capacity within the various ongoing projects. Review our Project Titan patent archive to see what Apple has been working on.

Continue reading here:
Apple's SVP of Machine Learning & AI John Giannandrea has been assigned to Oversee Apple's Secretive 'Project Titan' - Patently Apple