H2O.ai CEO on using intelligence and teamwork to respond to a crisis – Diginomica

(Image sourced via H2O.ai)

Today, the enterprise faces not just the immediate, overwhelming issue of COVID-19, but multiple challenges. If you're a bank, every day you need better answers on mortgage lending, credit risk scoring, fraud detection and anti-money laundering; if you're a retailer, you're always having to think about how you'll find your next best customer, what your next best offer is, how to attract new customers, how to understand your customers' buying patterns, and so on.

The latest tool in the enterprise armory for all this is supposed to be Machine Learning (ML), which is what 99.9% of people really mean when they say "Artificial Intelligence" right now. The promise is that Machine Learning is an additional set of capabilities that should give businesses in every industry the ability to garner better intelligence on trends from the data already in their possession.

The problem: doing ML right is hard. You need deep mathematical capability, as well as lots of data to work over, and not every company has a bank of trained data scientists ready to be unleashed on a problem. This is where H2O.ai, a leading ML company, is trying to make a play with a freemium model of ML access, effectively making it very easy for "any" company to get up to speed quickly with the approach. Simply put, it's a maths/stats package that gives you a quick on-ramp to "automated" Machine Learning.
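To make that "quick on-ramp" concrete, here is a minimal sketch of what automated ML looks like in H2O's open-source Python package; the CSV file and column names are placeholders invented for illustration, not anything from H2O or from the article.

```python
# Minimal sketch of H2O's automated-ML on-ramp (open-source Python API).
# The CSV path and column names are hypothetical placeholders.
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # start or connect to a local H2O cluster

frame = h2o.import_file("loans.csv")       # load a tabular dataset
target = "defaulted"                       # hypothetical label column
features = [c for c in frame.columns if c != target]
frame[target] = frame[target].asfactor()   # treat the label as categorical

# Train a leaderboard of models automatically, capped by model count and time
aml = H2OAutoML(max_models=10, max_runtime_secs=300, seed=1)
aml.train(x=features, y=target, training_frame=frame)

print(aml.leaderboard.head())              # ranked candidate models
preds = aml.leader.predict(frame)          # score with the best model
```

The point of the exercise is that the platform trains and ranks a family of models itself, which is the sense in which the heavy lifting is "automated".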

That means that even if you haven't heard of it, you may already be using it somewhere in your organisation; H2O is used by 20,000 separate entities and by hundreds of thousands of individual data scientists. Analysts think its Open Source route to market is valid, with Gartner telling clients it has the "Strongest Completeness of Vision" in its 2020 Data Science and Machine Learning Magic Quadrant.

H2O has impressive penetration in financial services/insurance, healthcare, telco, retail, pharmaceuticals and marketing. H2O-based ML is being used to manage claims, detect fraud, improve clinical workflows and predict hospital-acquired infections. On the paid-for version, commercial customers, including brands like Wells Fargo, Capital One, Kaiser Permanente and Nationwide Insurance, do similar things; we're talking about sifting large datasets to spot anomalies and patterns for better credit scoring and investing, essentially, as well as supply chain optimisation and, right now, a lot of COVID-19 response planning.

Getting to 20,000 user organisations in only eight years since the company started making payroll is progress that's attracted the interest of VCs, with the company successfully raising north of $72m last year. The company claims it's really the main play in automated Open Source Machine Learning platforms, but in the corporate space there are two main competitors, Dataiku and DataRobot.

What's particularly interesting about H2O is that its users are also investors, according to its CEO, Sri Ambati, who told us many of the financial firms that use the company's offering (including Goldman Sachs) are "very strong believers in what we do" in more tangible ways. As Ambati told diginomica,

What we're trying to do is democratise AI and make sure very high-grade Machine Learning and the best mathematical algorithms are there for you to build your own data science capability from scratch. 250,000 such data scientists use us every day, already.

Fine, but quite a few CIOs and CEOs remain sceptical about the "AI" bubble, which surely has to burst (again) soon. For Ambati:

There's a lot of AI hype, yes. But some sort of intelligence is super-important for any business leader trying to react to the crises that we seem to be experiencing one after another, from Brexit to COVID; they used to be like 9/11, once a decade; now they're every year.

We can help you spot the patterns before they become widely visible. But you still have to have the courage to act on that intelligence and take the decisive step, of course. And there are always three people who need to come together to make that happen: AI is a team sport.

The CIO can put AI in the enterprise and ensure the software fits within the specifications of the enterprise, can ingest the data, and that the team has the data ready to go, but you need a data science person or a savvy business analyst to actually run it. All those people have to be in agreement that this is the way forward.

The obvious place where such decisive steps need to be taken will be getting set for Recovery, post-COVID-19, of course. This is where H2O believes Machine Learning will really come into its own:

The big thing right now in retail is distribution, right? So as stores come back online, they will need to determine their supply chain, their inventory, but even so they're trying to figure out customer patterns. AI can detect those patterns better than humans. It can help you by saying, 'Serve this up to Sri,' for example, or 'this next offer to Derek, because he's likely to take that offer and buy something.'

So no, I don't think AI will go out of fashion. In fact, now more so than ever, companies are calling us because of Coronavirus and its impact on their business, asking how we can help them. What we see as the next logical step of a digital transformation is an AI transformation, in fact.

Ambati's clearly all about the quality of the work, and less so about the big IPO cash-out. After all, this is a guy who's teaching his young daughters Python at home on Saturday mornings and explaining logarithmic scales to them so they understand what's going on with Coronavirus; he's still a programmer at heart, and one still in love, as they say at the Stanford d.School, with "the problem and not the solution". And as he says, the people who come to work for him seem to be cut from the same cloth, too; he claims the "world's best physicists, fastest compiler writers and best data scientists" are rocking up at 2307 Leghorn St, and we have no reason to doubt him.

Read more:
H2O.ai CEO on using intelligence and teamwork to respond to a crisis - Diginomica

The ML Expert Who Hated Mathematics: Interview With Dipanjan Sarkar – Analytics India Magazine

Every week, Analytics India Magazine reaches out to developers, practitioners and experts from the machine learning community to gain insights into their journey in data science, and the tools and skills essential for their day-to-day operations.

For this week's column, Analytics India Magazine got in touch with Dipanjan Sarkar, a well-known face in the machine learning community. In this story, we take you through Dipanjan's journey and how he became an ML expert.

Dipanjan currently works as a Data Science Lead at Applied Materials, where he leads a team of data scientists to solve various problems in the manufacturing and semiconductor domain by leveraging machine learning, deep learning, computer vision and natural language processing. He provides much-needed technical expertise, AI strategy, solutioning, and architecture, and works with stakeholders globally.

He has a bachelor's degree in computer science & engineering and a master's in data science from IIIT Bangalore. Currently, he is pursuing a PG Diploma in ML and AI from Columbia University and an executive education certification course in AI Strategy from Northwestern University's Kellogg School of Management.

Apart from academia, Dipanjan is a big fan of MOOCs. He also beta-tests new courses for Coursera before they are made public.

Dipanjan is also a Google Developer Expert in Machine Learning and has worked with several Fortune 500 companies. For an expert in ML, mathematics is a prerequisite, but we were surprised when we learnt that Dipanjan actually hated mathematics at school, and this continued until the ninth grade, when he picked up statistics, linear algebra and calculus, the three pillars of machine learning.

I always loved the way you could program a computer to do specific tasks and make a machine actually learn with data!

Dipanjan's renewed interest in mathematics was followed by a fascination with computer programming. With his interests growing from mathematics to statistics and traditional computer programming, his career choice became almost obvious.

Reminiscing about his initial days, when the term data science wasn't worshipped yet, Dipanjan spoke about how the field was more conceptual and theoretical. Back then, there weren't any active ecosystems of tools, languages and frameworks dedicated to data science. Hence, it took more time to learn theoretical concepts, since it took more effort to actually implement them or see them in practice.

With the advent of Python, R and a whole suite of tools and libraries, he believes it has become easier to tame the learning curve of data science. However, he also warns that this can be a double-edged sword if one focuses on hands-on work without deep-diving into the math and concepts behind algorithms and techniques to understand how they work or why they are used.

I have always been a strong advocate of self-learning, and I believe that is where you get maximum value

Due to the lack of mentors or proper guides back then (they are plentiful nowadays on LinkedIn and other forums), Dipanjan had no option but to self-learn with the help of the web and books.

For aspirants, he recommends the following books:

To dive deep into the concepts and to get hands-on, he recommends Deep Learning with Keras, Python Machine Learning and Hands-On Machine Learning as practical books with examples. Dipanjan has also written a handful of books on practical machine learning.

When it comes to practice and deploying ML models, Dipanjan extensively uses the CRISP-DM (Cross-Industry Standard Process for Data Mining) framework, which he considers to be one of the best frameworks to tackle any data science problem.

Also, before diving into models or data, he insists on the importance of identifying and articulating the business problem in the right manner. For conceptualising an AI use-case, Dipanjan recommends something called the AI Canvas, which he learnt from the Kellogg School of Management.

Use the right tools for the job without waging wars of Python vs R or PyTorch vs TensorFlow

When asked about his favourite tools, Dipanjan explained the importance of not getting caught up in Python vs R or PyTorch vs TensorFlow debates and simply using the right tools that get the job done.

For instance, he and his team use the ecosystem of tools and libraries centred around Python very frequently. This includes the regular run-of-the-mill pandas, matplotlib, seaborn and plotly for data wrangling and exploratory data analysis. For statistical modelling, he prefers libraries like scikit-learn, statsmodels and pyod.
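As a rough illustration of how that Python stack typically hangs together, here is a hedged sketch of a wrangle-explore-model loop; the file name and column names are invented for illustration and are not from the interview.

```python
# Illustrative sketch of the pandas/seaborn/scikit-learn workflow described above.
# The CSV and column names are hypothetical placeholders.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

df = pd.read_csv("sensor_readings.csv")        # data wrangling with pandas
print(df.describe())

sns.histplot(df["temperature"])                # quick exploratory plot
plt.show()

X = df.drop(columns=["failed"])                # hypothetical label column
y = df["failed"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)      # start with a simple model
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```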

Dipanjan's toolkit looks as follows:

Along with picking the right tools, he recommends that practitioners always go with the simplest solution unless complexity adds substantial value, and, last but not least, he urges people not to ignore documentation.

To those looking to break into the world of data science, Dipanjan suggests a hybrid approach, i.e. learn the concepts, code them up, and apply them to real-world datasets.

First, learn all the math and concepts and then try to actually apply the methods you have learnt

In the long, tedious process of learning, Dipanjan warns that people might lose focus and get sidetracked into wondering why they are even learning a certain method. To remedy this, he insists on pairing learning with application, without deviating from the goal, if one aims to become a good data scientist.

Addressing the overwhelming hype around AI and ML, Dipanjan says he is already witnessing the dust settle, with companies now actually starting to realise both the limitations and the value of AI. Deep learning and deep transfer learning are starting to provide value for companies working on complex problems involving unstructured data like images, audio, video and text, and things are only going to get bigger and better with advanced tools and hardware in the future. However, he admits that there is definitely still a fair bit of hype out there.

Traditional machine learning models like linear and logistic regression will never go out of fashion

No matter how advanced the field gets, he believes that traditional machine learning models like linear and logistic regression will never go out of fashion, since they are the bread and butter of various organisations and use-cases out there. And models that are easy to explain, including linear models and decision trees, will continue to be used extensively.

Going forward, he is optimistic that use-cases and applications such as optimising manufacturing, predicting demand and sales, inventory planning, logistics and routing, infrastructure management optimisation, and enhancing customer support and experience will continue to be the key drivers for almost all major organisations over the next decade.

When it comes to breakthroughs, Dipanjan expects something big to happen in newer domains like self-learning, continuous-learning, meta-learning and reinforcement learning.

Always remember to challenge others' opinions with a healthy mindset, because a good data scientist doesn't just follow instructions blindly.

Talking about his tireless efforts to guide youngsters, he recollects how not having a mentor had been a major hindrance and how he had to unlearn and relearn over time to correct his misconceptions. To help aspirants avoid the same mistakes, he mentors them whenever possible.

On a concluding note, Dipanjan said that he is mightily impressed by the relentless efforts of the data science community to share ideas through blogs, vlogs and online forums. Confessing his love for Analytics India Magazine, Dipanjan spoke about how AIM has been fostering a rich analytics ecosystem in India by reaching out to the global community.

Dipanjan will be speaking at Analytics India Magazine's inaugural virtual conference, Plugin, on 28 May 2020. For more information, check our portal here.


Original post:
The ML Expert Who Hated Mathematics: Interview With Dipanjan Sarkar - Analytics India Magazine

Machine Learning Improves Weather and Climate Models – Eos

Both weather and climate models have improved drastically in recent years, as advances in one field have tended to benefit the other. But there are still significant uncertainties in model outputs that are not accurately quantified. That's because the processes that drive climate and weather are chaotic, complex, and interconnected in ways that researchers have yet to describe in the complex equations that power numerical models.

Historically, researchers have used approximations called parameterizations to model the relationships underlying small-scale atmospheric processes and their interactions with large-scale atmospheric processes. Stochastic parameterizations have become increasingly common for representing the uncertainty in subgrid-scale processes, and they are capable of producing fairly accurate weather forecasts and climate projections. But it's still a mathematically challenging method. Now researchers are turning to machine learning to bring more efficiency to mathematical models.

Here Gagne et al. evaluate the use of a class of machine learning networks known as generative adversarial networks (GANs) with a toy model of the extratropical atmosphere, a model first presented by Edward Lorenz in 1996 (and thus known as the L96 system) that has frequently been used as a test bed for stochastic parameterization schemes. The researchers trained 20 GANs, with varied noise magnitudes, and identified a set that outperformed a hand-tuned parameterization in L96. The authors found that the success of the GANs in providing accurate weather forecasts was predictive of their performance in climate simulations: the GANs that provided the most accurate weather forecasts also performed best in climate simulations, though they did not perform as well in offline evaluations.
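For readers unfamiliar with the test bed, the single-scale Lorenz '96 system is just a small set of coupled ordinary differential equations. The sketch below integrates it with a standard fourth-order Runge-Kutta step; it shows the toy model only, not the GAN parameterization evaluated in the study.

```python
# Minimal sketch of the single-scale Lorenz '96 system often used as a test bed
# for parameterization studies; this is the toy model only, not the GAN scheme.
import numpy as np

def l96_tendency(x, forcing=8.0):
    """dX_k/dt = (X_{k+1} - X_{k-2}) * X_{k-1} - X_k + F, with cyclic indexing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing=8.0):
    """One fourth-order Runge-Kutta step."""
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = l96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = l96_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)      # 40 grid points, constant forcing
x[0] += 0.01               # small perturbation to trigger chaotic behaviour
for _ in range(1000):
    x = rk4_step(x, dt=0.01)
print(x[:5])
```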

The study provides one of the first practically relevant evaluations of machine learning for uncertain parameterizations. The authors conclude that GANs are a promising approach for the parameterization of small-scale but uncertain processes in weather and climate models. (Journal of Advances in Modeling Earth Systems (JAMES), https://doi.org/10.1029/2019MS001896, 2020)

Kate Wheeling, Science Writer

See more here:
Machine Learning Improves Weather and Climate Models - Eos

Machine learning: the not-so-secret way of boosting the public sector – ITProPortal

Machine learning is by no means a new phenomenon. It has been used in various forms for decades, but it is very much a technology of the present due to the massive increase in the data upon which it thrives. It has been widely adopted by businesses, reducing the time taken to distil insight from large volumes of customer data and improving the value of that insight.

However, in the public sector there is a different story. Despite being championed by some in government, machine learning has often faced a reaction of concern and confusion. This is not intended as general criticism; in many cases it reflects the greater value that civil servants place on being ethical and fair than some commercial sectors do.

One fear is that, if the technology is used in place of humans, unfair judgements might not be noticed or costly mistakes in the process might occur. Furthermore, as many decisions made by government can dramatically affect people's lives and livelihoods, decisions often become highly subjective and discretionary judgment is required. There are also those still scarred by films such as I, Robot, but that's a discussion for another time.

Fear of the unknown is human nature, so fear of unfamiliar technology is common. But such fears are often unfounded, and providing an understanding of what the technology does is an essential first step in overcoming this wariness. So for successful digital transformation, not only do the civil servants considering such technologies need to become comfortable with their use, but the general public needs to be reassured that the technology is there to assist, not replace, human decisions affecting their future health and well-being.

There's a strong case to be made for greater adoption of machine learning across a diverse range of activities. The basic premise of machine learning is that a computer can derive a formula from looking at lots of historical data that enables the prediction of certain things the data describes. This formula is often termed an algorithm or a model. We use this algorithm with new data to make decisions for a specific task, or we use the additional insight that the algorithm provides to enrich our understanding and drive better decisions.
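As a toy illustration of that premise, the sketch below fits a model to a handful of invented historical records and then applies it to new cases; every number and column here is made up purely for illustration.

```python
# Toy sketch of the premise above: derive a "formula" (model) from historical
# records, then apply it to new cases. All data is invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: [age, prior_visits, days_since_last_contact]
history = [[34, 2, 10], [71, 9, 3], [45, 1, 40], [66, 7, 5], [29, 0, 90]]
outcome = [0, 1, 0, 1, 0]      # e.g. 1 = needed follow-up support

model = RandomForestClassifier(random_state=0).fit(history, outcome)

new_cases = [[68, 8, 4], [31, 1, 60]]
print(model.predict(new_cases))        # decision for each new case
print(model.predict_proba(new_cases))  # additional insight: predicted likelihoods
```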

For example, machine learning can analyse patients' interactions in the healthcare system and highlight which combinations of therapies, in what sequence, offer the highest success rates for patients, and perhaps how this regime differs for different age ranges. When combined with some decisioning logic that incorporates resources (availability, effectiveness, budget, etc.), it's possible to use the computers to model how scarce resources could be deployed with maximum efficiency to get the best tailored regime for patients.

When we then automate some of this, machine learning can even identify areas for improvement in real time, far faster than humans, and it can do so without bias, ulterior motives or fatigue-driven error. So, rather than being a threat, it should perhaps be viewed as a reinforcement for human effort in creating fairer and more consistent service delivery.

Machine learning is an iterative process; as the machine is exposed to new data and information, it adapts through a continuous feedback loop, which in turn provides continuous improvement. As a result, it produces more reliable results over time and ever more finely tuned and improved decision-making. Ultimately, it's a tool for driving better outcomes.

The opportunities for AI to enhance service delivery are many. Another example in healthcare is Computer Vision (another branch of AI), which is being used in cancer screening and diagnosis. We're already at the stage where AI, trained from huge libraries of images of cancerous growths, is better at detecting cancer than human radiologists. This application of AI has numerous examples, such as work being done at Amsterdam UMC to increase the speed and accuracy of tumour evaluations.

But let's not get this picture wrong. Here, the true value is in giving the clinician more accurate insight or a second opinion that informs their diagnosis and, ultimately, the patient's final decision regarding treatment. A machine is there to do the legwork, but the decision to start a programme of cancer treatment remains with humans.

Acting with this enhanced insight enables doctors to become more efficient as well as more effective. By combining the results of CT scans with advanced genomics using analytics, the technology can assess how patients will respond to certain treatments. This means clinicians avoid the stress, side effects and cost of putting patients through procedures with limited efficacy, while reducing waiting times for those patients whose condition would respond well. Yet full-scale automation could run the risk of creating a lot more VOMIT.

Victims Of Modern Imaging Technology (VOMIT) is a new phenomenon where a condition such as a malignant tumour is detected by imaging, and thus at first glance it would seem wise to remove it. However, medical procedures to remove it carry a morbidity risk which may be greater than the risk the tumour presents during the patient's likely lifespan. Here, ignorance could be bliss for the patient, and doctors would examine the patient holistically, including mental health, emotional state, family support and many other factors that remain well beyond the grasp of AI to assimilate into an ethical decision.

All decisions like these have a direct impact on peoples health and wellbeing. With cancer, the faster and more accurate these decisions are, the better. However, whenever cost and effectiveness are combined there is an imperative for ethical judgement rather than financial arithmetic.

Healthcare is a rich seam for AI, but its application is far wider. For instance, machine learning could also support policymakers in planning housebuilding and social housing allocation initiatives, where it could both reduce the time taken for a decision and make it more robust. Using AI in infrastructure departments could allow road surface inspections to be continuously updated via cheap sensors or cameras in all council vehicles (or cloud-sourced in some way). The AI could not only optimise repair work (human or robot) but also potentially identify causes, and then determine where strengthened roadways would cost less in whole-life terms than regular repairs, or where a different road layout would reduce wear.

In the US, government researchers are already using machine learning to help officials make quick and informed policy decisions on housing. Using analytics, they analyse the impact of housing programmes on millions of lower-income citizens, drilling down into factors such as quality of life, education, health and employment. This instantly generates insightful, accessible reports for the government officials making the decisions. Now they can enact policy decisions as soon as possible for the benefit of residents.

While some of the fears about AI are fanciful, there is a genuine cause for concern about the ethical deployment of such technology. In our healthcare example, allocation of resources based on gender, sexuality, race or income wouldn't be appropriate unless these specifically had an impact on the prescribed treatment or its potential side-effects. This is self-evident to a human, but a machine would need this to be explicitly defined. Logically, a machine would likely display bias towards those groups whose historical data gave better resultant outcomes, thus perpetuating any human equality gap present in the training data.

The recent review by the Committee on Standards in Public Life into AI and its ethical use by government and other public bodies concluded that there are serious deficiencies in regulation relating to the issue, although it stopped short of recommending the establishment of a new regulator.

The review was chaired by crossbench peer Lord Jonathan Evans, who commented:

Explaining AI decisions will be the key to accountability, but many have warned of the prevalence of Black Box AI. However, our review found that explainable AI is a realistic and attainable goal for the public sector, so long as government and private companies prioritise public standards when designing and building AI systems.

Fears of machine learning replacing all human decision-making need to be debunked as myth: this is not the purpose of the technology. Instead, it must be used to augment human decision-making, unburdening people from the time-consuming job of managing and analysing huge volumes of data. Once its role is made clear to all those with responsibility for implementing it, machine learning can be applied across the public sector, contributing to life-changing decisions in the process.

Find out more on the use of AI and machine learning in government.

Simon Dennis, Director of AI & Analytics Innovation, SAS UK

Here is the original post:
Machine learning: the not-so-secret way of boosting the public sector - ITProPortal

Machine Learning: Making Sense of Unstructured Data and Automation in Alt Investments – Traders Magazine

The following was written by Harald Collet, CEO at Alkymi, and Hugues Chabanis, Product Portfolio Manager, Alternative Investments at SimCorp.

Institutional investors are buckling under the operational constraint of processing hundreds of data streams from unstructured data sources such as email, PDF documents, and spreadsheets. These data formats bury employees in low-value copy-paste workflows and block firms from capturing valuable data. Here, we explore how Machine Learning (ML), paired with a better operational workflow, can enable firms to more quickly extract insights for informed decision-making and help govern the value of data.

According to McKinsey, the average professional spends 28% of the workday reading and answering an average of 120 emails, on top of the 19% spent on searching and processing data. The issue is even more pronounced in information-intensive industries such as financial services, as valuable employees are also required to spend needless hours every day processing and synthesizing unstructured data. Transformational change, however, is finally on the horizon. Gartner research estimates that by 2022, one in five workers engaged in mostly non-routine tasks will rely on artificial intelligence (AI) to do their jobs. And embracing ML will be a necessity for the digital transformation demanded both by the market and by the changing expectations of the workforce.

For institutional investors that are operating in an environment of ongoing volatility, tighter competition, and economic uncertainty, using ML to transform operations and back-office processes offers a unique opportunity. In fact, institutional investors can capture up to 15-30% efficiency gains by applying ML and intelligent process automation in operations (Boston Consulting Group, 2019), which in turn creates operational alpha through improved customer service and agile, front-to-back process redesign.

Operationalizing machine learning workflows

ML has finally reached the point of maturity where it can deliver on these promises. In fact, AI has flourished for decades, but the deep learning breakthroughs of the last decade have played a major role in the current AI boom. When it comes to understanding and processing unstructured data, deep learning solutions provide much higher levels of potential automation than traditional machine learning or rule-based solutions. Rapid advances in open source ML frameworks and tools, including natural language processing (NLP) and computer vision, have made ML solutions more widely available for data extraction.

Asset class deep-dive: Machine learning applied to Alternative investments

In a 2019 industry survey conducted by InvestOps, data collection (46%) and efficient processing of unstructured data (41%) were cited as the top two challenges European investment firms faced when supporting Alternatives.

This is no surprise, as Alternatives assets present an acute data management challenge and are costly, difficult, and complex to manage, largely due to the unstructured nature of Alternatives data. This data is typically received by investment managers in the form of email with a variety of PDF documents or Excel templates that require significant operational effort and human understanding to interpret, capture, and utilize. For example, transaction data is typically received by investment managers as a PDF document via email or an online portal. In order to make use of this mission-critical data, the investment firm has to manually retrieve, interpret, and process documents in a multi-level workflow involving 3-5 employees on average.
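As a rough sketch of the kind of extraction step being described, the snippet below pulls a single field out of a PDF using the open-source pdfplumber library and a regular expression; the file name, document layout, and field label are assumptions for illustration, not an actual fund template.

```python
# Sketch of pulling one field out of a PDF notice with pdfplumber and a regex.
# The file name, layout, and field label are hypothetical assumptions.
import re
import pdfplumber

with pdfplumber.open("capital_call_notice.pdf") as pdf:
    text = "\n".join(page.extract_text() or "" for page in pdf.pages)

# Look for a line such as "Call Amount: USD 1,250,000.00" (assumed format)
match = re.search(r"Call Amount:\s*USD\s*([\d,]+\.\d{2})", text)
if match:
    call_amount = float(match.group(1).replace(",", ""))
    print(f"Extracted call amount: {call_amount:,.2f}")
else:
    print("Field not found - route the document for manual review")
```

In practice, an ML extraction model replaces the brittle regular expression, but the surrounding workflow (retrieve, interpret, route) is the part being automated.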

The exceptionally low straight-through-processing (STP) rates already suffered by investment managers working with alternative investments are a problem that will only deteriorate further as Alternatives become an increasingly important asset class, predicted by Preqin to rise to $14 trillion in AUM by 2023 from $10 trillion today.

Specific challenges faced by investment managers dealing with manual Alternatives workflows are:

Within the Alternatives industry, various attempts have been made to use templates or standardize the exchange of data. However, these attempts have so far failed, or are progressing very slowly.

Applying ML to process the unstructured data will enable workflow automation and real-time insights for institutional investment managers today, without needing to wait for a wholesale industry adoption of a standardized document type like the ILPA template.

To date, the lack of straight-through-processing (STP) in Alternatives has either resulted in investment firms putting in significant operational effort to build out an internal data processing function, or reluctantly going down the path of adopting an outsourcing workaround.

However, applying a digital approach, more specifically ML, to workflows in the front, middle and back office can drive a number of improved outcomes for investment managers, including:

Trust and control are critical when automating critical data processing workflows. This is achieved with a human-in-the-loop design that puts the employee squarely in the driver's seat, with features such as confidence scoring thresholds, randomized sampling of the output, and second-line verification of all STP data extractions. Validation rules on every data element can ensure that high-quality output data is generated and normalized to a specific data taxonomy, making data immediately available for action. In addition, processing documents with computer vision can allow all extracted data to be traced to the exact source location in the document (such as a footnote in a long quarterly report).
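A minimal sketch of how such confidence thresholds and randomized sampling might route each extracted field is shown below; the threshold and sampling rate are arbitrary illustrative choices, not values from any vendor's product.

```python
# Sketch of human-in-the-loop routing for extracted fields: auto-accept only
# high-confidence values, and still sample some of those for second-line review.
# The threshold and sampling rate are arbitrary choices for illustration.
import random

CONFIDENCE_THRESHOLD = 0.95
AUDIT_SAMPLE_RATE = 0.05

def route_extraction(field_name, value, confidence):
    """Return 'auto', 'audit', or 'review' for one extracted data element."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "review"                      # send to an employee
    if random.random() < AUDIT_SAMPLE_RATE:
        return "audit"                       # randomized sampling of STP output
    return "auto"                            # straight-through processing

print(route_extraction("call_amount", 1_250_000.00, confidence=0.99))
print(route_extraction("due_date", "2020-06-30", confidence=0.81))
```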

Reverse outsourcing to govern the value of your data

Big data is often considered the new oil or superpower, and there are, of course, many third-party service providers standing at the ready, offering to help institutional investors extract and organize the ever-increasing amount of unstructured big data which is not easily accessible, either because of the format (emails, PDFs, etc.) or location (web traffic, satellite images, etc.). To overcome this, some turn to outsourcing, but while this removes the heavy manual burden of data processing for investment firms, it generates other challenges, including governance and lack of control.

Embracing ML and unleashing its potential

Investment managers should think of ML as an in-house co-pilot that can help their employees in various ways. First, it is fast: documents are processed instantly, and when confidence levels are high, processed data requires only minimal review. Second, ML can act as an initial set of eyes, initiating the proper workflows based on the documents that have been received. Third, instead of collecting only the minimum data required, ML can collect everything, giving users the option to further gather and reconcile data that might otherwise have been ignored and lost due to a lack of resources. Finally, ML will not forget the format of any historical document, whether from yesterday or 10 years ago, safeguarding institutional knowledge that is commonly lost during cyclical employee turnover.

ML has reached the maturity where it can be applied to automate narrow and well-defined cognitive tasks, and it can help transform how employees work in financial services. However, many early adopters have paid a price for focusing too much on the ML technology and not enough on the end-to-end business process and workflow.

The critical gap has been in planning for how to operationalize ML for specific workflows. ML solutions should be designed collaboratively with business owners and target narrow and well-defined use cases that can successfully be put into production.

Alternatives assets are costly, difficult, and complex to manage, largely due to the unstructured nature of Alternatives data. Processing unstructured data with ML is a use case that generates high levels of STP through the automation of manual data extraction and data processing tasks in operations.

Using ML to automatically process unstructured data for institutional investors will generate operational alpha; a level of automation necessary to make data-driven decisions, reduce costs, and become more agile.

The views represented in this commentary are those of its author and do not reflect the opinion of Traders Magazine, Markets Media Group or its staff. Traders Magazine welcomes reader feedback on this column and on all issues relevant to the institutional trading community.

More:
Machine Learning: Making Sense of Unstructured Data and Automation in Alt Investments - Traders Magazine

Google is using machine learning to improve the quality of Duo calls – The Verge

Google has rolled out a new technology, called WaveNetEQ, to improve audio quality in Duo calls when the service can't maintain a steady connection. It's based on technology from Google's DeepMind division and aims to replace audio jitter with artificial noise that sounds just like human speech, generated using machine learning.

If you've ever made a call over the internet, chances are you've experienced audio jitter. It happens when packets of audio data sent as part of the call get lost along the way or otherwise arrive late or in the wrong order. Google says that 99 percent of Duo calls experience packet loss: 20 percent of these lose over 3 percent of their audio, and 10 percent lose over 8 percent. That's a lot of audio to replace.

Every calling app has to deal with this packet loss somehow, but Google says that these packet loss concealment (PLC) processes can struggle to fill gaps of 60ms or more without sounding robotic or repetitive. WaveNetEQ's solution is based on DeepMind's neural network technology, and it has been trained on data from over 100 speakers in 48 different languages.

Here are a few audio samples from Google comparing WaveNetEQ against NetEQ, a commonly used PLC technology. Here's how it sounds when it's trying to replace 60ms of packet loss:

Here's a comparison when a call is experiencing packet loss of 120ms:

There's a limit to how much audio the system can replace, though. Google's tech is designed to replace short sounds, rather than whole words. So after 120ms, it fades out and produces silence. Google says it evaluated the system to make sure it wasn't introducing any significant new sounds. Plus, all of the processing also needs to happen on-device, since Google Duo calls are end-to-end encrypted by default. Once the call's real audio resumes, WaveNetEQ will seamlessly fade back to reality.
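The behaviour described above can be sketched in a toy form: conceal up to 120ms of lost audio with generated samples and fade towards silence, then hand back to the real stream. This is purely illustrative and is not Google's implementation; the stand-in "generator" here is just low-level noise rather than a trained speech model.

```python
# Toy sketch of the behaviour described above (not Google's implementation):
# conceal short gaps with generated samples, fading out past the 120ms limit.
import numpy as np

SAMPLE_RATE = 48_000
MAX_CONCEAL_MS = 120

def conceal_gap(gap_ms, generate_samples):
    """Return synthetic audio covering a lost span of gap_ms milliseconds."""
    gap_samples = int(SAMPLE_RATE * gap_ms / 1000)
    conceal_samples = int(SAMPLE_RATE * min(gap_ms, MAX_CONCEAL_MS) / 1000)
    audio = np.zeros(gap_samples)                       # silence beyond the limit
    audio[:conceal_samples] = generate_samples(conceal_samples)
    audio[:conceal_samples] *= np.linspace(1.0, 0.0, conceal_samples)  # fade out
    return audio

# Stand-in "model": quiet white noise instead of a trained speech generator
filled = conceal_gap(180, generate_samples=lambda n: np.random.randn(n) * 0.01)
print(filled.shape)
```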

It's a neat little bit of technology that should make calls that much easier to understand when the internet fails them. The technology is already available for Duo calls made on Pixel 4 phones, thanks to the handset's December feature drop, and Google says it's in the process of rolling it out to other, unnamed handsets.

Link:
Google is using machine learning to improve the quality of Duo calls - The Verge

Self-supervised learning is the future of AI – The Next Web

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's master plan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, and unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as (with some caveats) reviewing the huge amount of content being posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).
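To make that trial-and-error loop concrete, here is a minimal tabular Q-learning sketch on a tiny, invented corridor environment; it is illustrative only and bears no relation to the game-playing systems mentioned above.

```python
# Minimal tabular Q-learning sketch of the trial-and-error loop described above.
# The 5-state corridor environment is invented purely for illustration.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != n_states - 1:    # reaching the last state ends the episode
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)      # explore
        else:
            action = int(np.argmax(q[state]))          # exploit what was learned
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        q[state, action] += alpha * (
            reward + gamma * q[next_state].max() - q[state, action]
        )
        state = next_state

print(q)   # the learned values end up favouring moves toward the reward
```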

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to research labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, far more than a human could play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another.

Reinforcement learning really shows its limits when it wants to learn to solve real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? And it's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after their birth. While there's debate on how much of these capabilities are hardwired into the brain and how much of it is learned, what is for sure is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and System 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 covers the kind of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody has a completely good answer as to which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class or model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
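A toy sketch of that fill-in-the-blanks recipe: mask one token in each sequence and train a small network to predict it from the rest. The data below is random, so the model has nothing real to learn; with actual text, the same loop is what lets a network build useful representations without any labels.

```python
# Toy sketch of the "fill in the blanks" idea: mask one token per sequence and
# train a small network to predict it. Data and model are invented; random
# tokens carry no real structure, so this only demonstrates the mechanism.
import torch
import torch.nn as nn

vocab_size, seq_len, mask_id = 20, 8, 0          # token 0 reserved as [MASK]
model = nn.Sequential(
    nn.Embedding(vocab_size, 32),
    nn.Flatten(),
    nn.Linear(32 * seq_len, 64),
    nn.ReLU(),
    nn.Linear(64, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    tokens = torch.randint(1, vocab_size, (16, seq_len))   # unlabeled sequences
    positions = torch.randint(0, seq_len, (16,))
    targets = tokens[torch.arange(16), positions]           # the hidden tokens
    masked = tokens.clone()
    masked[torch.arange(16), positions] = mask_id           # suppress a piece
    loss = loss_fn(model(masked), targets)                  # predict what's missing
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())
```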

The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.
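Filling in the blanks with one of these pretrained models takes only a few lines, assuming the open-source Hugging Face transformers library is installed; the prompt sentence is just an example.

```python
# Sketch of "filling in the blanks" with a pretrained masked language model,
# assuming the Hugging Face transformers library (and a model download).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("Deep learning requires huge amounts of [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```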

More recently, AI researchers have proven that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers could enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored method for approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z that computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video), and to select the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
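A schematic sketch of that selection step is below: score candidate predictions Y (given X and a latent Z) with an energy function and keep the most compatible one. The quadratic energy used here is a stand-in for illustration, not LeCun's actual model.

```python
# Schematic sketch of the selection step in a latent-variable energy-based model:
# score candidate futures Y (given X and latent Z) and keep the lowest-energy one.
# The quadratic energy function is a stand-in for illustration only.
import numpy as np

def energy(x, y, z):
    """Lower energy = better compatibility between observation x and prediction y."""
    return np.sum((y - (x + z)) ** 2)

x = np.array([1.0, 2.0, 3.0])                  # current frame (toy vector)
candidates = [x + 0.9, x + 1.1, x - 2.0]       # possible futures Y
latents = [np.full(3, 1.0), np.full(3, -1.0)]  # choices of latent variable Z

best = min(
    ((energy(x, y, z), i) for i, y in enumerate(candidates) for z in latents),
    key=lambda t: t[0],
)
print("selected candidate:", best[1], "energy:", round(best[0], 3))
```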

"I think self-supervised learning is the future. This is what's going to allow our AI systems, our deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information outputted by the AI. In reinforcement learning, training the AI system is performed at scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output grows to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how the uncertainty problem works, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC

See the original post here:
Self-supervised learning is the future of AI - The Next Web

Data Science and Machine-Learning Platforms Market Size Analysis, Top Manufacturers, Shares, Growth Opportunities and Forecast to 2026 – Science In Me

New Jersey, United States: Market Research Intellect has added a new research report titled Data Science and Machine-Learning Platforms Market Professional Survey Report 2020 to its vast collection of research reports. The Data Science and Machine-Learning Platforms market is expected to grow positively over the 2020-2026 forecast period.

The Data Science and Machine-Learning Platforms market report studies past factors that helped the market to grow as well as, the ones hampering the market potential. This report also presents facts on historical data from 2011 to 2019 and forecasts until 2026, which makes it a valuable source of information for all the individuals and industries around the world. This report gives relevant market information in readily accessible documents with clearly presented graphs and statistics. This report also includes views of various industry executives, analysts, consultants, and marketing, sales, and product managers.

Key Players Mentioned in the Data Science and Machine-Learning Platforms Market Research Report:

Market Segment as follows:

The global Data Science and Machine-Learning Platforms Market report focuses closely on key industry players to identify potential growth opportunities, which, along with increased marketing activities, are projected to accelerate market growth throughout the forecast period. Additionally, the market is expected to grow immensely throughout the forecast period owing to some primary factors fuelling the growth of this global market. Finally, the report provides detailed profile and data information analysis of the leading Data Science and Machine-Learning Platforms companies.

Data Science and Machine-Learning Platforms Market by Regional Segments:

The chapter on regional segmentation describes the regional aspects of the Data Science and Machine-Learning Platforms market. This chapter explains the regulatory framework that is expected to affect the entire market. It illuminates the political scenario of the market and anticipates its impact on the market for Data Science and Machine-Learning Platforms.

The Data Science and Machine-Learning Platforms Market research presents a study combining primary as well as secondary research. The report gives insights on the key factors concerned with generating and limiting Data Science and Machine-Learning Platforms market growth. Additionally, the report also studies competitive developments, such as mergers and acquisitions, new partnerships, new contracts, and new product developments in the global Data Science and Machine-Learning Platforms market. The past trends and future prospects included in this report make it highly comprehensible for the analysis of the market. Moreover, the latest trends, product portfolio, demographics, geographical segmentation, and regulatory framework of the Data Science and Machine-Learning Platforms market have also been included in the study.

Ask For Discount (Special Offer: Get 25% discount on this report) @ https://www.marketresearchintellect.com/ask-for-discount/?rid=192097&utm_source=SI&utm_medium=888

Table of Contents

1 Introduction of Data Science and Machine-Learning Platforms Market
1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology
3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Data Science and Machine-Learning Platforms Market Outlook
4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Force Model
4.4 Value Chain Analysis

5 Data Science and Machine-Learning Platforms Market, By Deployment Model
5.1 Overview

6 Data Science and Machine-Learning Platforms Market, By Solution
6.1 Overview

7 Data Science and Machine-Learning Platforms Market, By Vertical
7.1 Overview

8 Data Science and Machine-Learning Platforms Market, By Geography
8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Data Science and Machine-Learning Platforms Market Competitive Landscape
9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles
10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix
11.1 Related Research

Complete Report is Available @ https://www.marketresearchintellect.com/product/global-data-science-and-machine-learning-platforms-market-size-and-forecast/?utm_source=SI&utm_medium=888

We also offer customization on reports based on specific client requirements:

1 - Free country-level analysis for any 5 countries of your choice.

2 - Free competitive analysis of any market players.

3 - Free 40 analyst hours to cover any other data points.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes
Market Research Intellect
New Jersey (USA)
Tel: +1-650-781-4080

Email: [emailprotected]

Get Our Trending Report

https://www.marketresearchblogs.com/

https://www.marktforschungsblogs.com/


Follow this link:
Data Science and Machine-Learning Platforms Market Size Analysis, Top Manufacturers, Shares, Growth Opportunities and Forecast to 2026 - Science In Me

DMway Analytics Offers Its AUTO-ML Platform Free of Charge to Every Ministry of Health Department and Covid-19 Research Center Globally – AiThority

DMway analytics, a leading provider of machine learning automation platforms, announced it was offering its predictive analytics and automated ML platform to every Ministry of Health department globally. In the USA this includes all state-level authorities, and elsewhere the equivalent.

The DMway Auto-ML platform was developed by leading Ph.D.s in the field of auto-machine learning and data science, and can transform non-scientists (analysts, BI and data experts) into capable, insightful Data Science Citizens. In practical terms, this means that the analysis of Covid-19 data, which can currently be carried out by very few people, can now be carried out by many. "We are weaponizing Data Science automation to fight back against the virus."


Machine learning and predictive analytics are going to be key in winning the Covid-19 battle. That much is clear. Analyzing data is essential to understanding the spread of the virus and the effectiveness of treatments. The world needs many more people analyzing the data. The insight from global information on the spread of the virus and its behavior will be key in minimizing the damage. The ability to empower thousands of citizen data scientists could potentially revolutionize the speed at which we can react to data as it is made available.


Gil Nizri, DMway analytics CEO, said: "The time is right for technology leaders to donate as much as they can to help the world in confronting this invisible and brutal enemy. We have invested millions of dollars in our tool, but free access at this critical time is essential. Machine learning will be a key tool in dealing with Covid-19. We cannot see the use of machine learning restricted to a few individuals with access to, and knowledge of, machine learning tools. We hope the DMway tool will be used to empower thousands of relevant people to analyze Covid-19 related data. It is simple to learn, and we will train people en masse, up to 100 at a time, via video link from our HQ here in Israel." Nizri added: "Our machine learning for Covid-19 course has been specially crafted over the past two weeks and adjusted for biologists, epidemic analysts and healthcare data experts. Covid-19 is the enemy. Let us fight this together. A global team against this killer."


Excerpt from:
DMway Analytics Offers Its AUTO-ML Platform Free of Charge to Every Ministry of Health Department and Covid-19 Research Center Globally - AiThority

Introduction to Machine Learning Course | Udacity

Introduction to Machine Learning Course

Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most straightforward ways to quickly gain insights and make predictions.

Machine learning brings together computer science and statistics to harness that predictive power. It's a must-have skill for all aspiring data analysts and data scientists, or anyone else who wants to wrestle all that raw data into refined trends and predictions.

This is a class that will teach you the end-to-end process of investigating data through a machine learning lens. It will teach you how to extract and identify useful features that best represent your data, a few of the most important machine learning algorithms, and how to evaluate the performance of your machine learning algorithms.

This course is also a part of our Data Analyst Nanodegree.

Read more from the original source:
Introduction to Machine Learning Course | Udacity

What is Machine Learning? | Types of Machine Learning …

Machine learning is sub-categorized into three types:

Supervised Learning: "Train me!"

Unsupervised Learning: "I am self-sufficient in learning."

Reinforcement Learning: "My life, my rules!" (Hit & Trial)

Supervised Learning is the approach where the learning is guided by a teacher. A labelled dataset acts as the teacher, and its role is to train the model or the machine. Once the model is trained, it can start making predictions or decisions when new data is given to it.

In Unsupervised Learning, by contrast, the model learns through observation and finds structures in the data on its own. Once the model is given a dataset, it automatically finds patterns and relationships by creating clusters within it. What it cannot do is add labels to those clusters: it cannot say this is a group of apples and that is a group of mangoes, but it will separate all the apples from the mangoes.

Suppose we present images of apples, bananas and mangoes to the model. Based on patterns and relationships, it creates clusters and divides the dataset into them. If new data is then fed to the model, it assigns it to one of the existing clusters.

Reinforcement Learning is the ability of an agent to interact with its environment and work out what the best outcome is. It follows the hit-and-trial approach: the agent is rewarded or penalized with a point for a correct or wrong answer, and on the basis of the positive reward points gained, the model trains itself. Once trained, it is ready to predict when new data is presented to it.
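To make the three styles concrete, here is a minimal sketch in Python (using scikit-learn for the first two cases). The fruit features, labels and reward values are invented purely for illustration and are not taken from the article above.

    # A minimal sketch of the three learning styles described above.
    # All numbers and labels are made up for illustration.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans

    # Supervised: a labelled dataset acts as the "teacher".
    X_train = np.array([[150, 0.9], [160, 0.95], [120, 0.3], [115, 0.25]])  # weight, redness
    y_train = ["apple", "apple", "mango", "mango"]
    clf = DecisionTreeClassifier().fit(X_train, y_train)
    print(clf.predict([[155, 0.85]]))  # expected: ['apple']

    # Unsupervised: no labels; the model only groups similar items into clusters
    # and cannot name them.
    X_unlabelled = np.array([[150, 0.9], [158, 0.92], [118, 0.28], [122, 0.31]])
    print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_unlabelled))

    # Reinforcement: an agent tries actions, is rewarded or penalised, and
    # gradually prefers the action with the higher average reward.
    rewards = {"a": 1.0, "b": 0.2}                      # hidden from the agent
    estimates = {"a": 0.0, "b": 0.0}
    counts = {"a": 0, "b": 0}
    rng = np.random.default_rng(0)
    for _ in range(100):
        explore = rng.random() < 0.2
        action = str(rng.choice(["a", "b"])) if explore else max(estimates, key=estimates.get)
        counts[action] += 1
        estimates[action] += (rewards[action] - estimates[action]) / counts[action]
    print(max(estimates, key=estimates.get))            # expected: 'a'

The point of the sketch is only the contrast: the first model is told the right answers, the second is not, and the third discovers them through feedback.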

Continue reading here:
What is Machine Learning? | Types of Machine Learning ...

Lecturer/Senior Lecturer in Artificial Intelligence and / or Machine learning job with UNIVERSITY OF BRISTOL | 196709 – Times Higher Education (THE)

Lecturer/Senior Lecturer in Artificial Intelligence and / or Machine learning

Job number: ACAD104467
Division/School: School of Computer Science, Electrical and Electronic Engineering and Engineering Maths
Contract type: Open Ended
Working pattern: Full time
Salary: £38,017 - £59,135 per annum
Closing date for applications: 11-Mar-2020

The Department of Computer Science, University of Bristol, is seeking to appoint a number of Lecturers (analogous to Assistant Professor) or Senior Lecturers in Artificial Intelligence and / or Machine learning, the level of appointment depending on the experience of the successful candidate.

You will be expected both to deliver outstanding teaching and to undertake internationally leading research, as well as carrying out appropriate administrative tasks. There are opportunities to play a significant role in shaping and leading Bristol's activities in AI and Data Science, including the new UKRI Centre for Doctoral Training in Interactive AI. Teaching responsibility will cover areas including data-driven computer science, machine learning and artificial intelligence, for advanced undergraduates as well as postgraduates.

The Department of Computer Science is an international centre of excellence in the foundations and applications of computing, ranked 4th in the UK for research intensity by the 2014 REF. The Department is already home to significant activity in Artificial Intelligence, Machine Learning and Data Science, both within the Intelligent Systems Laboratory research group and in closely associated neighbouring research groups in Computer Vision and Robotics. The University of Bristol is a leading institution among the UK's Russell Group universities and is regularly placed among the top-ranking institutions in global league tables.

We are located in the centre of Bristol, consistently recognised as one of the UK's most liveable cities.

Informal enquires are welcome and can be directed to: Prof. Seth Bullock, Head of the Computer Science department (seth.bullock@bristol.ac.uk), and Prof. Peter Flach, Professor of Artificial Intelligence (peter.flach@bristol.ac.uk).

The posts are being offered on a full-time, open-ended contract. A recruitment supplement scheme is available, worth up to £5,000.

The closing date for applications is 23:59 on Wednesday 11th March and interviews are expected to take place in week commencing 6th April.

We welcome applications from all members of our community and are particularly encouraging those from diverse groups, such as members of the LGBT+ and BAME communities, to join us.

Read more here:
Lecturer/Senior Lecturer in Artificial Intelligence and / or Machine learning job with UNIVERSITY OF BRISTOL | 196709 - Times Higher Education (THE)

Artificial Intelligence and Machine Learning in the Operating Room – 24/7 Wall St.

Most applications of artificial intelligence (AI) and machine learning technology provide only data to physicians, leaving the doctors to form a judgment on how to proceed. Because AI doesn't actually perform any procedure or prescribe a course of medication, the software that diagnoses health problems does not have to pass a randomized clinical trial, as devices such as insulin pumps or new medications do.

A new study published Monday at JAMA Network discusses a trial of 68 patients undergoing elective noncardiac surgery under general anesthesia. The object of the trial was to determine whether a predictive early warning system for possible hypotension (low blood pressure) might reduce the time-weighted average of hypotension episodes during surgery.

In other words, not only would the device and its software keep track of the patient's mean blood pressure, it would also sound an alarm if there was an 85% or greater risk of the patient's blood pressure falling below 65 mm of mercury (Hg) in the next 15 minutes. The device also encouraged the anesthesiologist to take preemptive action.
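As a rough illustration of that alert rule only (not the device's actual software, which relies on a proprietary model trained on arterial waveform features), the logic might be sketched like this in Python; the predict_hypotension_risk function is a hypothetical stand-in:

    # Hypothetical sketch of the alert rule described above: warn when the
    # forecast probability of blood pressure dropping below 65 mm Hg within
    # the next 15 minutes reaches 85%.
    BP_THRESHOLD_MMHG = 65
    RISK_THRESHOLD = 0.85

    def should_alert(predicted_risk: float) -> bool:
        """Return True when the forecast risk crosses the 85% alarm level."""
        return predicted_risk >= RISK_THRESHOLD

    def monitor(waveform_window, predict_hypotension_risk):
        # predict_hypotension_risk stands in for the device's trained model;
        # it should return a probability between 0 and 1.
        risk = predict_hypotension_risk(waveform_window)
        if should_alert(risk):
            print(f"WARNING: {risk:.0%} risk of BP < {BP_THRESHOLD_MMHG} mm Hg within 15 minutes")

    # Toy usage with a stand-in model that always reports a 90% risk.
    monitor(waveform_window=[], predict_hypotension_risk=lambda window: 0.90)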

Patients in the control group were connected to the same AI device and software, but only routine pulse and blood pressure data were displayed. That means that the anesthesiologist had no early warning about a hypotension event and could take no action to prevent the event.

Among patients fully connected to the device and software, the median time-weighted average of hypotension was 0.1 mm Hg, compared to an average of 0.44 mm Hg in the control group. In the control group, the median time of hypotension per patient was 32.7 minutes, while it was just 8.0 minutes among the other patients. Most important, perhaps, two patients in the control group died from serious adverse events, while no patients connected to the AI device and software died.

The algorithm used by the device was developed by a separate group of researchers, who had trained the software on thousands of waveform features to identify a possible hypotension event 15 minutes before it occurs during surgery. The devices used were a Flotrac IQ sensor with the early warning software installed and a HemoSphere monitor. The devices are made by Edwards Lifesciences, and five of the eight researchers who developed the algorithm were affiliated with Edwards. The study itself was conducted in the Netherlands at Amsterdam University Medical Centers.

In an editorial at JAMA Network, associate editor Derek Angus wrote:

The final model predicts the likelihood of future hypotension via measurement of multiple variables characterizing dynamic interactions between left ventricular contractility, preload, and afterload. Although clinicians can look at arterial pulse pressure waveforms and, in combination with other patient features, make educated guesses about the possibility of upcoming episodes of hypotension, the likelihood is high that an AI algorithm could make more accurate predictions.

Among the past decade's biggest health news stories were the development of immunotherapies for cancer and a treatment for cystic fibrosis. AI is off to a good start in the new decade.

By Paul Ausick

View original post here:
Artificial Intelligence and Machine Learning in the Operating Room - 24/7 Wall St.

Expert: Don’t overlook security in rush to adopt AI – The Winchester Star

MIDDLETOWN - Lord Fairfax Community College hosted technologist Gary McGraw on Wednesday night. He spoke about the cutting-edge work being done at the Berryville Institute of Machine Learning, which he co-founded a year ago.

The talk was part of the college's Tech Bytes series of presentations by industry professionals connected to technology.

The Berryville Institute of Machine Learning is working to educate tech engineers and others about the risks they need to think about while building, adopting and designing machine learning systems. These systems involve computer programs called neural networks that learn to perform a task, such as facial recognition, by being trained on lots of data, such as pictures, McGraw said.

"It's important that we don't take security for granted or overlook security in the rush to adopt AI everywhere," McGraw said.

One easily relatable adaptation of this technology is in smartphones, which are using AI to analyze conversations, photos and web searches, all to process peoples data, he said.

"There should be privacy by default. There is not. They are collecting your data; you are the product," he said.

The institute anticipates releasing a report within a week or two titled "An Architectural Risk Analysis of Machine Learning Systems," in which 78 risks in machine learning systems are identified.

McGraw told the audience that, while not interchangeable terms, artificial intelligence and machine learning have been sold as magic technology that will miraculously solve problems. He said that is wrong. The raw data used in machine learning can be manipulated and it can open up systems to risks, such as system attacks that could compromise information, even confidential information.

McGraw cited a few of those risks.

One risk is someone fooling a machine learning system by presenting maliciously crafted input that causes the system to make a false prediction or categorization. Another risk is that if an attacker can intentionally manipulate the data being used by a machine learning system, the entire system can be compromised.
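A toy, hypothetical illustration of the second risk: the same model, fitted once on honest labels and once on labels an attacker has flipped, starts giving a different answer for the same input. The data below is invented and far smaller than anything realistic.

    # Minimal sketch of training-data manipulation, with invented numbers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]])
    y_clean = np.array([0, 0, 0, 1, 1, 1])        # honest labels
    y_poisoned = np.array([0, 0, 0, 0, 0, 1])     # attacker flips two labels

    probe = [[0.7]]
    print(LogisticRegression().fit(X, y_clean).predict(probe))     # expected: [1]
    print(LogisticRegression().fit(X, y_poisoned).predict(probe))  # expected: [0]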

One of the most often discussed risks is data confidentiality. McGraw said data protection is already difficult enough without machine learning. In machine learning, there is a unique challenge in protecting data because it is possible that through subtle means information contained in the machine learning model could be extracted.

LFCC student Myra Diaz, who is studying computer science at the college, attended the program.

"I like it. I am curious and so interested to see how we can get a computer to be judgmental in a positive way, such as judging what it is seeing," Diaz said.

The remaining speakers for this year's Tech Bytes programs are:

6 p.m. Feb. 19: Kay Connelly, Informatics.

1 p.m. March 11: Retired Secretary of the Navy Richard Danzig

6 p.m. April 8: Heather Wilson, Analytics, L Brands

Read more from the original source:
Expert: Don't overlook security in rush to adopt AI - The Winchester Star

VUniverse Named One of Five Finalists for SXSW Innovation Awards: AI & Machine Learning Category – PRNewswire

NEW YORK, Feb. 5, 2020 /PRNewswire/ -- VUniverse, a personalized movie and show recommendation platform that enables users to browse their streaming services in one app (a channel guide for the streaming universe), announced today it's been named one of five finalists in the AI & Machine Learning category for the 23rd annual SXSW Innovation Awards.

The SXSW Innovation Awards recognizes the most exciting tech developments in the connected world. During the showcase on Saturday, March 14, 2020, VUniverse will offer first-look demos of its platform as attendees explore this year's most transformative and forward-thinking digital projects. They'll be invited to experience how VUniverse utilizes AI to cross-reference all streaming services a user subscribes to and then delivers personalized suggestions of what to watch.

"We're honored to be recognized as a finalist for the prestigious SXSW Innovation Awards and look forward to showcasing our technology that helps users navigate the increasingly ever-changing streaming service landscape," said VUniverse co-founder Evelyn Watters-Brady. "With VUniverse, viewers will spend less time searching and more time watching their favorite movies and shows, whether it be a box office hit or an obscure indie gem."

About VUniverse: VUniverse is a personalized movie and show recommendation platform that enables users to browse their streaming services in one app (a channel guide for the streaming universe). Using artificial intelligence, VUniverse creates a unique taste profile for every user and serves smart lists of curated titles using mood, genre, and user-generated tags, all based on content from the user's existing subscription services. Users can also create custom watchlists and share them with friends and family.

Media Contact: Jessica Cheng, jessica@relativity.ventures

SOURCE VUniverse

Continued here:
VUniverse Named One of Five Finalists for SXSW Innovation Awards: AI & Machine Learning Category - PRNewswire

The Global Machine Learning Market is expected to grow by USD 11.16 bn during 2020-2024, progressing at a CAGR of 39% during the forecast period -…

NEW YORK, March 30, 2020 /PRNewswire/ --

Global Machine Learning Market 2020-2024: The analyst has been monitoring the global machine learning market, which is poised to grow by USD 11.16 bn during 2020-2024, progressing at a CAGR of 39% during the forecast period. Our report on the global machine learning market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors.

Read the full report: https://www.reportlinker.com/p05082022/?utm_source=PRN

The report offers an up-to-date analysis regarding the current global market scenario, latest trends and drivers, and the overall market environment. The market is driven by increasing adoption of cloud-based offerings. In addition, increasing use of machine learning in customer experience management is anticipated to boost the growth of the global machine learning market as well.

Market Segmentation: The global machine learning market is segmented by end-user into BFSI, retail, telecommunications, healthcare, and others.

Geographic Segmentation: APAC, Europe, MEA, North America, and South America.

Key Trends for global machine learning market growth: This study identifies the increasing use of machine learning in customer experience management as the prime reason driving global machine learning market growth during the next few years.

Prominent vendors in the global machine learning market: We provide a detailed analysis of around 25 vendors operating in the global machine learning market in 2020-2024, including Alibaba Group Holding Ltd., Alphabet Inc., Amazon.com Inc., Cisco Systems Inc., Hewlett Packard Enterprise Development LP, International Business Machines Corp., Microsoft Corp., Salesforce.com Inc., SAP SE and SAS Institute Inc. The study was conducted using an objective combination of primary and secondary information, including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

Read the full report: https://www.reportlinker.com/p05082022/?utm_source=PRN

About Reportlinker: ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

Contact Clare: clare@reportlinker.com US: (339)-368-6001 Intl: +1 339-368-6001

View original content: http://www.prnewswire.com/news-releases/the-global-machine-learning-market-is-expected-to-grow-by-usd-11-16-bn-during-2020-2024--progressing-at-a-cagr-of-39-during-the-forecast-period-301031621.html

SOURCE Reportlinker

More here:
The Global Machine Learning Market is expected to grow by USD 11.16 bn during 2020-2024, progressing at a CAGR of 39% during the forecast period -...

Red Hat Survey Shows Hybrid Cloud, AI and Machine Learning are the Focus of Enterprises – Computer Business Review



Open source enterprise software firm Red Hat, now a subsidiary of IBM, has conducted its annual survey of its customers, which highlights just how prevalent artificial intelligence and machine learning are becoming, while a talent and skills gap is still slowing down companies' ability to enact digital transformation plans.

Here are the top three takeaways from Red Hat's customer survey:

When asked to best describe their company's approach to cloud infrastructure, 31 percent stated that they run a hybrid cloud, while 21 percent said their firm has a private-cloud-first strategy in place.

The main reasons cited for operating a hybrid cloud strategy were the security and cost benefits it provided. Some respondents noted that data integration was easier within a hybrid cloud.

Not everyone is fully sure about their approach yet, as 17 percent admitted they are in the process of establishing a cloud strategy, while 12 percent said they have no plans at all to focus on the cloud.

When it comes to digital transformation, there has been a notable rise in the number of firms that have undertaken transformation projects. In 2018, under a third of respondents (31 percent) said they were implementing new processes and technology; this year that number has nearly doubled, with 58 percent confirming they are introducing new technology.

Red Hat notes that: "The drivers for these projects vary. And the drivers also vary by the role of the respondent. System administrators care most about simplicity. IT architects focus on user experience and innovation. For managers, simplicity, user experience, and innovation are all tied for top priority. Developers prioritize innovation, which, overall, was cited as the most important reason to do digital transformation projects."

However, one in ten of those surveyed said they are facing a talent and skills gap that is slowing down the pace at which they can transform their business. The gap is being made worse by the number of new technologies being brought to market, such as artificial intelligence, machine learning and containerisation, the use of which is expected to grow significantly in the next 24 months.

Artificial intelligence and machine learning models and processes are the clear emerging technology for firms in 2019, with 30 percent saying that they are planning to implement an AI or ML project within the next 12 months.

However, enterprises are worried about the compatibility and complexity of implementing AI or ML, with 29 percent stating they are worried about evolving software stacks.

One in five (22 percent) respondents are worried about getting access to the right data. "The data aspect in particular is something that we often see overlooked; obtaining relevant data and cleansing or transforming it in ways that it's a useful input for models can be one of the most challenging aspects of an AI project," Red Hat notes.

Red Hat's survey was created by compiling 876 qualified responses from Red Hat customers during August and September of 2019.

Read the original post:
Red Hat Survey Shows Hybrid Cloud, AI and Machine Learning are the Focus of Enterprises - Computer Business Review

Put Your Money Where Your Strategy Is: Using Machine Learning to Analyze the Pentagon Budget – War on the Rocks

"A masterpiece" is how then-Deputy Defense Secretary Patrick Shanahan infamously described the Fiscal Year 2020 budget request. It would, he said, align defense spending with the U.S. National Defense Strategy, both funding the future capabilities necessary to maintain an advantage over near-peer powers Russia and China, and maintaining readiness for ongoing counter-terror campaigns.

The result was underwhelming. While research and development funding increased in 2020, it did not represent the funding shift toward future capabilities that observers expected. Despite its massive size, the budget was insufficient to address the department's long-term challenges. Key emerging technologies identified by the department, such as hypersonic weapons, artificial intelligence, quantum technologies, and directed-energy weapons, still lacked a clear and sustained commitment to investment. It was clear that the Department of Defense did not make the difficult tradeoffs necessary to fund long-term modernization. The Congressional Budget Office further estimated that the cost of implementing the plans, which were in any case insufficient to meet the defense strategy's requirements, would be about 2 percent higher than department estimates.

Has anything changed this year? The Department of Defense released its FY2021 budget request on Feb. 10, outlining the department's spending priorities for the upcoming fiscal year. As is mentioned every year at its release, the proposed budget is an aspirational document; the actual budget must be approved by Congress. Nevertheless, it is incredibly useful as a strategic document, in part because all programs are justified in descriptions of varying lengths in what are called budget justification books. After analyzing the 10,000-plus programs in the research, development, testing and evaluation budget justification books using a new machine learning model, it is clear that the newest budget's tepid funding for emerging defense technologies fails to shift the department's strategic direction toward long-range strategic competition with a peer or near-peer adversary.

Regardless of your beliefs about the optimal size of the defense budget, or whether the 2018 National Defense Strategy's focus on peer and near-peer conflict is justified, the Department of Defense's two most recent budget requests have been insufficient to fully implement the administration's stated modernization strategy.

To be clear, this is not a call to increase the Department of Defense's budget over its already-gargantuan $705.4 billion FY2021 request. Nor is this the only problem with the federal budget proposal, which included cuts to social safety net programs, programs that are needed now more than ever to mitigate the effects of COVID-19. Instead, my goal is to demonstrate how the budget fails to fund its intended strategy despite its overall excess. Pentagon officials described the budget as funding an "irreversible implementation of the National Defense Strategy," but that is only true in its funding for nuclear capabilities and, to some degree, for hypersonic weapons. Otherwise, it largely neglects emerging technologies.

A Budget for the Last War

The 2018 National Defense Strategy makes clear why emerging technologies are critical to the U.S. military's long-term modernization and ability to compete with peer or near-peer adversaries. The document notes that advanced computing, big data analytics, artificial intelligence, autonomy, robotics, directed energy, hypersonics, and biotechnology are necessary to ensure we will be able to fight and win the wars of the future. The Government Accountability Office included similar technologies (artificial intelligence, quantum information science, autonomous systems, hypersonic weapons, biotechnology, and more) in a 2018 report on long-range emerging threats identified by federal agencies.

In the Department of Defense's budget press release, the department argued that despite overall flat funding levels, it made numerous hard choices to ensure that resources are directed toward the Department's highest priorities, particularly in technologies now termed "advanced capabilities enablers." These technologies include hypersonic weapons, microelectronics/5G, autonomous systems, and artificial intelligence. Elaine McCusker, the acting undersecretary of defense (comptroller) and chief financial officer, argued, "Any place where we have increases, so for hypersonics or AI, for cyber, for nuclear, that's where the money went. This budget is focused on the high-end fight." (McCusker's nomination for Department of Defense comptroller was withdrawn by the White House in early March because of her concerns over the 2019 suspension of defense funding for Ukraine.) Deputy Defense Secretary David L. Norquist noted that the budget request had the largest research and development request ever.

Despite this, the FY2021 budget is not a significant shift from the FY2020 budget in developing advanced capabilities for competition against a peer or near-peer. I analyzed data from the Army, Navy, Air Force, Missile Defense Agency, Office of the Secretary of Defense, and Defense Advanced Research Projects Agency budget justification books, and the department has still failed to realign its funding priorities toward the long-range emerging technologies that strategic documents suggest should be the highest priority. Aside from hypersonic weapons, which received already-expected funding request increases, most other types of emerging technologies remained mostly stagnant or actually declined from FY2020 request levels.

James Miller and Michael O'Hanlon argued in their analysis of the FY2020 budget that "desires for a larger force have been tacked onto more crucial matters of military innovation" and that the department should instead prioritize quality over quantity. This criticism could be extended to the FY2021 budget, along with the indictment that military innovation itself wasn't fully prioritized either.

Breaking It Down

In this brief review, I attempt to outline funding changes for emerging technologies between the FY2020 and FY2021 budgets based on a machine learning text-classification model, while noting cornerstone programs in each category.

Let's start with the top-level numbers from the R1 document, which divides the budget into seven budget activities. Basic and applied defense research account for 2 percent and 5 percent of the overall FY2021 research and development budget, compared with 38 percent for operational systems development and 27 percent for advanced component development and prototypes. The latter two categories have grown since 2019, in both real terms and as a percentage of the budget, by 2 percent and 5 percent, respectively. These categories were both the largest overall budget activities and also received the largest percentage increases.

Federally funded basic research is critical because it helps develop the capacity for the next generation of applied research. Numerous studies have demonstrated the benefit of federally funded basic science research, with some estimates suggesting two-thirds of the technologies with the most far-reaching impact over the last 50 years [stemmed] from federally funded R&D at national laboratories and research universities. These technologies include the internet, robotics, and foundational subsystems for space-launch vehicles, among others. In fact, a 2019 study for the National Bureau of Economic Research's working paper series found evidence that publicly funded investments in defense research had a "crowding in" effect, significantly increasing private-sector research and development from the recipient industry.

Concerns over the levels of basic research funding are not new. A 2015 report by the MIT Committee to Evaluate the Innovation Deficit argued that declining federal basic research could severely undermine long-term U.S. competitiveness, particularly for research areas that lack obvious real-world applications. This is particularly true given that the share of industry-funded basic research has collapsed, with the authors arguing that U.S. companies are left dependent on federally funded, university-based basic research to fuel innovation. This shift means that federal support of basic research is even more tightly coupled to national economic competitiveness. A 2017 analysis of America's artificial intelligence strategy recommended that the government [ensure] adequate funding for scientific research, averting the risks of an innovation deficit that could severely undermine long-term competitiveness. Data from the Organization for Economic Cooperation and Development shows that Chinese government research and development spending has already surpassed that of the United States, while Chinese business research and development expenditures are rapidly approaching U.S. levels.

While we may debate the precise levels of basic and applied research and development funding, there is little debate about its ability to produce spillover benefits for the rest of the economy and the public at large. In that sense, the slight declines in basic and applied research funding in both real terms and as a percentage of overall research and development funding hurt the United States in its long-term competition with other major powers.

Clean, Code, Classify

The Defense Department's budget justification books contain thousands of pages of descriptions spread across more than 20 separate PDFs. Each program description explains the progress made each year and justifies the funding request increase or decrease. There is a wealth of information about Department of Defense strategy in these documents, but it is difficult to assess departmental claims about funding for specific technologies or to analyze multiyear trends while the data is in PDF form.

To understand how funding changed for each type of emerging technology, I scraped and cleaned this information from the budget documents, then classified each research and development program into categories of emerging technologies (including artificial intelligence, biotechnologies, directed-energy weapons, hypersonic weapons and vehicles, quantum technologies, autonomous and swarming systems, microelectronics/5G, and non-emerging technology programs). I designed a random forest machine learning model to sort the remaining programs into these categories. This is an algorithm that uses hundreds of decision trees to identify which variables or words in a program description, in this case are most important for classifying data into groups.

There are many kinds of machine learning models that can be used to classify data. To choose one that would most effectively classify the program data, I started by hand-coding 1,200 programs to train three different kinds of models (random forest, k-nearest neighbors, and support vector machine), as well as to build a model testing dataset. Each model would look at the term frequency-inverse document frequency (essentially, how often given words appear, adjusted for how rarely they are used) of all the words in a program's description to decide how to classify each program. For example, for the Army's Long Range Hypersonic Weapon program, the model might have seen the words hypersonic, glide, and thermal in the description and guessed that it was most likely a hypersonic program. The random forest model slightly outperformed the support vector machine model and significantly outperformed the k-nearest neighbors model, as well as a simpler method that just looked for specific keywords in a program description.
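For readers who want to see the shape of such a pipeline, here is a minimal sketch of TF-IDF features feeding a random forest, a support vector machine and a k-nearest-neighbors classifier, compared on a held-out split. It is not the author's actual code, and the handful of labelled program descriptions are invented placeholders for the roughly 1,200 hand-coded programs.

    # Sketch only: TF-IDF text features plus three candidate classifiers.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    descriptions = [
        "hypersonic glide body thermal protection and booster integration",
        "scramjet propulsion flight test for an air-breathing hypersonic missile",
        "neural network algorithms for automated target recognition",
        "machine learning models for predictive maintenance analytics",
        "high energy laser beam control and thermal blooming mitigation",
        "directed energy weapon prototype for short range air defense",
    ]
    labels = ["hypersonics", "hypersonics", "ai", "ai",
              "directed_energy", "directed_energy"]

    X_train, X_test, y_train, y_test = train_test_split(
        descriptions, labels, test_size=0.5, random_state=0, stratify=labels)

    for name, model in [("random forest", RandomForestClassifier(random_state=0)),
                        ("support vector machine", SVC()),
                        ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=1))]:
        pipeline = make_pipeline(TfidfVectorizer(), model)
        pipeline.fit(X_train, y_train)
        print(name, accuracy_score(y_test, pipeline.predict(X_test)))

On real budget text, held-out accuracy of this kind, rather than toy output, is what would drive the choice between the candidate models.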

Having chosen a machine learning model to use, I set it to work classifying the remaining 10,000 programs. The final result is a large dataset of programs mentioned in the 2020 and 2021 research and development budgets, including their full descriptions, predicted category, and funding amount for the year of interest. This effort, however, should be viewed as only a rough estimate of how much money each emerging technology is getting. Even a fully hand-coded classification that didn't rely on a machine learning model would be challenged by sometimes-vague program descriptions and programs that fund multiple types of emerging technologies. For example, the Applied Research for the Advancement of S&T Priorities program funds projects across multiple categories, including electronic warfare, human systems, autonomy, and cyber, as well as advanced materials, biomedical, weapons, quantum, and command, control, communications, computers and intelligence. The model took a guess that the program was focused on quantum technologies, but that is clearly a difficult program to classify into a single category.

With the programs sorted and classified by the model, the variation in funding between types of emerging technologies became clear.

Hypersonic Boost-Glide Weapons Win Big

Both the official Department of Defense budget press release and the press briefing singled out hypersonic research and development investment. As one of the department's advanced capabilities enablers, hypersonic weapons, defenses, and related research received $3.2 billion in the FY2021 budget, which is nearly as much as the other three priorities mentioned in the press release combined (microelectronics/5G, autonomy, and artificial intelligence).

In the 2021 budget documents, there were 96 programs (compared with 60 in the 2020 budget) that the model classified as related to hypersonics based on their program descriptions, combining for $3.36 billion, an increase from 2020's $2.72 billion. This increase was almost solely due to increases in three specific programs, while funding for air-breathing hypersonic weapons and combined-cycle engine development was stagnant.

The three programs driving up the hypersonic budget are the Army's Long-Range Hypersonic Weapon, the Navy's Conventional Prompt Strike, and the Air Force's Air-Launched Rapid Response Weapon program. The Long-Range Hypersonic Weapon received a $620.42 million funding increase to field an experimental prototype with residual combat capability. The Air-Launched Rapid Response Weapon's $180.66 million increase was made possible by the removal of funding for the Air Force's Hypersonic Conventional Strike Weapon in FY2021, which saved $290 million compared with FY2020. This was an interesting decision worthy of further analysis, as the two competing programs seemed to differ in their ambition and technical risk; the Air-Launched Rapid Response Weapon program was designed for pushing the art of the possible, while the conventional strike weapon was focused on integrating already mature technologies. Conventional Prompt Strike received the largest 2021 funding request at $1 billion, an increase of $415.26 million over the 2020 request. Similar to the Army program, the Navy's Conventional Prompt Strike increase was fueled by procurement of the Common Hypersonic Glide Body that the two programs share (along with a Navy-designed 34.5-inch booster), as well as testing and integration on guided missile submarines.

To be sure, the increase in hypersonic funding in the 2021 budget request is important for long-range modernization. However, some of the increases were already planned, and the current funding increase largely neglects air-breathing hypersonic weapons. For example, the Navy's Conventional Prompt Strike 2021 budget request was just $20,000 more than anticipated in the 2020 budget. Programs that explicitly mention scramjet research declined from $156.2 million to $139.9 million.

In contrast to hypersonics, research and development funding for many other emerging technologies was stagnant or declined in the 2021 budget. Non-hypersonic emerging technologies increased from $7.89 billion in 2020 to only $7.97 billion in 2021, mostly due to increases in artificial intelligence-related programs.

Biotechnology, Quantum, Lasers Require Increased Funding

Source: Graphic by the author.

Directed-energy weapons funding fell slightly in the 2021 budget to $1.66 billion, from $1.74 billion in 2020. Notably, the Army is procuring three directed-energy prototypes to support the maneuver short-range air defense mission for $246 million. Several other programs are also noteworthy. The High Energy Power Scaling program ($105.41 million) will finalize designs and integrate systems into a prototype 300 kW-class high-energy laser, focusing on managing thermal blooming (a distortion caused by the laser heating the atmosphere through which it travels) for 300 and eventually 500 kW-class lasers. Second, the Air Force's Directed Energy/Electronic Combat program ($89.03 million) tests air-based directed-energy weapons for use in contested environments.

Quantum technologies funding increased by $109 million, to $367 million, in 2021. In general, quantum-related programs are more exploratory, focused on basic and applied research rather than fielding prototypes. They are also typically funded by the Office of the Secretary of Defense or the Defense Advanced Research Projects Agency rather than by the individual services, or they are bundled into larger programs that distribute funding to many emerging technologies. For example, several of the top 2021 programs that the model classified as quantum research and development based on their descriptions include the Office of the Secretary of Defense's Applied Research for the Advancement of S&T Priorities ($54.52 million) and the Defense Advanced Research Projects Agency's Functional Materials and Devices ($28.25 million). The increase in Department of Defense funding for quantum technologies is laudable, but given the potential disruptive ability of quantum technologies, the United States should further increase its federal funding for quantum research and development, guarantee stable long-term funding, and incentivize young researchers to enter the field. The FY2021 budget's funding increase is clearly a positive step, but quantum technologies' revolutionary potential demands more funding than the category currently receives.

Biotechnologies increased from $969 million in 2020 to $1.05 billion in 2021 (my guess is that the model overestimated the funding for emerging biotech programs by including research programs related to soldier health and medicine that involve established technologies). Analyses of defense biotechnology typically focus on the defense applications of human performance enhancement, synthetic biology, and gene-editing technology research. Previous analyses, including one from 2018 in War on the Rocks, have lamented the lack of a comprehensive strategy for biotechnology innovation, as well as funding uncertainties. The Center for Strategic and International Studies argued, "Biotechnology remains an area of investment with respect to countering weapons of mass destruction but otherwise does not seem to be a significant priority in the defense budget." These concerns appear to have been well-founded. Funding has stagnated despite the enormous potential offered by biotechnologies like nanotubes, spider silk, engineered probiotics, and bio-based sensors, many of which could be critical enablers as components of other emerging technologies. For example, this estimate includes the interesting Persistent Aquatic Living Sensors program ($25.7 million), which attempts to use living organisms to detect submarines and unmanned underwater vehicles in littoral waters.

Programs classified as autonomous or swarming research and development declined from $3.5 billion to $2.8 billion in 2021. This includes the Army Robotic Combat Vehicle program (roughly stagnant at $86.22 million, from $89.18 million in 2020). The Skyborg autonomous attritable (a low-cost, unmanned system that doesn't have to be recovered after launch) drone program requested $40.9 million and also falls into the autonomy category, as do the Air Force's Golden Horde ($72.09 million), the Office of the Secretary of Defense's manned-unmanned teaming Avatar program ($71.4 million), and the Navy's Low-Cost UAV Swarming Technology (LOCUST) program ($34.79 million).

The programs sorted by the model into the artificial intelligence category increased from $1.36 billion to $1.98 billion in 2021. This increase is driven by an admirable proliferation of smaller programs 161 programs under $50 million, compared with 119 in 2020. However, as the Department of Defense reported that artificial intelligence research and development received only $841 million in the 2021 budget request, it is clear that the random forest model is picking up some false positives for artificial intelligence funding.

Some critics argue that federal funding risks duplicating artificial intelligence efforts in the commercial sector. There are several problems with this argument, however. First, a 2017 report on U.S. artificial intelligence strategy argued, "There also tends to be shortfalls in the funding available to research and start-ups for which the potential for commercialization is limited or unlikely to be lucrative in the foreseeable future." Second, there are a number of technological, process, personnel, and cultural challenges in the transition of artificial intelligence technologies from commercial development to defense applications. Finally, the Trump administration's anti-immigration policies hamstring U.S. technological and industrial base development, particularly in artificial intelligence, as immigrants are responsible for one-quarter of startups in the United States.

The Neglected Long Term

While there are individual examples of important programs that advance the U.S. military's long-term competitiveness, particularly for hypersonic weapons, the overall 2021 budget fails to shift its research and development funding toward emerging technologies and basic research.

While recognizing that the overall budget was essentially flat, it should not come as a surprise that research and development funding for emerging technologies was mostly flat as well. But the United States already spends far more on defense than any other country, and even with a flat budget, the allocation of funding for emerging technologies does not reflect an increased focus on long-term planning for high-end competition compared with the 2020 budget. Specifically, the United States should increase its funding for emerging technologies other than hypersonics directed energy, biotech, and quantum information sciences, as well as in basic scientific research even if it requires tradeoffs in other areas.

The problem isn't necessarily the year-to-year changes between the FY2020 and FY2021 budgets. Instead, the problem is that proposed FY2021 funding for emerging technologies continues the previous year's underwhelming support for research and development relative to the Department of Defense's strategic goals. This is the critical point for my assessment of the budget: despite multiple opportunities to align funding with strategy, emerging technologies and basic research have not received the scale of investment that the National Defense Strategy argues they deserve.

Chad Peltier is a senior defense analyst at Janes, where he specializes in emerging defense technologies, Chinese military modernization, and data science. This article does not reflect the views of his employer.

Image: U.S. Army (Photo by Monica K. Guthrie)

Go here to read the rest:
Put Your Money Where Your Strategy Is: Using Machine Learning to Analyze the Pentagon Budget - War on the Rocks

What Is The Difference Between Artificial Intelligence And …

Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably.

They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference.

Both terms crop up very frequently when the topic is Big Data, analytics, and the broader waves of technological change which are sweeping through our world.

In short, the best answer is that:

Artificial Intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider smart.

And,

Machine Learning is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.

Early Days

Artificial Intelligence has been around for a long time; the Greek myths contain stories of mechanical men designed to mimic our own behavior. Very early European computers were conceived as logical machines, and by reproducing capabilities such as basic arithmetic and memory, engineers saw their job, fundamentally, as attempting to create mechanical brains.

As technology, and, importantly, our understanding of how our minds work, has progressed, our concept of what constitutes AI has changed. Rather than increasingly complex calculations, work in the field of AI concentrated on mimicking human decision making processes and carrying out tasks in ever more human ways.

Artificial Intelligences, devices designed to act intelligently, are often classified into one of two fundamental groups: applied or general. Applied AI is far more common: systems designed to intelligently trade stocks and shares, or to maneuver an autonomous vehicle, would fall into this category.


Generalized AIs, systems or devices which can in theory handle any task, are less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Often referred to as a subset of AI, it's really more accurate to think of it as the current state of the art.

The Rise of Machine Learning

Two important breakthroughs led to the emergence of Machine Learning as the vehicle which is driving AI development forward with the speed it currently has.

One of these was the realization, credited to Arthur Samuel in 1959, that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.

The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.

Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.

Neural Networks

The development of neural networks has been key to teaching computers to think and understand the world in the way we do, while retaining the innate advantages they hold over us such as speed, accuracy and lack of bias.

A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain.

Essentially it works on a system of probability: based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables learning; by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
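As a minimal, hypothetical sketch of that idea, the toy network below is shown a handful of labelled points, reports a probability for each class, and the training loop acts as the feedback that tells it whether its guesses were right or wrong. The two-feature points are invented stand-ins for real inputs such as image features.

    # Toy example only: a tiny neural network trained on invented data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
    y = [0, 0, 1, 1]  # two classes the network must learn to separate

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X, y)  # each pass compares predictions with labels and adjusts weights

    print(net.predict_proba([[0.85, 0.75]]))  # degree of certainty for each class
    print(net.predict([[0.85, 0.75]]))        # expected: [1] on this toy data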

Machine Learning applications can read text and work out whether the person who wrote it is making a complaint or offering congratulations. They can also listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music expressing the same themes, or which they know is likely to be appreciated by the admirers of the original piece.
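A sketch of the complaint-versus-congratulations example might look like the following; the handful of training sentences are invented, and a real system would learn from far more text.

    # Toy text classifier: complaint vs. congratulations, with invented examples.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = [
        "congratulations on the award, wonderful work",
        "well done, this is fantastic news",
        "i am very unhappy with this broken product",
        "terrible service, i want a refund",
    ]
    labels = ["congratulations", "congratulations", "complaint", "complaint"]

    model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
    print(model.predict(["thrilled with the fantastic support, well done"]))  # congratulations
    print(model.predict(["this is broken and the service was terrible"]))     # complaint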

These are all possibilities offered by systems based around ML and neural networks. Thanks in no small part to science fiction, the idea has also emerged that we should be able to communicate and interact with electronic devices and digital information as naturally as we would with another human being. To this end, another field of AI, Natural Language Processing (NLP), has become a source of hugely exciting innovation in recent years, and one which is heavily reliant on ML.

NLP applications attempt to understand natural human communication, either written or spoken, and communicate in return with us using similar, natural language. ML is used here to help machines understand the vast nuances in human language, and to learn to respond in a way that a particular audience is likely to comprehend.

A Case Of Branding?

Artificial Intelligence, and in particular today ML, certainly has a lot to offer. With its promise of automating mundane tasks as well as offering creative insight, industries in every sector from banking to healthcare and manufacturing are reaping the benefits. So, it's important to bear in mind that AI and ML are also something else: they are products which are being sold, consistently and lucratively.

Machine Learning has certainly been seized as an opportunity by marketers. After AI has been around for so long, it's possible that it started to be seen as something that's in some way old hat, even before its potential has ever truly been achieved. There have been a few false starts along the road to the AI revolution, and the term Machine Learning certainly gives marketers something new, shiny and, importantly, firmly grounded in the here and now to offer.

The fact that we will eventually develop human-like AI has often been treated as something of an inevitability by technologists. Certainly, today we are closer than ever, and we are moving towards that goal with increasing speed. Much of the exciting progress that we have seen in recent years is thanks to the fundamental changes in how we envisage AI working, which have been brought about by ML. I hope this piece has helped a few people understand the distinction between AI and ML. In another piece on this subject I go deeper, literally, as I explain the theories behind another trending buzzword: Deep Learning.

Check out these links for more information on artificial intelligence and many practical AI case examples.

Read the rest here:
What Is The Difference Between Artificial Intelligence And ...

What is Machine Learning? A definition – Expert System

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and adjust their actions accordingly.

Machine learning algorithms are often categorized as supervised or unsupervised.

Machine learning enables analysis of massive quantities of data. While it generally delivers faster, more accurate results in order to identify profitable opportunities or dangerous risks, it may also require additional time and resources to train it properly. Combining machine learning with AI and cognitive technologies can make it even more effective in processing large volumes of information.

Go here to see the original:
What is Machine Learning? A definition - Expert System