QA Increasingly Benefits from AI and Machine Learning – RTInsights

While the human element will still exist, incorporating AI/ML will improve the QA testing within an organization.

The needle in quality assurance (QA) testing is moving in the direction of increased use of artificial intelligence (AI) and machine learning (ML). However, the integration of AI/ML in the testing process is not across the board: adoption of advanced technologies still tends to be skewed towards large companies.

Some companies have held back, waiting to see whether AI would live up to its initial hype as a disruptor in various industries. However, the growing consensus is that AI benefits the organizations that have implemented it and improves efficiencies.

Small- and mid-sized companies could benefit from testing software that uses AI/ML to meet some of the challenges faced by QA teams. While AI and ML are not substitutes for human testing, they can supplement the testing methodology.


As development is completed and moves to the testing stage of the system development life cycle, QA teams must prove that end-users can use the application as intended and without issue. Part of end-to-end (E2E) testing includes identifying the following:

E2E testing plans should incorporate all of these to improve deployment success. Even while facing time constraints and ever-changing requirements, testing cycles are increasingly quick and short. Yet they still demand high quality in order to meet end-user needs.

Let's look at some of the specific ways AI and ML can streamline the testing process while also making it more robust.

AI in software testing reduces the time spent on manual testing. Teams are then able to apply their efforts to more complex tasks that require human interpretation.

Developers and QA staff will need to apply less effort in designing, prioritizing, writing, and maintaining E2E tests. This will expedite delivery timelines and free up resources to work on developing new products rather than testing a new release.

With more rapid deployment, there is an increased need for regression testing, to the point where humans cannot realistically keep up. Companies can use AI for some of the more tedious regression testing tasks, where ML can be used to generate test scripts.

In the example of a UI change, AI/ML can be used to scan for color, shape, size, or overlap. Where these would otherwise be manual tests, AI can be used for validation of the changes that a QA tester may miss.

When introducing a change, how many tests are needed to pass QA and validate that there are no issues? ML can determine how many tests to run based on code changes and the outcomes of past changes and tests.

ML can also select the appropriate tests to run by identifying the particular subset of scenarios affected and the likelihood of failure. This creates more targeted testing.
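
How might such a ranking work in practice? Here is a minimal sketch, with invented feature names and synthetic history standing in for real CI data; it illustrates the general technique rather than any particular vendor's tool:

```python
# Minimal sketch of ML-based test selection (illustrative data only).
# Each row describes one (code change, test) pair from past CI runs;
# the label records whether that test failed for that change.
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features: lines changed, files touched that the test
# covers, and days since the test last failed.
X_history = [
    [120, 4, 2], [5, 0, 90], [300, 7, 1], [12, 1, 30],
    [80, 3, 5], [2, 0, 365], [150, 6, 3], [40, 2, 60],
]
y_history = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = test failed on that change

model = GradientBoostingClassifier().fit(X_history, y_history)

# For a new change, score every candidate test and run the riskiest first.
candidate_tests = {
    "test_checkout": [200, 5, 4],
    "test_login":    [200, 0, 120],
    "test_reports":  [200, 2, 45],
}
ranked = sorted(candidate_tests,
                key=lambda t: model.predict_proba([candidate_tests[t]])[0][1],
                reverse=True)
print(ranked)  # tests ordered by predicted likelihood of failure
```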

With changes that may impact a large number of fields, AI/ML can automate the validation of these fields. For example, a scenario might be "Every field that is a percentage should display two decimals." Rather than manually checking each field, this can be automated.
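
As a concrete illustration, a rule like the percentage example above reduces to a single pattern check applied across every rendered field. The field names and values below are hypothetical:

```python
import re

# Rule: every field flagged as a percentage should display two decimals.
PERCENT_FORMAT = re.compile(r"^\d+\.\d{2}%$")

def validate_percentage_fields(rendered_fields):
    """Return the names of percentage fields that break the format rule."""
    return [name for name, value in rendered_fields.items()
            if not PERCENT_FORMAT.match(value)]

# Hypothetical rendered values scraped from a page under test.
fields = {"discount": "12.50%", "tax_rate": "7.1%", "margin": "33.00%"}
print(validate_percentage_fields(fields))  # ['tax_rate']
```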

ML can adapt to minor code changes so that the code can "self-correct" or "self-heal" over time. This is something that could otherwise take hours for a human to fix and re-test.
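
One common way self-healing is implemented is as a fallback search: when a test's stored locator no longer matches, the framework picks the on-page element whose attributes best resemble those recorded when the test last passed. A toy version of that matching step, with invented element data, might look like this:

```python
def attribute_overlap(recorded, candidate):
    """Score a candidate element by how many recorded attributes it preserves."""
    return sum(1 for k, v in recorded.items() if candidate.get(k) == v)

def heal_locator(recorded_attrs, elements_on_page):
    """Pick the element that best matches the attributes of the old target."""
    return max(elements_on_page,
               key=lambda el: attribute_overlap(recorded_attrs, el))

# The button's id changed in the new build, but its text and class survived.
recorded = {"id": "submit-btn", "text": "Submit", "class": "btn-primary"}
page = [
    {"id": "cancel-btn", "text": "Cancel", "class": "btn-secondary"},
    {"id": "submit-button-v2", "text": "Submit", "class": "btn-primary"},
]
print(heal_locator(recorded, page))  # matches the renamed submit button
```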

While QA testers are good at finding and addressing complex problems and proving out test scenarios, they are still human. Errors can occur in testing, especially from the burnout of completing tedious, repetitive work. AI is not affected by the number of repeated tests and therefore yields more accurate and reliable results.

Software development teams are also ultimately composed of people, and therefore personalities. Friction can occur between developers and QA analysts, particularly under time constraints or over the outcomes of testing. AI/ML can remove the human interactions that may cause holdups in the testing process by providing objective results.

Often when a failure occurs during testing, the QA tester or developer will need to determine the root cause. This can include parsing the code to determine the exact point of failure and resolving it from there.

Instead of going through thousands of lines of code, AI can sort through the log files, scan the code, and detect errors within seconds. This saves hours and allows the developer to dive into the specific part of the code to fix the problem.
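
At its simplest, that triage is pattern matching over the logs to surface the earliest failure with its surrounding context; production tools layer ML-based ranking on top of this. A minimal sketch, with an invented log format:

```python
import re

ERROR_PATTERN = re.compile(r"(ERROR|FATAL|Exception|Traceback)")

def first_failure(log_lines, context=3):
    """Return the earliest error line plus a few lines of surrounding context."""
    for i, line in enumerate(log_lines):
        if ERROR_PATTERN.search(line):
            return log_lines[max(0, i - context): i + context + 1]
    return []

log = [
    "12:00:01 INFO starting batch",
    "12:00:02 INFO loaded 4,200 records",
    "12:00:03 ERROR NullReferenceException in PricingService.calc",
    "12:00:03 INFO retrying...",
]
print("\n".join(first_failure(log)))
```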

While the human element will still exist, introducing testing software that incorporates AI/ML will improve QA testing within an organization. Equally as important as knowing when to use AI and ML is knowing when not to use them. Specific scenario testing, or applying human logic in a scenario to verify the outcome, are not well suited to AI and ML.

But for understanding user behavior, gathering data analytics will help build the appropriate test cases. This information identifies the failures that are most likely to occur, which makes for better testing models.

AI/ML can also identify patterns over time, build test environments, and stabilize test scripts. All of these allow the organization to spend more time developing new products and less time testing.


Injecting Machine Learning And Bayesian Optimization Into HPC – The Next Platform

No matter what kind of traditional HPC simulation and modeling system you have, no matter what kind of fancy new machine learning AI system you have, IBM has an appliance that it wants to sell you to help make these systems work better and work better together if you are mixing HPC and AI.

It is called the Bayesian Optimization Accelerator, and it is a homegrown statistical analytics stack that runs on one or more of Big Blue's "Witherspoon" Power AC922 hybrid CPU-GPU supercomputer nodes, the same ones used in the Summit supercomputer at Oak Ridge National Laboratory and the Sierra supercomputer at Lawrence Livermore National Laboratory.

IBM has been touting the ideas behind the BOA system for more than two years now, and it is finally being commercialized after some initial testing in specific domains that illustrate the principles that can be modified and applied to all kinds of simulation and modeling workloads. Dave Turek, now retired from IBM but the longtime executive steering the company's HPC efforts, walked us through the theory behind the BOA software stack, which presumably came out of IBM Research, way back at SC18 two years ago. As far as we can tell, this is still the best English-language description of what BOA does and how it does it. Turek gave us an update on BOA at our HPC Day event ahead of SC19 last year, focusing specifically on how Bayesian statistical principles can be applied to ensembles of simulations in classical HPC applications to do better work and get to results faster.

In the HPC world, we tend to try to throw more hardware at the problem and then figure out how to scale up frameworks to share memory and scale out applications across the more capacious hardware, but this is different. With BOA, the ideas can be applied to any HPC system, regardless of vendor or architecture. This is not only transformational for IBM in that it feels more like a service encapsulated in an appliance and will have an annuity-like revenue stream across many thousands of potential HPC installations. It is also important for IBM in that the next-generation exascale machines in the United States, where IBM won the big deals for Summit and Sierra, are not based on the combination of IBM Power processors, Nvidia GPU accelerators, and Mellanox InfiniBand interconnects. The follow-on Frontier and El Capitan systems at these labs are instead using AMD CPU and GPU compute engines and a mix of Infinity Fabric for in-node connectivity and Cray Slingshot Ethernet (now part of Hewlett Packard Enterprise) for lashing nodes together. Even these machines might benefit from BOA, which gives Big Blue some play across the HPC spectrum, much as its Spectrum Scale (formerly GPFS) parallel file system is often used in systems where IBM is not the primary contractor. BOA is even more open in this sense, although the underlying software stack used in the BOA appliance is not open source any more than GPFS is. This is very unlikely to change, even with IBM acquiring Red Hat last year and becoming the largest vendor of support contracts for tested and integrated open source software stacks in the world.

So what is this thing that IBM is selling? As the name suggests, it is based on Bayesian optimization, a field of mathematics created by Jonas Mockus in the 1970s that has been applied to all kinds of algorithms, including various kinds of reinforcement learning systems in the artificial intelligence field. It is important to note that Bayesian optimization does not itself involve machine learning based on neural networks; what IBM is in fact doing is using Bayesian optimization and machine learning together to drive ensembles of HPC simulations and models. This is the clever bit.

With Bayesian optimization, you know there is a function in the world, and it is in a black box (mathematically speaking, not literally). You have a set of inputs and you see how it behaves through its outputs. The optimization part is to build a database of inputs and outputs, statistically infer something about what is going on between the two, and then make a mathematical guess about what a better set of inputs might be to get a desired output. The trick is to use machine learning training to watch what a database of inputs yields for outputs, and to use the results to infer what the next set of inputs should be. In the case of HPC simulations, this means you can figure out what should be simulated instead of trying to simulate all possible scenarios, or at least a very large number of them. BOA doesn't change the simulation code one bit, and that is important. It is simply given a sense of the desired goal of the simulation (that is the tricky part, which requires the domain expertise that IBM Research can supply), and it watches the inputs and outputs of simulations and offers suggested inputs.
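
In code, that loop is compact. The sketch below uses the open-source scikit-optimize library, not IBM's proprietary BOA stack, and a cheap analytic function stands in for the expensive simulation; the structure (propose inputs, run the black box, update the surrogate, propose again) is the same:

```python
# Bayesian optimization of a black-box "simulation" with scikit-optimize.
# Illustrative only: a cheap analytic function stands in for a costly
# HPC simulation run, and the input bounds are invented.
from skopt import gp_minimize

def run_simulation(params):
    """Pretend each call launches an expensive simulation and returns its loss."""
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

result = gp_minimize(
    run_simulation,                        # the black box being optimized
    dimensions=[(0.0, 1.0), (0.0, 1.0)],   # bounds on the simulation inputs
    n_calls=25,                            # total simulation budget
    random_state=42,
)
print(result.x, result.fun)  # best inputs found and their loss
```

Each call to the objective is one "evaluation" in the sense Porter uses below, which is why cutting the number of calls cuts the compute bill directly.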

The net effect of BOA is that, over time, you need less computing to run an HPC ensemble, and you can also converge on the answer in less time. Or, more of that computing can be dedicated to driving larger or more fine-grained simulations because the number of runs in an ensemble is a lot lower. We all know that time is fluid money, and that hardware is frozen money depreciated one little trickle at a time through use; add them together and there is a lot of money that can potentially be saved.

Chris Porter, offering manager for HPC cloud for Power Systems at IBM, walked us through how BOA is being commercialized and some of the data from the early use cases where BOA was deployed.

One of the early use cases was at the Texas Advanced Computing Center at the University of Texas at Austin, where Mary Wheeler, a world-renowned expert in numerical methods for partial differential equations as they apply to oil and gas reservoir models, used the BOA appliance in some simulations. To be specific, Wheeler's reservoir model is called the Integrated Parallel Accurate Reservoir Simulator, or IPARS, and it has a gradient descent/ascent model built into it. Using the standard technique for maximizing oil extraction from a reservoir with the model, it would take on the order of 200 evaluations of the model to get what Porter characterized as a good result. But by injecting BOA into the flow of simulations, they could get the same result with only 73 evaluations, a 63.5 percent reduction in the number of evaluations performed.

IBM's own Power10 design team also used BOA in its electronic design automation (EDA) workflow, specifically to check the signal integrity of the design. To do so using the raw EDA software took over 5,600 simulations, and IBM did all of that work as it normally would. But then IBM added BOA to the stack, redid all of the work, and got to the same level of accuracy in analyzing the signal integrity of the Power10 chip's traces with only 140 simulations. That is a 97.5 percent reduction in computing needed, or a factor of 40X speedup if you want to look at it that way. (Porter warns that not all simulations will see this kind of huge bump.)

In a third use case, a petroleum company that creates industrial lubricants, which Porter could not name, was creating a lubricant that had three components. There are myriad different proportions to mix them in to get a desired viscosity and slipperiness, and the important factor is that one of these components was very expensive while the other two were not. The task was to maximize the performance of the lubricant while minimizing the amount of the expensive component, and the company ran the simulation without and then with the BOA appliance plugged in. Here's the fun bit: BOA found a totally unusual configuration that the company's scientists would never have thought of, found the right mix with four orders of magnitude more certainty than prior ensemble simulations, and did one-third as many simulations to get to the result.

These are dramatic speedups, and they demonstrate the principle that changing algorithms and methods is as important as changing the hardware that runs older algorithms and methods.

IBM is being a bit secretive about what is in the BOA software stack, but it is using PyTorch and TensorFlow as machine learning frameworks in different stages and GP Pro for sparse Gaussian process analysis, all of which have been tuned to run across the IBM Power9 processors and Nvidia V100 GPU accelerators in a hybrid (and memory-coherent) fashion. The BOA stack could, in theory, run on any system with any CPU and any GPU, but it really is tuned up for the Power AC922 hardware.

At the moment, IBM is selling two configurations of the BOA appliance. One has two V100 GPU accelerators, each with 16 GB of HBM2 memory, and two Power9 processors with a total of 40 cores running at a base 2 GHz and a turbo boost to 2.87 GHz, plus 256 GB of DDR4 memory. The second configuration has a pair of Power9 chips with a total of 44 cores running at a base 1.9 GHz and a turbo boost to 3.1 GHz with 1 TB of memory, plus four V100 GPU accelerators with 16 GB of HBM2 memory each.

IBM is not providing pricing for these two machines, or for the BOA stack on top of them, but Porter says the appliance is sold under an annual subscription that runs to hundreds of thousands of dollars per server per year. That may sound like a lot, but considering the cost of an HPC cluster, which runs from millions of dollars to hundreds of millions of dollars, this is a small percentage of the overall cost, and it can help boost the effective performance of the machine by an order of magnitude or more.

The BOA appliance became available on November 27. Initial target customers are in molecular modeling, aerospace and auto manufacturing, drug discovery, and oil and gas reservoir modeling and a bit of seismic processing, too.


Geek of the Week: Vulcan's W. Andre Perkins uses machine learning to help predict climate change – GeekWire

W. Andre Perkins. (Photo courtesy of W. Andre Perkins)

Growing up outside a town in Wisconsin, out in the woods with dial-up internet and his parents, sister, cats, horses, and chickens, W. Andre Perkins always had an interest in weather. Sometime along the way, it evolved into a desire to pursue a career in climate science.

"It built up over the years through many interactions," Perkins said. "The data and modeling aspect appealed to my interests, but year by year I was also faced with growing concern about our future projections and relative inaction. That last aspect really played into my feelings that we're going to need plenty of folks who are well-versed in climate moving forward."

Perkins, GeekWire's latest Geek of the Week, is now among the well versed, as a machine learning scientist for Vulcan Inc.'s Climate Modeling team. He started the job with the late Paul Allen's company just over a year ago, after receiving his PhD in Atmospheric Sciences at the University of Washington. Prior to that he studied at the University of Wisconsin, majoring in Atmospheric & Oceanic Science and Computer Sciences.

During his time in graduate school, Perkins was a National Science Foundation Graduate Research Fellowship recipient for his work on adapting some of the best weather simulation techniques to make better estimates of the climate over the last 2,000 years. At Vulcan he is developing cutting-edge solutions using machine learning to decrease uncertainty about future climate.

"With better information on how patterns of precipitation will change over time, we can empower governments at all levels to make plans to protect their constituents," Perkins said, adding that the team and project at Vulcan were a huge draw.

"It was a once-in-a-lifetime opportunity to focus on a climate-related problem with a crew of wildly intelligent people," he said.

Perkins said optimism is a bit hard to come by these days when it comes to where we're headed as a species on a warming planet. A lot still depends on humans and how well we can get emissions under control.

"Short-to-medium term, we'll really start to experience some of the pain we've baked in for ourselves," Perkins said. "This is exactly the reason why we need to build up our capacity for centering and protecting people, especially our most vulnerable, from the impacts of the changing climate."

But he's hopeful in the long term that tools and tech can help people overcome the seemingly insurmountable problems, though it will take radical shifts in our priorities as a society.

"This problem is an all-hands-on-deck one that can only be addressed through collective action. I am hopeful because public sentiment seems to be turning in favor of addressing climate. I am hopeful from all the young and brilliant leaders that are starting to work their way into public discourse. In the meantime, I'll continue the work alongside many others in hope of a positive ending to the climate problem."

When he's not dealing with such major issues, Perkins likes to geek out watching anime or whatever's hot in popular TV, talking about politics, or occasionally reading sci-fi/fantasy novels. He does his fair share of gaming and has been building PCs since high school (thanks in part to scavenging and eBay early on).

"Pre-quarantine, I enjoyed circuit training, weightlifting, and getting some good dance sessions in at trance/progressive shows in downtown Seattle," he said. "But lately, I've been learning a lot about home networking for entertainment and security, dog training, and building a good relationship with our new 3-year-old pooch, Tachikoma."

Learn more about this week's Geek of the Week, W. Andre Perkins:

What do you do, and why do you do it? I figure out how to use machine learning (ML) to improve our climate simulations. We are 30 years behind on the biggest issue of our generation, and how we focus our finite resources toward this problem will matter. This is my way of contributing. If we can develop methods to decrease our uncertainty about how to plan for people and the environment by even minute amounts, it could have a huge impact.

What's the single most important thing people should know about your field? Climate scientists bring together disciplines from many fields: math, physics, biology, chemistry, etc. There are many of us, approaching this problem from many different angles. We're all human. We're doing our best to provide relevant and timely information to help avert absolute disaster.

Where do you find your inspiration? In my many friends and colleagues who pour their energy into creating a better future for everyone. Some are great activists/organizers, mentors/teachers, public science communicators, or brilliant scientists. They are willing to put themselves forward in one way or another. They've cemented for me the notion that we all have some role to play, big or small, whether out in front or behind the scenes, towards handling climate change.

What's the one piece of technology you couldn't live without, and why? It's a bit of a cheat since there are a lot of associated pieces to this technology, but definitely my desktop PC. It's a one-stop shop for work/entertainment/learning/connecting with friends, which has been especially important while we've been incentivized to stay at home.

What's your workspace like, and why does it work for you? It's nothing too fancy: a desk, a computer, and a comfy chair for long sitting sessions. We actually do most of our work using cloud services to handle the volume of data and computational requirements for ML and climate simulation, which makes working from varied locations (pre-quarantine) quite easy. My office is maybe a bit empty for most tastes, but I'm slowly getting some art on my white walls. There are usually some lingering gadgets or notes from the weekend's projects. I'm not too picky with my workspace, but having a separate room where I have the option of quiet for thinking, or some music or podcasts for development work, helps with the daily flow. Before moving, we turned our 1BR apartment's living room into a shared office last spring when everything shifted, so I'm very grateful to have the space right now.

Your best tip or trick for managing everyday work and life. (Help us out, we need it.) It's an ongoing personal project to figure this out, and I'm still trying to break some of my bad habits from graduate school. What's worked for me so far is setting plans (like an exercise session or outdoor walk), which is great for providing a strict cutoff. If I'm going to work on something outside work hours, I only do it if it interests me personally. A lot of interesting engineering problems are like puzzles, so I don't fret too much if I decide to think or work on it after hours.

Mac, Windows or Linux? Windows on my home computer for my entire life (a near necessity for most games), and I usually connect to Linux workstations or VMs for research/development work. I've just recently been exploring the Windows Subsystem for Linux, since I can utilize my GPU for ML tasks from within it.

Kirk, Picard, or Janeway? Commander Erwin in Attack on Titan.

Transporter, Time Machine or Cloak of Invisibility? Practically, a transporter is probably the correct choice, but I don't think I could pass up a time machine, dangers aside. It's selfish, but I'd really like to see what happens in the future. Can we manage climate change? Do we make it off planet? Interstellar? Maybe we unlock some greater mysteries of the universe. Could be exciting.

If someone gave me $1 million to launch a startup, I would try to use that money to convince many others with money to invest much more in all aspects of the climate problem. There's no silver bullet; we'll need a whole battery of solutions.

I once waited in line for a Wii on release day. I spent 24 hours waiting in a Walmart layaway, but it was high school, so what else was I going to do?

Your role models: I have a lot of people I admire for specific qualities. Climate communicators like Dr. Marshall Shepherd, Dr. Leah Stokes, and Dr. Ayana Elizabeth Johnson, who clarify, educate, and strongly advocate for positive change. Congresswoman Alexandria Ocasio-Cortez for her ability to engage and how she's radically realigned the assumptions on what makes a politician and what it takes to effect change. As full role models, I'd have to pick my parents. They've displayed nothing but encouragement, generosity, love, and patience while raising me and my younger sister, who has a disability. They seeded my curiosity, critical thinking, and self-expression, and I hope I can do as much for others as they've done for me.

Greatest game in history: It wasn't perfect, but Diablo II: Lord of Destruction was pretty formative for me. Online play, fantastic loot, and an incredibly dark and interesting world.

Best gadget ever: Raspberry Pi! So many potential uses!

First computer: A big beige Windows 95 box. I still remember the Jurassic Park screensaver my parents installed and not understanding anything that was going on in the included game, Magic Carpet.

Current phone: Pixel 2.

Favorite app: Twitter. It took a while to curate, but it has been an invaluable source of up-to-date info on current events at the local and national scale, science, and politics.

Favorite cause: Increasing and amplifying BIPOC representation in STEM, BLM.

Most important technology of 2020: I know it's already been said, but I can't argue: it's video conferencing.

Most important technology of 2022: I hope it's a major breakthrough in energy storage tech that helps with our renewables revolution and rapid decarbonization.

Final words of advice for your fellow geeks: Get involved in something that makes you feel good and leaves something about the world better, even if only a bit. It can be hard to step outside of the comfort zone, but if the last few years have shown anything, it's that we need to stand up for one another because we need each other. Start small and build out from there. How else are we going to get to one of the rad high-tech post-scarcity worldlines?

Twitter: @frodre

LinkedIn: W. Andre Perkins


Imaging AI and Machine Learning Beyond the Hype, Upcoming Webinar Hosted by Xtalks – PR Web

Learn what is available today in the current landscape, its applications for building efficiency and what is coming in the near future to help life science companies transform their clinical trial imaging.

TORONTO (PRWEB) November 30, 2020

For the first 125 years of medical imaging, technological advances focused primarily on new modes of imaging as technology progressed from the discovery of the X-ray in 1895 to ultrasounds, MRIs, PET and CT scans in the late 20th century. Now, arguably, the most notable advances are being made in how images from those technologies are securely shared, managed, stored and assessed. These advancements are largely due to the application of artificial intelligence (AI) and machine learning (ML) to imaging systems and data platforms.

Automation is improving virtually every stage of the imaging workflow, but there is a lot of hype concerning AI and ML in the marketplace. Companies have underestimated the challenge that complexity presents, and predictions of the end of radiologists have proven false multiple times.

Join experts from ICON Medical Imaging and Medidata for this webinar on the practical applications for AI and ML in clinical trial imaging and what is possible today. Learn what is available today in the current landscape, its applications for building efficiency and what is coming in the near future to help life science companies transform their clinical trial imaging.

Join Paul McCracken, Vice President, Head of Medical Imaging, ICON Medical Imaging; and Dan Braga, VP, Product Management, Acorn AI Product & Ecosystem, Medidata, in a live webinar on Wednesday, December 16, 2020 at 11am EST (8am PST).

For more information, or to register for this event, visit Imaging AI and Machine Learning Beyond the Hype.

ABOUT XTALKS

Xtalks, powered by Honeycomb Worldwide Inc., is a leading provider of educational webinars to the global life science, food and medical device community. Every year, thousands of industry practitioners (from life science, food and medical device companies, private & academic research institutions, healthcare centers, etc.) turn to Xtalks for access to quality content. Xtalks helps Life Science professionals stay current with industry developments, trends and regulations. Xtalks webinars also provide perspectives on key issues from top industry thought leaders and service providers.

To learn more about Xtalks visit http://xtalks.com. For information about hosting a webinar visit http://xtalks.com/why-host-a-webinar/



Machine learning – it’s all about the data – KHL Group

When it comes to the construction industry, machine learning means many things. However, at its core, it all comes back to one thing: data.

The more data that is produced through telematics, the more advanced artificial intelligence (AI) becomes, because it has more data to learn from. The more complex the data, the better for AI, and as AI becomes more advanced, its decision-making improves. This means that construction is becoming more efficient thanks to a loop in which data and AI feed into each other.

Machine learning is an application of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. As Jim Coleman, director of global IP at Trimble, says succinctly, "Data is the fuel for AI."

Artificial intelligence

Coleman expands on that statement and the notion that AI and data are in a loop, helping each other to develop.

"The more data we can get, the more problems we can solve, and the more processing we can throw on top of that, the broader the set of problems we'll be able to solve," he comments.

"There's a lot of work out there to be done in AI, and it all centres around this notion of collecting data, organising the data and then mining and evaluating that data."

Karthik Venkatasubramanian, vice president of data and analytics at Oracle Construction and Engineering, agrees that data is key, saying: "Data is the lifeblood for any AI and machine learning strategy to work. Many construction businesses already have data available to them without realising it."

"This data, arising from previous projects and activities, and collected over a number of years, can become the source of data that machine learning models require for training. Models can use this existing data repository to train on and then compare against a validation test before it is used for real-world prediction scenarios."

There are countless examples of machine learning at work in construction, with a large number of OEMs having their own programmes in place, not to mention what's being worked on by specialist technology companies.

One of these OEMs is USA-based John Deere. Andrew Kahler, a product marketing manager for the company, says that machine learning has expanded rapidly over the past few years and has multiple applications.

"Machine learning will allow key decision makers within the construction industry to manage all aspects of their jobs more easily, whether in a quarry, on a site development job, building a road, or in an underground application. Bigger picture, it will allow construction companies to function more efficiently and optimise resources," says Kahler.

He also makes the point that a key step in this process is the ability for smart construction machines to connect to a centralised, cloud-based system. John Deere has its JDLink Dashboard, and most of the major OEMs have their own equivalent system.

"The potential for machine learning to unlock new levels of intelligence and automation in the construction industry is somewhat limitless. However, it all depends on the quality and quantity of data we're able to capture, and how well we're able to put it to use through smart machines."

USA-based Built Robotics was founded in 2016 to address what its founders saw as a gap in the market: the lack of technology being used across construction sites, especially compared to other industries. The company upgrades construction equipment with AI guidance systems, enabling machines to operate fully autonomously.

The company typically works with excavators, bulldozers, and skid steer loaders. The equipment can only work autonomously on certain repetitive tasks; for more complex tasks an operator is required.

Erol Ahmed, director of communications at Built Robotics, says that founder and CEO Noah Ready-Campbell wanted to apply robotics where it would be really helpful and have a lot of change and impact, and thus settled on the construction industry.

Ahmed says the company is the only commercial provider of autonomous heavy construction equipment. He adds that the business, which operates in the US and has recently launched operations in Australia, is focused on automating specific workflows.

"We want to automate specific tasks on the job site, get them working really well. It's not about developing some sort of all-encompassing robot that thinks and acts like a human and can do anything you tell it to. It is focusing on specific things, doing them well, helping them work in existing workflows. Construction sites are very complicated, so just automating one piece is very helpful and provides a lot of productivity savings."

Hydraulic system

Ahmed confirms that as long as the equipment has an electronically controlled hydraulic system, converting, for example, a Caterpillar, Komatsu or Volvo excavator isn't too different. There is obviously interest in the company: in September 2019 it announced it had received US$33 million in investment, bringing its total funding to US$48 million.

Of course, a large excavator or a mining truck at work without an operator is always going to catch the eye, and capture our attention and imagination. These machines are perhaps the most visible aspect of machine learning on a construction site, but there are a host of other examples working away in the background.

As Trimble's Coleman notes, "I think one of the interesting things about good AI is you might not know what's even there, right? You just appreciate the fact that, all of a sudden, there's an increase in productivity."

AI is used in construction for everything from specific tasks, such as informing an operator when a machine might fail or isn't being used productively, to broader, more macro purposes. For instance, for contractors planning how best to construct a project, there is software with AI that can map out the most efficient processes.

The AI can make predictions about schedule delays and cost overruns. As there is often existing data on schedule and budget performance, this can be used to make predictions, and those predictions will get better over time. As noted before: the more data that AI has, the smarter it becomes.
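
As a hedged illustration of how such a model might be trained (the features and figures below are invented, and real schedule-risk tools are far richer), historical project records can feed a regressor that is validated on held-out projects before being trusted with live forecasts:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical history: [planned duration (weeks), budget ($m), change orders]
X = [[40, 12, 3], [20, 4, 1], [60, 30, 8], [30, 8, 2],
     [52, 18, 6], [16, 3, 0], [44, 15, 5], [24, 6, 1]]
y = [6, 1, 14, 2, 9, 0, 8, 1]  # actual schedule overrun, in weeks

# Validate on held-out projects before trusting live predictions.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Forecast the overrun risk of a newly planned project.
print("predicted overrun (weeks):", model.predict([[36, 10, 4]])[0])
```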

Venkatasubramanian from Oracle adds that "smartification" is happening in construction, saying: "Schedules and budgets are becoming smart by incorporating machine learning-driven recommendations."

"Supply chain selection is becoming smart by using data across disparate systems and comparing performance. Risk planning is also getting smart by using machine learning to identify and quantify risks from the past that might have a bearing on the present."

There is no doubt that construction has been slower than other industries to adopt new technology, but this isn't just because of some deep-seated reluctance to embrace new ideas.

For example, agriculture has a greater application of machine learning, but it is easier for that sector to implement: every year, the task of getting in the crops on a farm will be broadly similar.

New challenges

As John Downey, director of sales EMEA at Topcon Positioning Group, explains: "With construction there's a slower adoption process because no two projects, or indeed construction sites, are the same, so the technology is always confronted with new challenges."

Downey adds that as machine learning develops it will work best with repetitive tasks like excavation, paving or milling, but he thinks the potential goes beyond this.

"As we move forward and AI continues to advance, we'll begin to apply it across all aspects of construction projects."

"The potential applications are countless, and the enhanced efficiency, improved workflows and accelerated rate of industry it will bring are all within reach."

Automated construction equipment needs operators to oversee it; as this sector develops, that could mean one person for every three or five machines, or more. It is currently unclear. With construction facing a skills shortage, this is an exciting avenue. There is also AI that helps contractors to better plan, execute and monitor projects. You don't need machine learning-type intelligence to see the potential transformational benefits of this when multi-billion dollar projects are being planned and implemented.


Even with machine learning, human judgment still required in fintech sector – The Irish News

A COMBINATION of recent events has seen a rapid acceleration in the adoption and incorporation of technologies by a wide range of firms and institutions in the global financial sector.

Whether this adoption has been spurred on by the global financial crisis of 2008, the need to adhere to regulation, or the immediate need to pivot and handle the consequences of Covid-19 and its impact on customers and staff, firms in the finance industry are incorporating financial technologies (fintech) into their daily processes.

Designed to drive enhancement in services and improve efficiencies in back-office operations, this has seen a thriving sector develop beyond traditional 'Wall Street' financing.

The prospect of the role that machine learning (ML) could play is generating a lot of momentum.

The financial sector is well-placed to benefit from machine learning, with large volumes of historical structured and unstructured data to learn from. It is also open to implementing new technologies, as demonstrated by the early adoption of technologies such as algorithmic trading by investment banks in the 1980s.

Accordingly, a 2019 study by Forrester estimates that around half of financial services and insurance firms globally already use ML technologies. By using these technologies, significant and non-trivial savings have already been made. For instance, JPMorgan Chase has estimated that its fraud detection solution, which uses machine learning to analyse stock market data, saves the bank $150m annually.
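
A common pattern for this kind of fraud screening is unsupervised anomaly detection over transaction features. The sketch below uses scikit-learn's IsolationForest on invented data; it indicates the general shape of such systems, not JPMorgan's actual solution:

```python
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount ($), hour of day, tx in last 24h]
transactions = [
    [25, 13, 3], [40, 9, 2], [18, 17, 4], [60, 12, 1],
    [32, 15, 2], [9000, 3, 14], [55, 10, 3], [21, 14, 2],
]

# Fit on the stream and flag the most isolated (unusual) transactions.
detector = IsolationForest(contamination=0.1, random_state=0).fit(transactions)
flags = detector.predict(transactions)  # -1 = flagged as anomalous
for tx, flag in zip(transactions, flags):
    if flag == -1:
        print("review:", tx)  # the 3am, $9,000, high-frequency outlier
```

Flagged transactions would then go to a human analyst, which is exactly the human-plus-machine division of labour described below.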

So, will machine learning completely automate human tasks in the finance sector? Probably not. Human judgment is still required to help with so-called 'edge cases', where no obvious outcome is clear, and with the associated decision-making.

In many ways, it represents a new synergy between human and machine. Machine learning systems can sift through enormous amounts of data and identify correlations. Human expertise is still required to tease apart spurious links and noise from underlying informative signals. As highlighted by the Covid-19 pandemic, machine learning is highly capable at analysing large amounts of domain-specific data and identifying patterns against an expressed objective, but it is slower to adapt to rare 'black swan' events if they are not closely related to past trends.

On a positive note, using these tools alongside human judgment can improve the quality of data analysis for decision-making and increase process efficiencies. Two areas where machine learning is already having an impact are fraud detection and improvements in personalisation for customer service.

As we look ahead post-pandemic, we can expect to see the finance sector continuing to adopt machine learning technology to improve efficiencies and reduce costs across customer service, regulatory adherence, fraud detection and trading.

Machine learning strengthened with human expertise at this stage will aid in the development of more robust technology solutions.

:: Fiona Browne, head of AI at Datactics, will chair a panel discussion on artificial intelligence in financial services at AI Con this Thursday and Friday. To register for the conference go to: https://aicon2020.com/


LR partners with regulator to apply machine learning to mine insight. – Lloyd’s Register

Lloyd's Register is working with the Health and Safety Executive (HSE) to bring together and analyse safety data on an unprecedented scale, powered by its latest digital innovation, Severity Scanner.

The tool has been enabled by Discovering Safety, which has allowed unprecedented access to lessons learnt from past incidents in order to make the working world safer and prevent death and serious injury. The initiative will also minimise human effort and error in the RIDDOR (Reporting of Injuries, Diseases and Dangerous Occurrences Regulations) reporting process, helping businesses to fulfil their regulatory obligations and protect themselves from potential disruption, investigation and fines.

The programme is funded by the Lloyd's Register Foundation and leverages the technology behind Lloyd's Register's original SafetyScanner solution, which applies machine learning to large datasets to provide health, safety and environment professionals with actionable insights. This will be applied to datasets from the Health and Safety Executive, with a pilot project being carried out in partnership with hospitality business Mitchells & Butlers. The pilot project ensures that the algorithm and the tool's machine learning capabilities correctly analyse the data.

Ran Merkazy, VP of Product & Services Innovation at Lloyd's Register, comments on the programme: "At its heart, this is about creating a pool of data on a scale never seen before, and using the latest in AI to mine that for valuable safety insights. We are combining more than 40 years of data from the HSE with other sources around the world, and the pilot with M&B is teaching us a lot about how businesses can use these insights to support their operations."

Professor Andrew Curran, Chief Scientific Advisor at HSE, said: "Our collaboration with Lloyd's Register Foundation on Discovering Safety will have a huge impact on our ability to learn from incidents and accidents. It's our next step towards a world where no one dies as a consequence of work, where industry doesn't suffer catastrophic failure, where companies can say that no one was harmed in the making of their product and where accidents can be predicted and therefore prevented."

Once established and operational, Discovering Safety's knowledge library of safety information will have a global reach and impact on a scale never previously achieved in the domain of health and safety. Since its initial launch in 2018, Discovering Safety has already provided:


The Way We Train AI is Fundamentally Flawed – Machine Learning Times – The Predictive Analytics Times

It's no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings. This is typically put down to a mismatch between the data the AI was trained and tested on and the data it encounters in the world, a problem known as data shift. For example, an AI trained to spot signs of disease in high-quality medical images will struggle with blurry or cropped images captured by a cheap camera in a busy clinic.

Now a group of 40 researchers across seven different teams at Google have identified another major cause for the common failure of machine-learning models. Called underspecification, it could be an even bigger problem than data shift. "We are asking more of machine-learning models than we are able to guarantee with our current approach," says Alex D'Amour, who led the study.


Machine Learning-as-a-Service (MLaaS) Market 2020; Region Wise Analysis of Top – News by aeresearch

The key focus of the Machine Learning-as-a-Service (MLaaS) market report is to evaluate the performance of the industry in the ensuing years to help stakeholders make better decisions and expand their business portfolios. The document highlights the key growth trends as well as the opportunities and how they can be exploited to generate maximum profits. In addition, it empowers industry partakers with methodologies that can be adopted to effectively deal with the existing and upcoming challenges. Besides, it gauges the impact of COVID-19 on this business sphere and attempts to monitor its future implications on the market scenario for a stronger realization of the growth prospects.

Key pointers from COVID-19 impact assessment:

Summary of the regional analysis:

Request Sample Copy of this Report @ https://www.aeresearch.net/request-sample/375460

Other crucial pointers from the Machine Learning-as-a-Service (MLaaS) market report:

Key features of the report:

Highlights of the Report:

The scope of the Report:

The report offers a complete company profiling of leading players competing in the global Machine Learning-as-a-Service (MLaaS) market, with a high focus on the share, gross margin, net profit, sales, product portfolio, new applications, recent developments, and several other factors. It also throws light on the vendor landscape to help players become aware of future competitive changes in the global Machine Learning-as-a-Service (MLaaS) market.

Reasons to Buy the Report:

Table of Contents:

Industry Overview of Machine Learning-as-a-Service (MLaaS) Market

Industry Chain Analysis of Machine Learning-as-a-Service (MLaaS) Market

Manufacturing Technology of Machine Learning-as-a-Service (MLaaS) Market

Major Manufacturers Analysis of Machine Learning-as-a-Service (MLaaS) Market

Global Productions, Revenue and Price Analysis of Machine Learning-as-a-Service (MLaaS) Market by Regions, Manufacturers, Types, and Applications

Consumption Volumes, Consumption Value, Import, Export and Sale Price Analysis of Machine Learning-as-a-Service (MLaaS) by Regions

Gross and Gross Margin Analysis of Machine Learning-as-a-Service (MLaaS) Market

Marketing Traders or Distributor Analysis of Machine Learning-as-a-Service (MLaaS) Market

Global and Chinese Economic Impacts on Machine Learning-as-a-Service (MLaaS) Industry

Development Trend Analysis of Machine Learning-as-a-Service (MLaaS) Market

Contact information of Machine Learning-as-a-Service (MLaaS) Market

New Project Investment Feasibility Analysis of Machine Learning-as-a-Service (MLaaS) Market

Conclusion of the Global Machine Learning-as-a-Service (MLaaS) Market Industry 2020 Market Research Report

Request Customization on This Report @ https://www.aeresearch.net/request-for-customization/375460


Machine learning could have role in pain detection in horses – study – Horsetalk

A screenshot from a video of a horse in the study, with the predicted markers visible: nose (green), withers (red) and tail (blue). Photo: Kil et al. https://doi.org/10.3390/ani10122258

Automated video tracking of stabled horses is a promising tool that, when combined with machine learning, could successfully track pain-related behaviour, according to researchers.

Such a system would be especially useful in a clinical setting, monitoring unwell horses or those recovering from surgery.

Researchers with the University of Veterinary Medicine Vienna set out to evaluate how a video-based automatic tracking tool performed in recognising the activity of stabled horses in a hospital setting.

Nuray Kil, Katrin Ertelt and Ulrike Auer, writing in the journal Animals, said it was well established in veterinary medicine that pain triggers behavioural changes in animals.

Detailed knowledge of both normal and pain-related behaviours in equines is crucial to properly evaluate pain.

"Although the presence of strangers or unfamiliar surroundings may mask pain-related changes, even subtle variations may become apparent if behaviour is thoroughly analysed," they said.

In horses, pain is typically scored manually, they said.

"Various pain assessment scales, such as the Composite Pain Score and the Horse Grimace Scale, have been developed and proven useful in the assessment of postoperative pain."

However, all methods have limitations and present practical challenges, they noted. For example, horses may be seen only for a short time, and inexperience by the observer may increase the risk of underestimating pain.

A total of 34 horses were used in the study. All were patients of the university's equine teaching hospital and were housed in box stalls with free access to water, and roughage feed four times a day.

Video recordings were taken using an action camera and a time-lapse mode.

The videos were processed using the convolutional neural network tool Loopy for automated prediction of three body parts: the nose, withers and tail. Development of the model was carried out in several steps.

Ultimately, the body parts were detected with a sensitivity of more than 80% and an error rate of between 2% and 7%, depending on the body part. Put simply, the technology was able to identify the pose of the horses with an accuracy and sensitivity of more than 80%.

The results provide a crucial step toward developing algorithms for the automated recognition of behaviour through machine learning.

"In the long term, this technology will not only improve the detection of acute and chronic pain in veterinary medicine, but also provide improved and new insights for behavioural research in horses," they said.

The findings will help to develop the automated detection of daily activity, to meet the ultimate objective of objectively assessing the pain and wellbeing of horses.

The study team gave examples of the kinds of insights possible with automated tracking.

"For example, the position of a horse in the box in relation to the door can be determined over a longer period of time, or frequent weight-shifting during rest could be detected through nose movement."
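
Once a tracker emits per-frame coordinates, checks like these reduce to simple geometry over the time series. Here is a hypothetical sketch, with invented coordinates, door position and thresholds:

```python
import numpy as np

# Per-frame (x, y) nose positions from the tracker, in pixels (invented data).
nose = np.array([[310, 208], [312, 206], [311, 207], [120, 420], [118, 419]])
DOOR = np.array([100, 400])  # assumed pixel position of the box door

# How much time the horse spends near the door...
near_door = np.linalg.norm(nose - DOOR, axis=1) < 80
print("frames near door:", near_door.sum())

# ...and how often the nose moves between frames during rest
# (a crude proxy for weight-shifting).
step = np.linalg.norm(np.diff(nose, axis=0), axis=1)
print("movement events:", (step > 50).sum())
```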

The addition of other markers, such as the hooves or ears, would improve the observation of behaviour, they said.

Kil, N.; Ertelt, K.; Auer, U. Development and Validation of an Automated Video Tracking Model for Stabled Horses. Animals 2020, 10, 2258.

The study, published under a Creative Commons License, can be read here.
