Berkeley Lab Technologies Honored With 7 R&D 100 Awards – Lawrence Berkeley National Laboratory

Innovative technologies from Lawrence Berkeley National Laboratory (Berkeley Lab) to achieve higher energy efficiency in buildings, make lithium batteries safer and higher performing, and secure quantum communications were some of the inventions honored with R&D 100 Awards by R&D World magazine.

For more than 50 years, the annual R&D 100 Awards have recognized the 100 technologies of the past year deemed most innovative and disruptive by an independent panel of judges. The full list of winners, announced by parent company WTWH Media LLC, is available on the R&D World website.

Berkeley Lab's award-winning technologies are described below.

A Tool to Accelerate Electrochemical and Solid-State Innovation

(from left) Adam Weber, Nemanja Danilovic, Douglas Kushner, and John Petrovick (Credit: Berkeley Lab)

Berkeley Lab scientists invented a microelectrode cell to analyze and test electrochemical systems with solid electrolytes. Thanks to significant cost and performance advantages, this tool can accelerate development of critical applications such as energy storage and conversion (fuel cells, batteries, electrolyzers), carbon capture, desalination, and industrial decarbonization.

Solid electrolytes have been displacing liquid electrolytes as the focus of electrochemical innovation because of their performance, safety, and cost advantages. However, the lack of effective methods and equipment for studying solid electrolytes has hindered advancement of the technologies that employ them. This microelectrode cell meets the testing needs, and is already being used by Berkeley Lab scientists.

The development team includes Berkeley Lab researchers Adam Weber, Nemanja Danilovic, Douglas Kushner, and John Petrovick.

Matter-Wave Modulating Secure Quantum Communicator (MMQ-Com)

Information transmitted by MMQ-Com is impervious to security breaches. (Credit: Alexander Stibor/Berkeley Lab)

Quantum communication, cybersecurity, and quantum computing are growing global markets. But the safety of our data is in peril given the rise of quantum computers that can decode classical encryption schemes.

The Matter-Wave Modulating Secure Quantum Communicator (MMQ-Com) technology is a fundamentally new kind of secure quantum information transmitter. It transmits messages by modulating electron matter-waves without changing the pathways of the electrons. This secure communication method is inherently impervious to any interception attempt.

A novel quantum key distribution scheme also ensures that the signal is protected from spying by other quantum devices.

The development team includes Alexander Stibor of Berkeley Lab's Molecular Foundry along with Robin Röpke and Nicole Kerker of the University of Tübingen in Germany.

Solid Lithium Battery Using Hard and Soft Solid Electrolytes

(from left) Marca Doeff, Guoying Chen, and Eongyu Yi (Credit: Berkeley Lab)

The lithium battery market is expected to grow from more than $37 billion in 2019 to more than $94 billion by 2025. However, the liquid electrolytes used in most commercial lithium-ion batteries are flammable and limit the ability to achieve higher energy densities. Safety issues continue to plague the electronics markets, as often-reported lithium battery fires and explosions result in casualties and financial losses.

In Berkeley Lab's solid lithium battery, the organic electrolytic solution is replaced by two solid electrolytes, one soft and one hard, and lithium metal is used in place of the graphite anode. In addition to eliminating battery fires, incorporation of a lithium metal anode, with a capacity 10 times higher than that of graphite (the conventional anode material in lithium-ion batteries), provides much higher energy densities.

The technology was developed by Berkeley Lab scientists Marca Doeff, Guoying Chen, and Eongyu Yi, along with collaborators at Montana State University.

Porous Graphitic Frameworks for Sustainable High-Performance Li-Ion Batteries

High-resolution transmission electron microscopy images of the Berkeley Lab PGF cathode reveal (at left) a highly ordered honeycomb structure within the 2D plane, and (at right) layered columnar arrays stacked perpendicular to the 2D plane. (Credit: Yi Liu/Berkeley Lab)

The Porous Graphitic Frameworks (PGF) technology is a lithium-ion battery cathode that could outperform today's cathodes in sustainability and performance.

In contrast to commercial cathodes, organic PGFs pose fewer risks to the environment because they are metal-free and composed of earth-abundant, lightweight organic elements such as carbon, hydrogen, and nitrogen. The PGF production process is also more energy-efficient and eco-friendly than those of other cathode technologies because the frameworks are prepared in water at mild temperatures, rather than in toxic solvents at high temperatures.

PGF cathodes also display stable charge-discharge cycles with ultrahigh capacity and record-high energy density, both higher than those of all known commercial inorganic and organic cathodes.

The development team includes Yi Liu and Xinle Li of Berkeley Lab's Molecular Foundry, as well as Hongxia Wang and Hao Chen of Stanford University.

Building Efficiency Targeting Tool for Energy Retrofits (BETTER)

The buildings sector is the largest source of primary energy consumption (40%) and ranks second after the industrial sector as a global source of direct and indirect carbon dioxide emissions from fuel combustion. According to the World Economic Forum, nearly one-half of all energy consumed by buildings could be avoided with new energy-efficient systems and equipment.

(from left) Carolyn Szum (Lead Researcher), Han Li, Chao Ding, Nan Zhou, Xu Liu (Credit: Berkeley Lab)

The Building Efficiency Targeting Tool for Energy Retrofits (BETTER) allows municipalities, building and portfolio owners and managers, and energy service providers to quickly and easily identify the most effective cost-saving and energy-efficiency measures in their buildings. With an open-source, data-driven analytical engine, BETTER uses readily available building and monthly energy data to quantify energy, cost, and greenhouse gas reduction potential, and to recommend efficiency interventions at the building and portfolio levels to capture that potential.
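
As an illustration of the kind of analysis such an engine performs on monthly utility data, the minimal Python sketch below fits a simple change-point regression that separates weather-independent base load from cooling-driven consumption. It is a hypothetical example under assumed data, not BETTER's actual open-source code.

import numpy as np
from scipy.optimize import curve_fit

def three_param_cooling(t, base_load, slope, change_point):
    # Energy use is flat below the change point and rises linearly above it.
    return base_load + slope * np.maximum(t - change_point, 0.0)

# Hypothetical monthly data: mean outdoor temperature (C), energy use (kWh/m2).
temps = np.array([2, 4, 8, 13, 18, 23, 26, 25, 21, 14, 8, 3], dtype=float)
eui = np.array([10, 10, 11, 10, 13, 17, 20, 19, 15, 11, 10, 10], dtype=float)

(base, slope, cp), _ = curve_fit(three_param_cooling, temps, eui, p0=[10.0, 0.5, 15.0])
print(f"base load {base:.1f} kWh/m2, cooling slope {slope:.2f}, change point {cp:.1f} C")
# Benchmarking fitted coefficients against peer buildings is what flags
# unusually high base loads or steep cooling slopes as retrofit targets.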

It is estimated that BETTER will help reduce emissions by about 165.8 megatons of carbon dioxide equivalent (MtCO2e) globally by 2030. This is equivalent to the CO2 sequestered by growing 2.7 billion tree seedlings for 10 years.

The development team includes Berkeley Lab scientists Nan Zhou, Carolyn Szum, Han Li, Chao Ding, Xu Liu, and William Huang, along with collaborators from Johnson Controls and ICF.

AmanziATS: Modeling Environmental Systems Across Scales

Simulated surface and subsurface water from Amanzi-ATS hydrological modeling of the Copper Creek sub-catchment in the East River, Colorado watershed. (Credit: Zexuan Xu/Berkeley Lab, David Moulton/Los Alamos National Laboratory)

Scientists use computer simulations to predict the impact of wildfires on water quality, or to monitor cleanup at nuclear waste remediation sites by portraying fluid flow across Earth compartments. The Amanzi-Advanced Terrestrial Simulator (ATS) enables them to replicate or couple multiple complex and integrated physical processes controlling these flowpaths, making it possible to capture the essential physics of the problem at hand.

"Specific problems require taking an individual approach to simulations," said Sergi Molins, principal investigator at Berkeley Lab, which contributed expertise in geochemical modeling to the software's development. "Physical processes controlling how mountainous watersheds respond to disturbances such as climate- and land-use change, extreme weather, and wildfire are far different than the physical processes at play when an unexpected storm suddenly impacts groundwater contaminant levels in and around a nuclear remediation site. Amanzi-ATS allows scientists to make sense of these interactions in each individual scenario."

The code is open-source and capable of being run on systems ranging from a laptop to a supercomputer. Led by Los Alamos National Laboratory, Amanzi-ATS is jointly developed by researchers from Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and Berkeley Lab, whose contributors include Sergi Molins, Marcus Day, Carl Steefel, and Zexuan Xu.

Institute for the Design of Advanced Energy Systems (IDAES)

The U.S. Department of Energy's (DOE's) Institute for the Design of Advanced Energy Systems (IDAES) project develops next-generation computational tools for process systems engineering (PSE) of advanced energy systems, enabling their rapid design and optimization.

IDAES Project Team (Credit: Berkeley Lab)

By providing rigorous modeling capabilities, the IDAES Modeling & Optimization Platform helps energy and process companies, technology developers, academic researchers, and DOE to design, develop, scale up, and analyze new and potential PSE technologies and processes to accelerate advances and apply them to address the nation's energy needs. The IDAES platform is also a key component in the National Alliance for Water Innovation, a $100 million, five-year DOE innovation hub led by Berkeley Lab, which will examine the critical technical barriers and research needed to radically lower the cost and energy of desalination.
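
For readers unfamiliar with equation-oriented process modeling, the sketch below shows the style in miniature using Pyomo, the open-source Python optimization library that the IDAES platform builds on. The toy recycle process and its numbers are hypothetical, not an IDAES library example.

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, SolverFactory, minimize, value)

m = ConcreteModel()
m.feed = Var(within=NonNegativeReals)      # fresh feed, kmol/h
m.recycle = Var(within=NonNegativeReals)   # recycle stream, kmol/h
m.product = Var(within=NonNegativeReals)   # product, kmol/h

# Toy mass balances: 80% per-pass conversion, 20% of reactor inlet recycled.
m.conversion = Constraint(expr=m.product == 0.8 * (m.feed + m.recycle))
m.recycle_split = Constraint(expr=m.recycle == 0.2 * (m.feed + m.recycle))
m.demand = Constraint(expr=m.product >= 100.0)

m.cost = Objective(expr=m.feed, sense=minimize)  # feed as a cost stand-in

SolverFactory("ipopt").solve(m)
print(f"feed = {value(m.feed):.1f} kmol/h, recycle = {value(m.recycle):.1f} kmol/h")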

Led by the National Energy Technology Laboratory, IDAES is a collaboration with Sandia National Laboratories, Berkeley Lab, West Virginia University, Carnegie Mellon University, and the University of Notre Dame. The development team at Berkeley Lab includes Deb Agarwal, Oluwamayowa (Mayo) Amusat, Keith Beattie, Ludovico Bianchi, Josh Boverhof, Hamdy Elgammal, Dan Gunter, Juliane Mueller, Jangho Park, Makayla Shepherd, Karen Whitenack, and Perren Yang.

# # #

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 13 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab's facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy's Office of Science.

DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.

How Is Artificial Intelligence Used In B2B Companies: Here Are Powerful Examples – Forbes

There's often a misconception that artificial intelligence (AI) is only applicable to businesses in the B2C space. It's thought that since B2C companies have more customers, they have more data to leverage to make AI impactful. However, this couldn't be further from the truth. AI is as relevant to a B2B company as it is to a B2C company. It's important for every B2B company to evaluate the ways AI can help them produce better products, provide better services, and improve business processes. Here are just a few of the ways some B2B companies are using artificial intelligence today.

Better Products

One of the ways artificial intelligence supports B2B companies is by helping to create better products. In healthcare, AI is behind many innovative tools, such as the AI-powered CT scanner created by Siemens Healthineers. This tool assists radiologists with step-by-step instructions specific to each patient to acquire the best images possible. With the support of AI, clinical decision-making can be more quantitative and accurate. Artificial intelligence also enables predictive maintenance, which helps improve product performance and reduces costly downtime.

Better Services

There are many businesses that use artificial intelligence to provide better services to their clients. At Autodesk, intelligent design software is changing how things are designed and built. With machine learning, Autodesk provides its customers in manufacturing, architecture, engineering, and construction with automated generative design technology. Artificial intelligence is also behind Salesforce Einstein, a customer relationship management system that features machine learning, natural language processing capability, and predictive analytics. This can help catapult the sales and marketing services of businesses. When it comes to transaction processing and data warehousing, Oracle's autonomous databases help reduce operational costs.

Better Business Processes

One of the most powerful ways artificial intelligence supports any company, including B2B organizations, is by improving business processes. B2B businesses must adopt AI technologies if they want to remain competitive in their industries. AI is changing the game for business processes across functions such as marketing, recruiting/human resources, finance, and manufacturing. In addition, automation of a variety of tasks in multiple functions also helps reduce costs.

Manufacturing

When it comes to manufacturing, artificial intelligence and machine learning have been revolutionary. Everything from supply chain and inventory management to predictive maintenance has been improved by, or grew out of, Industry 4.0 innovations in manufacturing. With the adoption of machine learning, McKinsey has predicted a drop in forecasting errors of as much as 50 percent, and there are similar cost and time savings throughout every stage of the manufacturing process.

Marketing

Once artificial intelligence is put to work in the sales and marketing processes, data from websites, social media accounts, and contact databases can be analyzed for insights to help improve the number of leads generated as well as the quality of those leads. The hyper-personalization of marketing campaigns that's possible thanks to AI technology and machine learning can also boost B2B business results, in part because relevant content can be delivered at the right time. Now that chatbots and other AI-powered communication systems can provide customer service 24/7, there's a lot of heavy lifting already completed before your human workforce needs to engage with your customers.

Human Resources

Is there room for AI in human resources? Absolutely! While the HR function is focused on humans, the reality is that AI can help optimize and analyze HR efforts just as it does for other disciplines. If you've applied for a job using a digital platform, you've experienced AI at work. Algorithms scan your credentials to spot relevant terms on your resume that might indicate a fit for the open position. But it's not just recruiting that benefits from AI integration. Insights from AI can help HR departments understand employee referrals and analyze feedback from employees to make data-driven decisions. Responding effectively and accurately to employee feedback can improve the employee experience.

Finance

The finance arena is a natural fit for AI and machine learning applications since it already relies heavily on digital workflows and databases. Certainly, one of the biggest areas where AI can help finance departments is the prevention and detection of fraud. AI is able to quickly process and learn from historical data and apply that learning to the current reality to spot fraud. AI can also automate many mundane tasks associated with finance and accounting, freeing up human professionals for the tasks they are better qualified for.

Robotic Process Automation

Another way B2B companies can adopt AI technologies is through robotic process automation (RPA), which automates routine tasks that workers would otherwise perform. Solutions such as those from Automation Anywhere make it easy for any company, not just those leading in the tech space or with tech talent on staff, to get an out-of-the-box RPA solution.

As you can see, there are a plethora of ways B2B companies can use artificial intelligence to their advantage. To learn more about how AI is reshaping our world, have a look at my new book, The Intelligence Revolution: Transforming Your Business With AI.

There is already a beer created by Artificial Intelligence – Entrepreneur

Deeper is a drink produced in Switzerland with the assistance of AI.

October 2, 2020

Technology has become a large part of our lives, and with it artificial intelligence (AI) has worked its way into our daily routines, so much so that it is now helping to create products that people would ordinarily make themselves.

In this context, a Swiss company launched Deeper, the country's first beer created with the assistance of AI. The recipe for the drink was generated by an algorithm known as Brauer AI.

Photo: brauer.ai

To carry out this project, the creators chose an India Pale Ale; the algorithm then analyzed market trends and an international database of around 157,000 recipes to choose the type of malt and hops to use.

The MN Brew microbrewery, the University of Lucerne, and the company Jaywalker Digital participated in the creation of this product. A short tagline on the drink's official page explains: "we believe in the power of merging human wisdom with artificial intelligence."

Artificial intelligence in pulmonary medicine: computer vision, predictive model and COVID-19 – DocWire News

Eur Respir Rev. 2020 Oct 1;29(157):200181. doi: 10.1183/16000617.0181-2020. Print 2020 Sep 30.

ABSTRACT

Artificial intelligence (AI) is transforming healthcare delivery. The digital revolution in medicine and healthcare information is prompting a staggering growth of data intertwined with elements from many digital sources such as genomics, medical imaging and electronic health records. Such massive growth has sparked the development of an increasing number of AI-based applications that can be deployed in clinical practice. Pulmonary specialists who are familiar with the principles of AI and its applications will be empowered and prepared to seize future practice and research opportunities. The goal of this review is to provide pulmonary specialists and other readers with information pertinent to the use of AI in pulmonary medicine. First, we describe the concept of AI and some of the requisites of machine learning and deep learning. Next, we review some of the literature relevant to the use of computer vision in medical imaging, predictive modelling with machine learning, and the use of AI for battling the novel severe acute respiratory syndrome-coronavirus-2 pandemic. We close our review with a discussion of limitations and challenges pertaining to the further incorporation of AI into clinical pulmonary practice.

PMID:33004526 | DOI:10.1183/16000617.0181-2020

9 Soft Skills Every Employee Will Need In The Age Of Artificial Intelligence (AI) – Forbes

Technical skills and data literacy are obviously important in this age of AI, big data, and automation. But that doesn't mean we should ignore the human side of work: skills in areas that robots can't handle so well. I believe these softer skills will become even more critical for success as the nature of work evolves, and as machines take on more of the easily automated aspects of work. In other words, the work of humans is going to become altogether more, well, human.

With this in mind, what skills should employees be looking to cultivate going forward? Here are nine soft skills that I think are going to become even more precious to employers in the future.

1. Creativity

Robots and machines can do many things, but they struggle to compete with humans when it comes to our ability to create, imagine, invent, and dream. With all the new technology coming our way, the workplaces of the future will require new ways of thinking, making creative thinking and human creativity an important asset.

2. Analytical (critical) thinking

As well as creative thinking, the ability to think analytically will be all the more precious, particularly as we navigate the changing nature of the workplace and the changing division of labor between humans and machines. That's because people with critical thinking skills can come up with innovative ideas, solve complex problems, and weigh up the pros and cons of various solutions, all using logic and reasoning rather than relying on gut instinct or emotion.

3. Emotional intelligence

Also known as EQ (as in, emotional IQ), emotional intelligence describes a person's ability to be aware of, control, and express their own emotions, and to be aware of the emotions of others. So when we talk about someone who shows empathy and works well with others, we're describing someone with a high EQ. Given that machines can't easily replicate humans' ability to connect with other humans, it makes sense that those with high EQs will be in even greater demand in the workplace.

4. Interpersonal communication skills

Related to EQ, the ability to successfully exchange information between people will be a vital skill, meaning employees must hone their ability to communicate effectively with other people using the right tone of voice and body language in order to deliver their message clearly.

5. Active learning with a growth mindset

Someone with a growth mindset understands that their abilities can be developed and that building skills leads to higher achievement. They're willing to take on new challenges, learn from their mistakes, and actively seek to expand their knowledge. Such people will be much in demand in the workplace of the future because, thanks to AI and other rapidly advancing technologies, skills will become outdated even faster than they do today.

6. Judgement and decision making

We already know that computers are capable of processing information better than the human brain, but ultimately, it's humans who are responsible for making the business-critical decisions in an organization. It's humans who have to take into account the implications of their decisions in terms of the business and the people who work in it. Decision-making skills will, therefore, remain important. But there's no doubt that the nature of human decision making will evolve; specifically, technology will take care of more menial and mundane decisions, leaving humans to focus on higher-level, more complex decisions.

7. Leadership skills

The workplaces of the future will look quite different from today's hierarchical organizations. Project-based teams, remote teams, and fluid organizational structures will probably become more commonplace. But that won't diminish the importance of good leadership. Even within project teams, individuals will still need to take on leadership roles to tackle issues and develop solutions, so common leadership traits, such as being inspiring and helping others become the best versions of themselves, will remain critical.

8. Diversity and cultural intelligence

Workplaces are becoming more diverse and open, so employees will need to be able to respect, understand, and adapt to others who might have different ways of perceiving the world. This will obviously improve how people interact within the company, but I think it will also make the business's services and products more inclusive, too.

9. Embracing change

Even for me, the pace of change right now is startling, particularly when it comes to AI. This means people will have to be agile and cultivate the ability to embrace and even celebrate change. Employees will need to be flexible and adapt to shifting workplaces, expectations, and required skillsets. And, crucially, they'll need to see change not as a burden but as an opportunity to grow.

Bottom line: we needn't be intimidated by AI. The human brain is incredible. It's far more complex and more powerful than any AI in existence. So rather than fearing AI and automation and the changes this will bring to workplaces, we should all be looking to harness our unique human capabilities and cultivate these softer skills, skills that will become all the more important for the future of work.

AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

Artificial Intelligence in Operation Monitoring Discovers Patterns Within Drilling Reports – Journal of Petroleum Technology

In well-drilling activities, successful execution of a sequence of operations defined in a well project is critical. To provide proper monitoring, operations executed during drilling procedures are reported in daily drilling reports (DDRs). The complete paper provides an approach using machine-learning and sequence-mining algorithms for predicting and classifying the next operation based on textual descriptions. The general goal is to exploit the rich source of information represented by the DDRs to derive methodologies and tools capable of performing automatic data-analysis procedures and assisting human operators in time-consuming tasks.

Classification Tasks. fastText. This is a library, discussed in the literature, designed to learn word embeddings and perform text classification. The technique implements a simple linear model with rank constraint, and the text representation is a hidden state that is used to feed classifiers. A softmax function computes the probability distribution over predefined classes.
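
As a hedged illustration of this baseline (not the authors' code), the snippet below trains a supervised fastText model that maps a DDR free-text description to an operation label using the fasttext Python package. The file name, label set, and example text are assumptions; the epoch count echoes the 30 training epochs reported later for the paper's models.

import fasttext

# ddr_train.txt holds one example per line, e.g.:
# __label__TRIPPING pulled 10 stands to casing shoe ...
model = fasttext.train_supervised(
    input="ddr_train.txt",
    epoch=30,       # matches the 30 training epochs reported in the paper
    wordNgrams=2,   # bigrams recover some word order the linear model lacks
)

labels, probs = model.predict("circulated bottoms up and flow checked well")
print(labels[0], probs[0])  # top predicted operation and its probability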

Conditional Random Fields (CRFs). CRFs are a category of undirected graphical models that allow combination of features from each timestep of the sequence, with the ability to transition between labels for each episode in the input sequence. They were proposed to overcome the label-bias problem that existed in techniques such as hidden Markov models and maximum-entropy Markov models.
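
For intuition, here is a minimal linear-chain CRF sketch using the sklearn-crfsuite package, treating each wellbore as one sequence of DDR entries whose operation labels are predicted jointly. The features and labels are hypothetical stand-ins, not the paper's feature set.

import sklearn_crfsuite

def entry_features(description):
    tokens = description.lower().split()
    return {"first_word": tokens[0], "has_drill": "drill" in tokens,
            "n_tokens": len(tokens)}

# One wellbore = one sequence; each entry gets one operation label.
X = [[entry_features("rig up top drive"), entry_features("drill 12.25 in hole")]]
y = [["RIG_UP", "DRILLING"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))  # labels chosen jointly, using learned transition scores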

Recurrent Models. Despite achieving good results in several scenarios and learning word embeddings as a byproduct of its training, the fastText classifier does not properly consider word-ordering information that can be useful for several classification tasks. Such a shortcoming can be addressed by a recurrent neural network (RNN), which considers the fact that a fragment of text is formed by an ordered sequence of words. The authors consider the gated recurrent unit variant, which is easier to train than traditional RNNs and achieves results comparable with those of the long short-term memory unit while having fewer parameters to learn. The methodology of these classifiers is detailed mathematically in the complete paper.
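
A minimal PyTorch sketch of a GRU classifier in this spirit follows: embed the token sequence, run it through a GRU, and classify from the final hidden state. The sizes are illustrative assumptions; only the 39-class output matches the classification data set described below.

import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):               # (batch, seq_len)
        h_seq, h_last = self.gru(self.embed(token_ids))
        return self.out(h_last[-1])             # logits over operation types

model = GRUClassifier(vocab_size=20000, embed_dim=100, hidden_dim=128, n_classes=39)
logits = model(torch.randint(0, 20000, (8, 40)))  # 8 descriptions, 40 tokens each
print(logits.shape)                               # torch.Size([8, 39])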

Sequence Prediction. Sequential pattern mining can be defined broadly by the task of discovering interesting subsequences in a set of sequences, where the level of interest can be measured in terms of various criteria such as occurrence frequency, length, and profit, according to the application. The authors focus in this paper on the specific task of sequence prediction.

In the scenario considered, the alphabet is given by an ontology of operations in drilling activities, and the sequences are defined according to data stored in DDRs. The proposed methodology considers various sequence-prediction algorithms.

These algorithms are detailed in the complete paper. The sequential pattern mining framework (SPMF) was used for algorithm implementation. SPMF is an open-source data-mining library specialized in frequent pattern mining.
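
The SPMF predictors themselves are Java implementations; as a much simpler stand-in that conveys the task, the sketch below builds a first-order Markov model over operation names and predicts the most likely next operation. The operation names are hypothetical.

from collections import Counter, defaultdict

sequences = [
    ["RIG_UP", "DRILLING", "CIRCULATING", "TRIPPING", "CASING"],
    ["RIG_UP", "DRILLING", "TRIPPING", "DRILLING", "CIRCULATING"],
]

transitions = defaultdict(Counter)
for seq in sequences:
    for current_op, next_op in zip(seq, seq[1:]):
        transitions[current_op][next_op] += 1

def predict_next(op):
    # Return the most frequent successor observed in training, if any.
    return transitions[op].most_common(1)[0][0] if transitions[op] else None

print(predict_next("DRILLING"))  # 'CIRCULATING' (seen twice vs. once for 'TRIPPING')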

Data Sets. The data sets used for the experiments reported in this paper were extracted from different collections of DDRs. Each DDR entry is a record containing a rich set of information about the event being reported, which could be an operation or an occurrence. Two different types of data sets were generated: the operations data sets and the costs data set. The former is used by both classification and sequence prediction tasks, whereas the latter is used only for classification.

Operations Data Sets. The operations data sets were extracted from DDRs of 119 different wellbores, which comprise more than 90,000 entries. The DDR fields of most interest for the experiments applied on this collection are the description and the operation name. The former is a special field used by the reporter to fill in important details about the event in a free-text format. The latter is selected by the reporter from a predefined list of operation names.

For the sequence-mining tasks, only the operation name is used. The data set is viewed as a set of sequences of operations, one for each wellbore. For the classification tasks, both fields are used for supervised learning, with the description as input object and the operation name as label.

The DDRs were preprocessed by an industry specialist with the objective of, first, removing the inconsistencies and, second, normalizing operation names to unify operations that shared semantics. Given the large number of documents, the strategy used for the former objective was to remove entries with the wrong operation name (instead of fixing each one, which would be a much harder task). As for the second objective, after an analysis of the list of operation names and samples of descriptions, each group of overlapping operations was transformed into a single operation.

This process yielded a data set containing more than 38,000 samples and 39 operation types for the classification task, and another containing more than 51,000 samples and 41 operation types for the sequence-prediction task.

Costs Data Sets. The costs data set is a collection of DDRs with an extra field (the target field) meant to be used for calculating the cost of each operation performed in a wellbore project. That field usually is multivalued because more than one activity of interest might be described in the free-text field of a DDR entry. Each value in that list is a pair containing two types of information: a label for the activity described in the entry and a number pointing to a diameter value.

As opposed to the operations data set, the target field was filled on land by a small group of employees trained specifically for this task. Nevertheless, the costs data set still had to be preprocessed before use in the experiments.

Classification Results. Before evaluating the models, the best values for each hyperparameter are determined using the validation set through a grid search. The proposed models are trained for 30 epochs.

The experimental results regarding accuracy and macro-F1 measures for the costs and operations data sets are presented in the complete paper. In both cases, the fastText classifier, despite being quite simple, yields significant results, setting a strong baseline for the proposed models. Nevertheless, one should recall that the word vectors learned by this first classifier are used as the proposed models' embeddings as well.

The other neural networks also consider the complete word ordering in the samples, allowing them to achieve results better than the baseline. Such metrics are further improved by replacing the traditional softmax layer in the output layer with a CRF. This allows the model to label each entry in the segment not only on the basis of its extracted characteristics but also with respect to the operations ordering, improving the baseline accuracy by 10.94% and 3.85% in the costs and operations data sets, respectively. The proposed model learns not only the most relevant characteristics from each sample but also the patterns in the sequence of operations performed in a well-drilling project.

Sequence-Mining Results. The data set was divided into 10 segments, and the methods were evaluated according to a cross-validation protocol. The cross-validation protocol varies the training and testing data through various executions in order to reduce any bias in the results. For the classification tasks, approaches based on word embeddings and CRFs are exploited. Evaluations were made considering sequences from size 5 to 10 in the data set, using the sequence-prediction methods to predict the next drilling operation.

Table 1 presents the accuracy obtained when considering the sequences of operations as presented in the data set. Table 2 shows the accuracy obtained when removing consecutive repetitions of drilling operations from the data. The data set contains multiple repetitions of operations, contiguous to one another. This makes the data more predictable to the sequence prediction model and explains the higher accuracy obtained in the experiments shown in Table 1.

DDR Processing Framework. To make the models discussed available for use in a real-world scenario, a framework is proposed that allows the end user to upload DDRs and analyze them with different applications, one for each specific purpose. One great advantage of using this framework is that the user feeds data in once and then has access to several tools for analyzing it.

Currently, a working version of an application for performing the classification tasks already has been implemented. It encapsulates the classification models generated with the experiments and allows the processing of a large number of DDRs, either for operation or cost classification.

Cleaner air on motorways thanks to matrix signs with artificial intelligence – Innovation Origins

When matrix signs start blinking above a motorway, the average commuter already knows what time it is: traffic jams. The adjusted speed limit does help to improve traffic flow, though, and this dynamic traffic flow management system has one more, indirect effect: fewer emissions due to improved traffic flow. That is why a trial is starting in Germany in which environmental data will be incorporated into traffic flow management.

The air quality in the vicinity of motorways could also be considerably improved in the Netherlands. This is why the government wants to tackle this problem through the National Air Quality Cooperation Program (NSL), for example by promoting electric vehicles or offering alternative means of transport.

According to German scientists, the incorporation of environmental data into traffic flow management can reduce noise and pollution. They are now going to research this in the U-SARAH live project, coordinated by the Karlsruhe Institute of Technology (KIT). The Ministry of Infrastructure and Water Management of the Netherlands is funding the project to the tune of almost €1.1 million.

"The aim of this study is to optimize and implement an environmental control system in an existing traffic route control system so as to reduce the environmental impact on the sections in question," explains Professor Peter Vortisch, head of the KIT Institute for Transport.

"A microscopic traffic flow model, developed over the course of a preliminary study with our partner Hessen Mobil, enables the effects of the newly developed environmental control system to be simulated. This makes it possible to optimize the control system in such a way that both traffic flow and environmental effects are taken into account."

"We want to test and evaluate the new control system under real conditions in a practical test," says Matthias Glatz, a project manager at Hessen Mobil. EDI GmbH, a spin-off of KIT, uses the extensive traffic data to model road users' reactions to dynamic speed limits by using artificial intelligence (AI). "On the basis of this data, we plan to develop an AI-based acceptance model and a prediction model as modules for guiding the SBA," says Dr. Thomas Freudenmann, one of the founders and managing director of EDI GmbH. The existing control system will be expanded with these modules.
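
To make the idea of an acceptance model concrete, here is a deliberately simple sketch: given the displayed speed limit and traffic conditions, predict the speed drivers actually adopt. The features, data, and model choice are illustrative assumptions, not the project's actual approach.

from sklearn.ensemble import GradientBoostingRegressor

# Columns: displayed limit (km/h), flow (veh/h/lane), share of heavy vehicles.
X = [[120, 900, 0.10], [100, 1400, 0.12], [80, 1900, 0.15], [60, 2200, 0.18]]
y = [118, 104, 88, 71]  # mean speed actually driven (km/h)

acceptance = GradientBoostingRegressor().fit(X, y)
print(acceptance.predict([[80, 1600, 0.12]]))  # expected driven speed at an 80 limit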

The simulation model developed in U-SARAH live can be used in the future both for quality management and for the optimization of route control systems. The results of the project will benefit not only the population, public authorities, and scientific institutes but also all manufacturers of traffic control systems. "Thanks to the AI-based approach, the traffic situation can be estimated a few minutes in advance so that traffic can be controlled even better. The simulation-based development facilitates the easy integration of emissions data into traffic control systems, without incurring high acquisition costs for measuring technology," explains Sebastian Buck of the KIT Institute for Transport.

By reducing emissions and optimizing the flow of traffic, the economic damage caused by traffic congestion and excess emissions can be reduced. An analysis platform developed within the project will help to examine the large data files from different angles across all steps. The platform will be made available to the public via the Ministry of Transport's data cloud.

How do we govern artificial intelligence and act ethically? – Open Access Government

The world has evolved rapidly in the last few years and artificial intelligence (AI) has often been leading the change. The technology has been adopted by almost every industry with companies wanting to explore how AI can automate processes, increase efficiency, and improve business operations.

AI has certainly proved how it can be beneficial to us all, but a common misconception is that it is always objective and avoids bias, opinion, and ideologies. Based on this understanding, there has been a rise in recent years in companies utilising AI-based recruiting platforms in a bid to make the hiring process more efficient and devoid of human bias.

Yet, a Financial Times article quoted an employment barrister who doubted the progressive nature of AI tools and said that there is "overwhelming evidence available that the machines are very often getting it wrong." A high-profile example is Amazon, which had to abandon its AI recruiting tool in 2018 after the company realised it was favouring men for technical jobs.

However, AI has continued to advance at a rapid pace, and its adoption by businesses has been further accelerated following COVID-19's arrival. With debates over whether AI can be relied upon to behave impartially still ongoing, how can the technology be governed so organisations continue to act ethically?

During a press conference in Brussels earlier this year, the European Commission said it was preparing to draft regulation for AI that will help prevent its misuse, but the governing body has set itself quite the challenge. The technology is developing constantly, so after only a few weeks any regulation that is introduced may not go far enough. After a few months, it could become completely irrelevant.

Within the risk community, however, there is no doubt that policies are needed, as a study found that 80% of risk professionals are not confident in the AI governance currently in place. At the same time, there are also concerns from technology leaders who believe tighter regulations will stifle AI innovation and obstruct the potentially enormous advantages it can have for the world.

A certain level of creative and scientific freedom is required for companies to create innovative new technologies and although AI can be used for good, the increasing speed with which it is being developed and adopted across industries is a major consideration for governance. The ethical concerns need to be addressed.

Given the current and ongoing complexities that the global pandemic brings, as well as the looming Brexit deadline, we will likely have to wait for the EU's regulation to be finalised and put in place. In the meantime, businesses should begin to get their own houses in order, if they haven't already, with their use of AI and governance, risk and compliance (GRC) processes to ensure they are not caught out when legislation does arrive.

By setting up a forward-looking risk management program around implementing and managing the use of AI, organisations can improve their ability to handle both existing and emerging risks by analysing past trends, predicting future scenarios, and proactively preparing for further risk. A governance framework should also be implemented around AI, both within and outside the organisation, to better overcome any unforeseen exposure to risk from evolving AI technologies and an ever-changing business landscape.

Unlike in the financial services sector, where internal controls and regulators require businesses to regularly validate and manage their own models, AI model controls are already being put in place voluntarily, reflecting the abundant usage of AI within enterprises. It won't be long before regulators begin to demand proof that the right controls are in place, so organisations need to monitor where AI is being used for business decisions and ensure the technology operates with accuracy and is void of inherent biases and incomplete underlying datasets.

When an organisation is operating with such governance and a forward-looking risk management program towards its use of AI, it will certainly be better positioned once new regulation is eventually enforced.

Too often, businesses are operating with multiple information siloes created by different business units and teams in various geographic locations. This can lead to information blind spots, and a recent Gartner study found that poor data quality is responsible for an average loss of $15 million per year.

Now more than ever, businesses need to be conscious of avoiding unnecessary fines as the figures can be crippling. Hence, it is important that these restrictive siloes are removed in favour of a centralised information hub that everyone across the business can access. This way, senior management and risk professionals are always aware of their risks, including any introduced by AI, and can be confident that they have a clear vision of the bigger picture to be able to efficiently respond to threats.

Another reason for moving towards centralisation and complete visibility throughout the business is that it often gets touted that the reason AI fails to act impartially is that AI systems learn to make decisions based on training data that humans provide. If this data is incomplete or contains conscious or unconscious bias or reflects historical and social inequalities, so will the AI technology.

While an organisation may not always be responsible for creating AI bias in the first place, by having good oversight and complete, centralised information to hand at any time, it becomes a lot easier to see where there are blind spots that could damage a company's reputation.

Ultimately, it is down to organisations themselves to manage their GRC processes, have a clear oversight of the entire risk landscape and strongly protect their reputation. One of the outcomes of the pandemic is the increased laser focus on ethics and integrity, so it is critical that organisations hold these values at the core of their business model to prevent scrutiny from regulators, stakeholders and consumers. Until adequate regulation is introduced by the EU, companies essentially need to take AI governance into their own hands to mitigate any risk and to always perform with integrity.

Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage – PRNewswire

To do this, Admiral Seguros is using an AI solution, developed by the technology company Tractable, which accurately evaluates vehicle damage with photos sent through a web application. The app, via the AI, completes the complex manual tasks that an advisor would normally perform and produces a damage assessment in seconds, often without the need for further review.

Upon receiving the assessment, Admiral Seguros will use it to make immediate payment offers to policyholders when appropriate, allowing them to resolve claims in minutes, even on the first call.

José María Pérez de Vargas, Head of Customer Management at Admiral Seguros, said: "Admiral Seguros continues to advance in digitalisation as a means to provide a better service to our policyholders, providing them with an easy, secure and transparent means of evaluating damages without the need for travel, achieving compensation in a few hours. It's a simple, innovative and efficient claims management process that our clients will surely appreciate."

Adrien Cohen, co-founder and president of Tractable, said: "By using our AI to offer immediate payments, Admiral Seguros will resolve many claims almost instantly, to the delight of its customers. This is central to our mission of using Artificial Intelligence to accelerate recovery, converting the process from weeks to minutes."

Tractable's AI uses deep learning for computer vision, in addition to machine learning techniques. The AI is trained with many millions of photographs of vehicle damage, and the algorithms learn from experience by analyzing a wide variety of different examples. Tractable's technology can be applied globally to any vehicle.

The AI enables insurers to assess car damage, shares recommended repair operations, and guides the claims management process to ensure these are processed and settled as quickly as possible.
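
As a generic illustration of the technique named here (supervised computer vision), not Tractable's actual system, the sketch below sets up a pretrained CNN for fine-tuning so it can grade photographed panel damage into severity classes. The classes and setup are hypothetical.

import torch.nn as nn
from torchvision import models

n_classes = 3  # hypothetical grades: scratch / dent / replace panel
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)
# Train with cross-entropy on labeled damage photos; at claim time the
# predicted class and its confidence would feed the repair estimate.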

According to Admiral Seguros, the application of this technology in the insurance sector will be a major step in digitization and will greatly improve the customer experience of Admiral's insurance brands in Spain, Qualitas Auto and Balumba.

About Tractable:

Tractable develops artificial intelligence for accident and disaster recovery. Its AI solutions have been deployed by leading insurers across Europe, North America and Asia to accelerate accident recovery for hundreds of thousands of households. Tractable is backed by $55m in venture capital and has offices in London, New York City and Tokyo.

About Admiral Seguros

In Spain, Admiral Group plc has been based in Seville since 2006, thanks to the creation of Admiral Seguros. More than 700 people work from there, serving the entire national territory and building and marketing its two commercial brands: Qualitas Auto and Balumba.

Recognized as the third-best company to work for in Spain, the sixth in Europe, and the eighteenth in the world by the consultancy Great Place to Work, Admiral Seguros is committed to a corporate culture focused on people.

SOURCE Tractable

https://tractable.ai

The North America artificial intelligence in healthcare diagnosis market is projected to reach from US$ 1,716.42 million in 2019 to US$ 32,009.61…

New York, Sept. 30, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "North America Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Regional Analysis by Diagnostic Tool ; Application ; End User ; Service ; and Country" - https://www.reportlinker.com/p05974389/?utm_source=GNW

The healthcare industry has always been a leader in innovation. The constant mutation of diseases and viruses makes it difficult to stay ahead of the curve.

However, with the help of artificial intelligence and machine learning algorithms, it continues to advance, creating new treatments and helping people live longer and healthier lives. A study published by The Lancet Digital Health compared the performance of deep learning, a form of artificial intelligence (AI), in detecting diseases from medical imaging versus that of healthcare professionals, using a sample of studies carried out between 2012 and 2019.

The study found that, in the past few years, AI has become more precise in identifying diagnoses in these images and has become a more feasible source of diagnostic information. With advancements in AI, deep learning may become even more efficient at identifying diagnoses in the coming years.

Moreover, it can help doctors with diagnoses and notify them when patients are weakening so that medical intervention can occur sooner, before the patient needs hospitalization. It can save costs for both hospitals and patients. Additionally, the precision of machine learning can detect diseases such as cancer quickly, thus saving lives.

In 2019, the medical imaging tool segment accounted for a larger share of the North America artificial intelligence in healthcare diagnosis market. Its growth is attributed to the increasing adoption of AI technology for the diagnosis of chronic conditions.

In 2019, the radiology segment held a considerable share of the North America artificial intelligence in healthcare diagnosis market by application. This segment is also predicted to dominate the market by 2027, owing to rising demand for AI-based applications in radiology.

A few major primary and secondary sources for the artificial intelligence in healthcare diagnosis market included the US Food and Drug Administration and the World Health Organization.

Read the full report: https://www.reportlinker.com/p05974389/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.
