Artificial Intelligence Can’t Deal With Chaos, But Teaching It Physics Could Help – ScienceAlert

While artificial intelligence systems continue to make huge strides forward, they're still not particularly good at dealing with chaos or unpredictability. Now researchers think they have found a way to fix this, by teaching AI about physics.

To be more specific, teaching them about the Hamiltonian function, which gives the AI information about the entirety of a dynamic system: all the energy contained within it, both kinetic and potential.

Neural networks, a type of AI designed to loosely mimic the human brain through layers of carefully weighted connections, then have a 'bigger picture' view of what's happening, and that could open up possibilities for getting AI to tackle harder and harder problems.

"The Hamiltonian is really the special sauce that gives neural networks the ability to learn order and chaos," says physicist John Lindner, from North Carolina State University.

"With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems."

The researchers compare the introduction of the Hamiltonian function to a swinging pendulum: it gives the AI information about how fast the pendulum is swinging and its path of travel, rather than just showing the AI a snapshot of the pendulum at one point in time.

If neural networks understand the Hamiltonian flow (in this analogy, where the pendulum is, where it might be going, and the energy it has), then they are better able to manage the introduction of chaos into order, the new study found.

Not only that, but they can also be built to be more efficient: better able to forecast dynamic, unpredictable outcomes without huge numbers of extra neural nodes. It helps AI to quickly get a more complete understanding of how the world actually works.
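That 'bigger picture' can be sketched without any machine learning at all: once a system's Hamiltonian is known, Hamilton's equations (dq/dt = dH/dp, dp/dt = -dH/dq) determine the entire flow from a single state. Here is a minimal, illustrative Python example (not the researchers' code; the spring constants and step size are arbitrary) using an ideal spring-mass system:

```python
# Hamiltonian (total energy) of an ideal spring-mass system:
# H(q, p) = p^2 / (2m) + k*q^2 / 2   (kinetic + potential)
m, k = 1.0, 1.0  # arbitrary illustrative constants

def hamiltonian(q, p):
    return p**2 / (2 * m) + k * q**2 / 2

# Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
# Knowing H at one state is enough to roll the whole trajectory forward.
def step(q, p, dt=1e-3):
    # Symplectic (semi-implicit) Euler: update p first, then q,
    # which keeps the total energy nearly constant over long runs.
    p = p - k * q * dt        # dp/dt = -dH/dq = -k*q
    q = q + (p / m) * dt      # dq/dt =  dH/dp = p/m
    return q, p

q, p = 1.0, 0.0               # released from rest at q = 1
E0 = hamiltonian(q, p)
for _ in range(10_000):       # integrate 10 time units
    q, p = step(q, p)
print(abs(hamiltonian(q, p) - E0))  # energy drift stays tiny
```

A conventional network trained only on snapshots has no such conservation law to lean on, which is the gap the Hamiltonian approach is meant to close.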

A representation of the Hamiltonian flow, with rainbow colours coding a fourth dimension. (North Carolina State University)

To test their newly improved AI neural network, the researchers put it up against a commonly used benchmark called the Hénon-Heiles model, initially created to model the motion of a star around a galactic centre.

The Hamiltonian neural network successfully passed the test, correctly predicting the dynamics of the system in states of order and of chaos.
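The Hénon-Heiles dynamics are easy to reproduce at home. Below is a minimal, illustrative Python sketch (not the researchers' code; the standard Hamiltonian H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3 is assumed, and the function names are hypothetical):

```python
# Hénon-Heiles Hamiltonian (standard form, unit mass):
# H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3

def hh_energy(x, y, px, py):
    return (px**2 + py**2) / 2 + (x**2 + y**2) / 2 + x**2 * y - y**3 / 3

def hh_step(x, y, px, py, dt=1e-3):
    # One velocity-Verlet step using the forces -dV/dx and -dV/dy
    fx, fy = -(x + 2 * x * y), -(y + x**2 - y**2)
    px, py = px + fx * dt / 2, py + fy * dt / 2   # half kick
    x, y = x + px * dt, y + py * dt               # drift
    fx, fy = -(x + 2 * x * y), -(y + x**2 - y**2)
    px, py = px + fx * dt / 2, py + fy * dt / 2   # half kick
    return x, y, px, py

# A low-energy initial condition gives ordered motion; trajectories
# become increasingly chaotic as the energy approaches 1/6.
state = (0.1, 0.1, 0.0, 0.0)
E0 = hh_energy(*state)
for _ in range(20_000):
    state = hh_step(*state)
print(E0, hh_energy(*state))  # the two energies agree closely
```

Raising the initial energy toward 1/6 and re-running the loop is a quick way to watch the same equations slide from order into chaos.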

This improved AI could be used in all kinds of areas, from diagnosing medical conditions to piloting autonomous drones.

We've already seen AI simulate space, diagnose medical problems, upgrade movies and develop new drugs, and the technology is, relatively speaking, just getting started; there's lots more on the way. These new findings should help with that.

"If chaos is a nonlinear 'super power', enabling deterministic dynamics to be practically unpredictable, then the Hamiltonian is a neural network 'secret sauce', a special ingredient that enables learning and forecasting order and chaos," write the researchers in their published paper.

The research has been published in Physical Review E.


Here Is How The United States Should Regulate Artificial Intelligence – Forbes

The U.S. Congress should create a federal agency for artificial intelligence. Photographer: Rich Clement/Bloomberg.

In 1906, in response to shocking reports about the disgusting conditions in U.S. meat-packing facilities, Congress created the Food and Drug Administration (FDA) to ensure safe and sanitary food production.

In 1934, in the wake of the worst stock market crash in U.S. history, Congress created the Securities and Exchange Commission (SEC) to regulate capital markets.

In 1970, as the nation became increasingly alarmed about the deterioration of the natural environment, Congress created the Environmental Protection Agency (EPA) to ensure cleaner skies and waters.

When an entire field begins to create a broad set of challenges for the public, demanding thoughtful regulation, a proven governmental approach is to create a federal agency focused specifically on engaging with and managing that field.

The time has come to create a federal agency for artificial intelligence.

Across the AI community, there is growing consensus that regulatory action of some sort is essential as AI's impact spreads. From deepfakes to facial recognition, from autonomous vehicles to algorithmic bias, AI presents a large and growing number of issues that the private sector alone cannot resolve.

In the words of Alphabet CEO Sundar Pichai: "There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to."

Yet there have been precious few concrete proposals as to what this should look like.

The best way to flexibly, thoroughly, and knowledgeably regulate artificial intelligence is through the creation of a dedicated federal agency.

Though many Americans do not realize it, the primary manner in which the federal government enacts public policy today is not Congress passing a law, nor the President issuing an executive order, nor a judge making a ruling in a court case. Instead, it is federal agencies like the FDA, SEC or EPA implementing rules and regulations.

Though barely contemplated by the framers of the U.S. Constitution, federal agencies (collectively referred to as the administrative state) have in recent decades come to assume a dominant role in the day-to-day functioning of the U.S. government.

There are good reasons for this. Federal agencies are staffed by thousands of policymakers and subject matter experts who focus full-time on the fields they are tasked with regulating. Agencies can move more quickly, get deeper into the weeds, and adjust their policies more flexibly than can Congress.

Imagine if, every time a pharmaceutical company sought government approval for a new drug, or every time a given air pollutant's parts-per-million concentration guidelines needed to be revised, Congress had to familiarize itself with all of the relevant technical details and then pass a law on the topic. Government would grind to a halt.

Like pharmaceutical drugs and environmental science, artificial intelligence is a deeply technical and rapidly evolving field. It demands a specialized, technocratic, detail-oriented regulatory approach. Congress cannot and should not be expected to respond directly with legislation whenever government action in AI is called for. The best way to ensure thoughtful, well-crafted AI policy is through the creation of a federal agency for AI.

How would such an agency work?

One important principle is that the agency should craft its rules on a narrow, sector-by-sector basis rather than as one-size-fits-all mandates. As R. David Edelman aptly argued, AI is a tool with various applications, not a thing in itself.

Rather than issuing overbroad regulations about, say, explainability or data privacy to which any application of AI must adhere, policymakers should identify concrete AI use cases that merit novel regulatory action and develop domain-specific rules to address them.

Stanford University's One Hundred Year Study on AI made this point well: "Attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn't any one thing), and the risks and considerations are very different in different domains. Instead, policymakers should recognize that to varying degrees and over time, various industries will need distinct, appropriate, regulations that touch on software built using AI or incorporating AI in some way."

This new federal agency would need to work closely with other agencies, as there will be extensive overlap between its mandate and the work of other regulatory bodies.

For instance, in crafting policies about the admissible uses of machine learning algorithms in criminal sentencing and parole decisions, the agency would collaborate closely with the Department of Justice, lending its subject matter expertise to ensure that the regulations are realistically designed.

Similarly, the agency might work in tandem with the Treasury Department and the CFPB to create rules about the proper use of AI in banks' loan underwriting decisions. Such cross-agency collaboration is the norm in Washington today.

There are numerous additional areas in which smart, well-designed AI policy is already needed: autonomous weapons, facial recognition, social media content curation, and adversarial attacks on neural networks, to name just a few.

As AI technology continues its breathtaking advance in the years ahead, it will create innumerable benefits and opportunities for us all. It will also generate a host of new challenges for society, many of which we cannot yet even imagine. A federal agency dedicated to artificial intelligence will best enable the U.S. to develop effective public policy for AI, protecting the public while positioning the nation to capitalize on what will be one of the most important forces of the twenty-first century.


Global Artificial Intelligence (AI) in Education Market Growth (Status and Outlook) 2020-2026 – Cole of Duty

A research report on the Global Artificial Intelligence (AI) in Education Market delivers a complete analysis of the market's size, trends, share, volume, and growth prospects. The report assesses the market growth rate and industry value based on driving factors, market dynamics, and other associated data, and integrates the latest trends, industry news, and opportunities. It also compiles competitor data for the market and covers the regions where the global Artificial Intelligence (AI) in Education industry has successfully gained a position. The report is prepared with detailed, verifiable projections and historical data about the Artificial Intelligence (AI) in Education market size.

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/4499322

Moreover, the report includes a full market analysis and supplier landscape built on PESTEL and SWOT analyses of the leading service providers. The projections offered have been derived using proven research assumptions and methodologies, covering each facet of the Artificial Intelligence (AI) in Education industry: technology, regional markets, applications, and types. The report draws on primary research interviews, surveys, and observations, as well as secondary research, and includes graphs, pie charts, and other illustrations showing the precise share of the different strategies implemented by the major providers in the global market. A separate analysis covers the foremost market trends, regulations and mandates, and micro- and macroeconomic indicators.

Top Players:

Google, IBM, Pearson, Microsoft, AWS, Nuance, Cognizant, Metacog, Quantum Adaptive Learning, Querium, Third Space Learning, Aleks, Blackboard, BridgeU, Carnegie Learning, Century, Cognii, DreamBox Learning, Elemental Path, Fishtree, Jellynote, Jenzabar, Knewton, Luilishuo

Browse the complete report @ https://www.orbisresearch.com/reports/index/global-artificial-intelligence-ai-in-education-market-size-status-and-forecast-2020-2026

The study forecasts the attractiveness of each major segment over the prediction period. It features a complete quantitative and qualitative evaluation based on data collected from market experts and industry participants across the value chain, and integrates market conditions around the globe such as pricing structure, product profit, demand, supply, production, capacity, and market growth structure. In addition, the study provides important data on return on investment, SWOT analysis, and investment feasibility.

Types:

Machine Learning and Deep Learning, Natural Language Processing

Applications:

Virtual Facilitators and Learning Environments, Intelligent Tutoring Systems, Content Delivery Systems, Fraud and Risk Management

In addition, the report outlines the business tactics that help Artificial Intelligence (AI) in Education market players compete while recognizing significant growth prospects, and presents market segmentation built from primary and secondary research techniques. It offers a complete analysis of the current trends expected to become some of the strongest market forces in the coming years, along with an extensive analysis of the restraints hampering market growth and a description of how each aspect influences the market.

Enquire before buying this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/4499322

About Us:

Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from the leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients map their needs, so we can produce the market research study they require.

Contact Us:


Artificial intelligence helping NASA design the new Artemis moon suit – SYFY WIRE

Last fall, NASA unveiled the new suits that Artemis astronauts will wear when they walk on the lunar surface for the first time since way back in 1972. The look of the A7LB pressure suit variants that accompanied those earlier astronauts to the Moon, and later to Skylab, has since gone on to signify for many the definitive, iconic symbol of humanity's most ambitiously realized space dreams.

With Artemis' 2024 launch target approaching, NASA's original Moon suit could soon be supplanted in the minds of a new generation of space dreamers by the xEMU, the first ground-up suit made for exploring the lunar landscape since Apollo 17's Eugene Cernan and Harrison Schmitt took humanity's last Moon walk (to date). Unlike those suits, the xEMU's design is getting an assist from a source of "brain" power that simply wasn't available back then: artificial intelligence.

Specifically, AI is reportedly crunching numbers behind the scenes to help engineer support components for the new, more versatile life support system that'll be equipped to the xEMU (Extravehicular Mobility Unit) suit. WIRED reports that NASA is using AI to assist the new suit's life support system in carrying out its more vital functions while streamlining its weight, component size, and tolerances for load-bearing pressure, temperature, and the other physical demands that a trip to the Moon (and back) imposes.

Recruiting AI isn't just about speed, though speed is definitely one of the perks for meeting NASA's ambitious 2024 timeline and all that lies beyond. "The machine's iterative process is 100 or 1,000 times more than we could do on our own, and it comes up with a solution that is ideally optimized within our constraints," Jesse Craft, a senior design engineer at a Texas-based contractor working on the upgraded version of the xEMU suit, told WIRED.

But in some instances, AI even raises the bar for quality, as Craft also noted. "We're using AI to inspire design," he explained. "We have biases for right angles, flat surfaces, and round dimensions, things you'd expect from human design. But AI challenges your biases and allows you to see new solutions you didn't see before."

So far, NASA is relying on AI only to design physical brackets and supports for the life support system itself; in other words, not the kind of stuff that might spell life or death in the event of failure. But that approach is already paying off by cutting mass without sacrificing strength, yielding component weight reductions of up to 50 percent, according to the report.

Even at 1/6 of the gravity that astronauts experience back on Earth, those small weight savings here and there can add up to make a big difference on the Moon. And even a slight slimming down can't hurt the xEMU's chances of perhaps becoming a new standard bearer in space fashion, as Artemis captivates a new generation with its sights set on the stars.


Artificial Intelligence (AI): 8 habits of successful teams – The Enterprisers Project

The adoption of artificial intelligence (AI) in the enterprise continues: More than half (58 percent) of respondents to McKinsey & Company's recent global AI survey say their organizations have embedded at least one AI capability into a process or product in at least one function or business unit, up from 47 percent in 2018. Those increases were reported across all industries. What's more, nearly a third (30 percent) are using AI in products or processes across multiple business units and functions, McKinsey's data says.

But, as the McKinsey research and others point out, some organizations are much further along in scaling their AI initiatives.

[ Do you understand the main types of AI? Read also: 5 artificial intelligence (AI) types, defined. ]

What are teams succeeding with AI doing that others can emulate to propel their efforts? Here are 8 habits to consider:

65 percent of AI high performers report having a clear data strategy, McKinsey data says.

The organizations that McKinsey identified as AI high performers were deliberate about their plans to scale AI and were more likely to have addressed key issues like business alignment and data. Nearly three quarters (72 percent) of respondents from AI high performers said their company's AI strategy aligns with their corporate strategy, compared with 29 percent of respondents from other companies. Similarly, 65 percent from the high performers report having a clear data strategy that supports and enables AI, compared with only 20 percent from other companies.

[ Get our quick-scan primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]

Being successful with AI programs requires that organizations create working teams with representation from multiple disciplines, says Seth Earley, CEO of Earley Information Science and author of The AI-Powered Enterprise.

The particular mix of skills required will vary based on the flavor of AI.

"Vodafone, for example, tried to build their AI capability by looking for cognitive engineers. The problem is that cognitive engineer is a new job role and there were none on the market," says Earley. "Instead, they built their own by assembling a team consisting of data scientists and programmers (obviously), but also linguists, information architects, user experience experts, and subject matter experts from the business."

"The particular mix of skills required will vary based on the flavor of AI. Predictive analytics would not likely require a linguist, for example," Earley notes.

Think of as many business use cases for an AI solution as possible.

Companies looking to implement AI-enabled solutions need to ensure they aren't being limited by their own creativity, says Dan Simion, vice president of AI and analytics at Capgemini. He advises AI teams to think of as many business use cases for a solution as possible. "While there may be examples of AI-enabled use cases that organizations have implemented previously, there are likely additional cases that have never been thought of. If aligned properly with unique business needs, they could immediately solve an organization's burning issues," Simion says.

Casting a wide net of use cases can determine how far the new AI-enabled solution might go and help the organization identify which use cases are going to offer the quickest payback. "If sequenced correctly, the initial use cases can bring immediate ROI, helping to self-fund future use cases within the program as it progresses," says Simion.

Successful AI projects model what users actually need and determine this through actual working sessions with users, observations, and process mapping, Earley explains. These need to be specific and testable.

"AI systems built based on generic use cases like personalizing the customer experience will not be testable unless they specify the details of the user, the scenario, and exactly what personalized content and a personalized experience looks like," says Earley.

Let's look at four more best practices:


Artificial Intelligence in Healthcare Market with COVID-19 Impact Analysis by Offering, Technology, End-Use Application, End-user and Region – Global…

Dublin, June 25, 2020 (GLOBE NEWSWIRE) -- The "Artificial Intelligence in Healthcare Market with Covid-19 Impact Analysis by Offering (Hardware, Software, Services), Technology (Machine Learning, NLP, Context-Aware Computing, Computer Vision), End-Use Application, End User and Region - Global Forecast to 2026" report has been added to ResearchAndMarkets.com's offering.

The AI in healthcare market is expected to be valued at USD 4.9 billion in 2020 and is likely to reach USD 45.2 billion by 2026; it is projected to grow at a CAGR of 44.9% during the forecast period.
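As a quick sanity check on the forecast arithmetic (assuming 2020 as the base year and six compounding years to 2026), the implied compound annual growth rate can be recomputed directly:

```python
# Compound annual growth rate implied by the forecast:
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 4.9, 45.2, 6   # USD billions, 2020 -> 2026
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 44.8%, consistent with the report's 44.9% given rounded endpoints
```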

The major factors driving the growth of the market are the increasing volume of healthcare data and growing complexity of datasets, the intensifying need to reduce towering healthcare costs, improving computing power and declining hardware costs, a growing number of cross-industry partnerships and collaborations, and a rising imbalance between the health workforce and patients that is driving the need for improved healthcare services.

Another major driving factor fueling the market growth currently is the adoption of this technology by multiple pharmaceutical and biotechnology companies across the world to expedite vaccine or drug development processes for COVID-19. The major restraint for the market is the reluctance among medical practitioners to adopt AI-based technologies and lack of a skilled workforce.

MPU processor segment expected to hold the largest share of the AI in healthcare market in 2020

An MPU contains all or most of the CPU functions and is the engine that goes into motion when the computer is on. A microprocessor is specially designed to perform arithmetic and logic operations that use small number-holding areas called registers. Typical microprocessor operations include adding, subtracting, comparing two numbers, and fetching numbers. These operations are the result of a set of instructions that are part of the microprocessor design.

AI in the healthcare market for machine learning projected to grow at the highest CAGR during the forecast period

Growing adoption of deep learning in various healthcare applications, especially in the areas of medical imaging, disease diagnostics, and drug discovery, and the use of different sensors and devices to track a patient's health status in real-time are supplementing the growth of the market.

Patient data & risk analysis segment to capture the largest share of AI in the healthcare market

The growth of the patient data & risk analysis segment is attributed to the increasing adoption of EMRs and the various advantages offered by AI systems to healthcare service providers, patients, pharmaceutical companies, and payers.

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary
3.1 Covid-19 Impact Analysis: AI in Healthcare Market
3.1.1 Pre-COVID-19 Scenario
3.1.2 Realistic Scenario
3.1.3 Optimistic Scenario
3.1.4 Pessimistic Scenario

4 Premium Insights
4.1 Attractive Opportunities in AI in the Healthcare Market
4.2 AI in Healthcare Market, by Offering
4.3 AI in Healthcare Market, by Technology
4.4 Europe: AI in Healthcare Market, by End-user and Country
4.5 AI in Healthcare Market, by Country

5 Market Overview
5.1 Introduction
5.2 Market Dynamics
5.2.1 Drivers
5.2.1.1 Influx of Large and Complex Healthcare Datasets
5.2.1.2 Growing Need to Reduce Healthcare Costs
5.2.1.3 Improving Computing Power and Declining Hardware Cost
5.2.1.4 Growing Number of Cross-Industry Partnerships and Collaborations
5.2.1.5 Rising Need for Improvised Healthcare Services Due to Imbalance Between Health Workforce and Patients
5.2.2 Restraints
5.2.2.1 Reluctance Among Medical Practitioners to Adopt AI-Based Technologies
5.2.2.2 Lack of Skilled AI Workforce and Ambiguous Regulatory Guidelines for Medical Software
5.2.3 Opportunities
5.2.3.1 Growing Potential of AI-Based Tools for Elderly Care
5.2.3.2 Increasing Focus on Developing Human-Aware AI Systems
5.2.3.3 Growing Potential of AI-Technology in Genomics, Drug Discovery, and Imaging & Diagnostics to Fight Covid-19
5.2.4 Challenges
5.2.4.1 Lack of Curated Healthcare Data
5.2.4.2 Concerns Regarding Data Privacy
5.2.4.3 Lack of Interoperability Between AI Solutions Offered by Different Vendors
5.3 Value Chain Analysis
5.4 Case Studies
5.4.1 Mayo Clinic's Center for Individualized Medicine Collaborated With Tempus to Personalize Cancer Treatment
5.4.2 Microsoft Collaborated With Cleveland Clinic to Identify Potential At-Risk Patients Under ICU Care
5.4.3 Nvidia and Massachusetts General Hospital Partnered to Use Artificial Intelligence for Advanced Radiology, Pathology, and Genomics
5.4.4 Microsoft Partnered With Weil Cornell Medicine to Develop AI-Powered Chatbot
5.4.5 Partners Healthcare and GE Healthcare Entered into 10-Year Collaboration for Integrating AI Across Continuum of Care
5.4.6 Ultronics, Zebra Medical Vision, Ai2 Incubator, and Fujifilm Sonosite Are Using AI Platform for Enhancing Medical Imaging Analysis
5.4.7 Numedii, 4Quant, and Desktop Genetics to Use AI for Research and Development
5.4.8 Nuance Launched Dragon Medical Virtual Assistant
5.4.9 GE Healthcare Launched Command Center for Emergency Rooms and Surgeries
5.4.10 AIserve Offers AI Wearable for Blind and Partially Sighted
5.5 Impact of Covid-19 on AI in Healthcare Market

6 Artificial Intelligence in Healthcare Market, by Offering
6.1 Introduction
6.2 Hardware
6.2.1 Processor
6.2.1.1 MPU
6.2.1.2 GPU
6.2.1.3 FPGA
6.2.1.4 ASIC
6.2.2 Memory
6.2.2.1 High-Bandwidth Memory is Being Developed and Deployed for AI Applications, Independent of Its Computing Architecture
6.2.3 Network
6.2.3.1 Nvidia (US) and Intel (US) Are Key Providers of Network Interconnect Adapters for AI Applications
6.3 Software
6.3.1 AI Solutions
6.3.1.1 On-Premises
6.3.1.1.1 Data-Sensitive Enterprises Prefer Advanced On-Premises NLP and ML Tools for Use in AI Solutions
6.3.1.2 Cloud
6.3.1.2.1 Cloud Provides Additional Flexibility for Business Operations and Real-Time Deployment Ease to Companies That Are Implementing Real-Time Analytics
6.3.2 AI Platform
6.3.2.1 Machine Learning Framework
6.3.2.1.1 Major Tech Companies Such as Google, IBM, and Microsoft Are Developing and Offering ML Frameworks
6.3.2.2 Application Program Interface (API)
6.3.2.2.1 APIs Are Used During Programming of Graphical User Interface (GUI) Components
6.4 Services
6.4.1 Deployment & Integration
6.4.1.1 Need for Deployment and Integration Services for AI Hardware and Software Solutions is Supplementing Growth of this Segment
6.4.2 Support & Maintenance
6.4.2.1 Maintenance Services Are Required to Keep the Performance of Systems at An Acceptable Standard

7 Artificial Intelligence in Healthcare Market, by Technology
7.1 Introduction
7.2 Machine Learning
7.2.1 Deep Learning
7.2.1.1 Deep Learning Enables Machines to Build Hierarchical Representations
7.2.2 Supervised Learning
7.2.2.1 Classification and Regression Are Major Segments of Supervised Learning
7.2.3 Reinforcement Learning
7.2.3.1 Reinforcement Learning Allows Systems and Software to Determine Ideal Behavior for Maximizing Performance of Systems
7.2.4 Unsupervised Learning
7.2.4.1 Unsupervised Learning Includes Clustering Methods Consisting of Algorithms With Unlabeled Training Data
7.2.5 Others
7.3 Natural Language Processing
7.3.1 NLP is Widely Used by Clinical and Research Community in Healthcare
7.4 Context-Aware Computing
7.4.1 Development of More Sophisticated Hard and Soft Sensors Has Accelerated Growth of Context-Aware Computing
7.5 Computer Vision
7.5.1 Computer Vision Technology Has Shown Significant Applications in Surgery and Therapy

8 Artificial Intelligence in Healthcare Market, by End-Use Application
8.1 Introduction
8.2 Patient Data and Risk Analysis
8.3 Inpatient Care & Hospital Management
8.4 Medical Imaging & Diagnostics
8.5 Lifestyle Management & Remote Patient Monitoring
8.6 Virtual Assistants
8.7 Drug Discovery
8.8 Research
8.9 Healthcare Assistance Robots
8.10 Precision Medicine
8.11 Emergency Room & Surgery
8.12 Wearables
8.13 Mental Health
8.14 Cybersecurity

9 Artificial Intelligence in Healthcare Market, by End-user
9.1 Introduction
9.2 Hospitals and Healthcare Providers
9.3 Patients
9.4 Pharmaceuticals & Biotechnology Companies
9.5 Healthcare Payers
9.6 Others

10 Artificial Intelligence in Healthcare Market, by Region
10.1 Introduction
10.2 North America
10.3 Europe
10.4 Asia-Pacific
10.5 Rest of the World

11 Competitive Landscape
11.1 Overview
11.2 Ranking of Players, 2019
11.3 Competitive Leadership Mapping
11.3.1 Visionary Leaders
11.3.2 Dynamic Differentiators
11.3.3 Innovators
11.3.4 Emerging Companies
11.4 Competitive Scenario
11.4.1 Product Developments and Launches
11.4.2 Collaborations, Partnerships, and Strategic Alliances
11.4.3 Acquisitions & Joint Ventures

12 Company Profiles
12.1 Key Players
12.1.1 Nvidia
12.1.2 Intel
12.1.3 IBM
12.1.4 Google
12.1.5 Microsoft
12.1.6 General Electric (GE) Company
12.1.7 Siemens Healthineers (A Strategic Unit of Siemens Group)
12.1.8 Medtronic
12.1.9 Micron Technology
12.1.10 Amazon Web Services (AWS)
12.2 Right to Win
12.3 Other Major Companies
12.3.1 Johnson & Johnson Services
12.3.2 Koninklijke Philips
12.3.3 General Vision
12.4 Company Profiles, by Application
12.4.1 Patient Data & Risk Analysis
12.4.1.1 Cloudmedx
12.4.1.2 Oncora Medical
12.4.1.3 Anju Life Sciences Software
12.4.1.4 Careskore
12.4.1.5 Linguamatics
12.4.2 Medical Imaging & Diagnostics
12.4.2.1 Enlitic
12.4.2.2 Lunit
12.4.2.3 Curemetrix
12.4.2.4 Qure.AI
12.4.2.5 Contextvision
12.4.2.6 Caption Health
12.4.2.7 Butterfly Networks
12.4.2.8 Imagia Cybernetics
12.4.3 Precision Medicine
12.4.3.1 Precision Health AI
12.4.3.2 Cota
12.4.3.3 FDNA
12.4.4 Drug Discovery
12.4.4.1 Recursion Pharmaceuticals
12.4.4.2 Atomwise
12.4.4.3 Deep Genomics
12.4.4.4 Cloud Pharmaceuticals
12.4.5 Lifestyle Management & Monitoring
12.4.5.1 Welltok
12.4.5.2 Vitagene
12.4.5.3 Lucina Health
12.4.6 Virtual Assistants
12.4.6.1 Next It (A Verint Systems Company)
12.4.6.2 Babylon
12.4.6.3 MDLIVE
12.4.7 Wearables
12.4.7.1 Magnea
12.4.7.2 Physiq
12.4.7.3 Cyrcadia Health
12.4.8 Emergency Room & Surgery
12.4.8.1 Caresyntax
12.4.8.2 Gauss Surgical
12.4.8.3 Perceive 3D
12.4.8.4 Maxq AI
12.4.9 Inpatient Care & Hospital Management
12.4.9.1 Qventus
12.4.9.2 Workfusion
12.4.10 Research
12.4.10.1 Icarbonx
12.4.10.2 Desktop Genetics
12.4.11 Cybersecurity
12.4.11.1 Darktrace
12.4.11.2 Cylance
12.4.11.3 LexisNexis Risk Solutions
12.4.11.4 Securonix
12.4.12 Mental Health
12.4.12.1 Ginger.Io
12.4.12.2 X2Ai
12.4.12.3 Biobeats
12.4.13 Healthcare Assistance Robots
12.4.13.1 Pillo
12.4.13.2 Catalia Health


For more information about this report visit https://www.researchandmarkets.com/r/hzronp

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

CONTACT: ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

Link:
Artificial Intelligence in Healthcare Market with COVID-19 Impact Analysis by Offering, Technology, End-Use Application, End-user and Region - Global...

73% of Retailers Believe Artificial Intelligence Can Add Significant Value to Their Demand Forecasting – Yahoo Finance

Study reveals clear variation in business performance for retailers who have implemented new analytics technologies

LLamasoft, the leading provider of AI-powered supply chain analytics software to 750 of the world's leading brands, today published the results of a global retail supply chain study which revealed that 73% of retailers believe AI and Machine Learning can add significant value to their demand forecasting processes, and over half say it will improve 8 other critical supply chain capabilities.

The research also found that while 56% of overperforming retailers, also known as retail winners*, use technology to model contingency plans for severe supply chain interruptions, a mere 31% of retailers who are not overperforming do the same. Overall, 56% of retailers surveyed are struggling to respond to rapid shifts, and this lack of flexibility has cost them during disruptions such as COVID-19, with many seeing a huge drop in revenue as a result.

In addition, 73% of retail winners have the foresight and ability to monitor capacity, which allows them to prepare for sudden shifts in demand and supply, compared to 35% of other or under-performing retailers. This is a clear indication that retail winners are outmaneuvering the competition by predicting and preparing for the future. However, without the ability to adapt to these sudden spikes and troughs with contingency planning, the forecasting would be of little use. Therefore, the two must be married together to produce a retail winner.

COVID-19 has further illustrated that, moving forward, retailers must rapidly adjust to the 'never normal' world we find ourselves in, and they must act to consistently enable faster responses to succeed. There will always be market variations and disruptions, meaning that retailers must be able to forecast for these changes and adapt quickly. Ultimately, this is nothing new. While COVID-19 has accelerated certain changes, such as the move to e-commerce, retail habits were already shifting and the need to adapt was a pressing concern.

The study also looked at what retail winners are doing to overachieve.

Rather than implementing newer AI and analytic technologies which enable organizations to better prepare for the future, underperforming retailers are struggling to move away from strategies designed to find the lowest-cost point of manufacture on a product-by-product basis. The contrast is clear: those who can prepare for the unexpected win, while those unable to adapt falter.

While some retailers are ahead in terms of their technical ability, the study shows the retail industry as a whole still has much room for improvement. For example, more than 50% of all retailers surveyed said their current systems were causing a big or somewhat of a problem in all 10 supply chain capabilities presented to them, yet 13% of retailers have not even planned to invest in technology.

"In the shadow of COVID-19, without a vaccine or successful treatment, shoppers will tire of hunkering down at home and start to visit stores as they re-open across the globe in phases, but in far different ways (and in far fewer numbers) than pre-outbreak. So, retailers are in a 'new never normal' environment," said Brian Kilcourse, Managing Partner, RSR. "With such unpredictability, the ability to be agile and model potential outcomes becomes even more important. Retailers need AI-enabled predictive models for things such as labor and transportation costs across the supply chain or finding optimal DC-to-customer locations to lower costs while still satisfying rapidly changing customer needs. AI isn't even the future anymore; it is already here."

With retail habits changing, a process accelerated by the impacts of COVID-19, the current winners in retail are prepared to overachieve once more. Shopping behaviors are rapidly shifting to e-commerce, a change which will undoubtedly contribute to fluctuations in demand and supply. Retailers with the technology to forecast these changes, model contingency plans and options, and quickly adapt their supply chain strategy to meet new demand and avoid excess supply will win. Those that cannot will risk being left behind with below-target sales figures and losses incurred from waste.


"Retailers and other businesses across the world should now embrace that we are in a never normal world. Being prepared for uncertainty, such as continued disruptions from COVID-19, Brexit, trade wars, new market entrants or changing customer preferences must be part of company core competencies," said Sandra Moran, Chief Marketing Officer of LLamasoft. "However, accepting this is not enough: retailers must act to prepare for the unprecedented. This research demonstrates there are clear performance variations between retail winners who have leveraged predictive technologies and enterprise decision platforms to deliver faster and smarter responses to disruptions and new opportunities, and those that have not."

Webinar

Join LLamasoft and RSR for a live webinar on June 30, 2020 at 11:00am EDT titled "The Case for an AI-Enabled Supply Chain," a deep dive into the study and its results.

Register here: https://llamasoft.com/retail-webinar-with-rsr/

Methodology

For this research, conducted online by Retail Systems Research (RSR) between February 2020 and March 2020, senior figures within the retail industry were targeted, with answers coming from 82 retail executives. Download and read the full report with RSR's recommendations: https://llamasoft.com/retail-benchmark-report/

*In RSR benchmark reports, RSR frequently cites the differences between over-performers in year-over-year comparable sales and their competitors. RSR finds that consistent sales performance is an outcome of a differentiating set of thought processes, strategies and tactics. They call comparable sales over-performers "Retail Winners." Assuming industry average comparable store/channel sales growth of 4.5 percent, RSR defines those with sales above this hurdle as "Winners," those at this sales growth rate as "average," and those below it as "laggards," "also-rans," or "all others."

About RSR Research

Retail Systems Research ("RSR") is the only research company run by retailers for the retail industry. RSR provides insight into business and technology challenges facing the extended retail industry and thought leadership and advice on navigating these challenges for specific companies and the industry at large. To learn more about RSR, visit http://www.rsrresearch.com.

About LLamasoft, Inc.

LLamasoft delivers the science behind supply chains' biggest decisions. Over 750 of the world's most innovative companies rely on LLamasoft to design operational strategies to achieve profitability and growth goals. Powered by AI and advanced analytics, LLamasoft's decision platform enables business leaders to solve problems in new ways and make smarter decisions faster as their business and operating models change. With a true digital twin of the extended supply chain, LLamasoft deploys decision solutions through enterprise-ready applications and an extensible no-code App Studio that enables LLamasoft or its customers to rapidly build their own business applications. Its customers have identified more than $16B in value leveraging insights from LLamasoft's solutions. And to reach its goal to positively impact 100 million lives by 2022, LLamasoft partners with humanitarian organizations, government entities and the World Economic Forum to design and optimize health supply chains.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200625005480/en/

Contacts

LLamasoft, Inc.
Lisa Hajra
lisa.hajra@llamasoft.com

Follow this link:
73% of Retailers Believe Artificial Intelligence Can Add Significant Value to Their Demand Forecasting - Yahoo Finance

A researcher from Salinas is using artificial intelligence to make college admissions more equitable. – Monterey County Weekly

A little over a week ago, AJ Alvero was thinking about Confederate monuments.

Specifically, he read that a monument honoring Confederate General Robert Selden Garnett was removed from the lawn in front of Colton Hall in Monterey. And then he read that the monument was replaced with a plaque that still named Garnett as the designer of the state seal of California, but left out his Confederate legacy. Not good enough, Alvero thought.

A few days later, someone tore out the new plaque and left a sign behind, saying "Celebrate real heroes. No place of honor for racists."

Alvero, a doctoral student at Stanford University, says he's not the one who did it. "I was very strongly toying with the idea but someone beat me to the punch," he says.

It wouldn't be the first time that Alvero acted to strip a Confederate name from a public space.

Growing up in Salinas, Alvero often crossed an intersection that was officially known as Confederate Corners. In the wake of the deadly neo-Nazi rally in Charlottesville, Virginia in 2017, he organized a community effort to change the name of the intersection, enlisting the support of the Monterey County Board of Supervisors. He wanted to call it Campesino Corners to honor the area's farmworkers. The board selected the name Springtown.

The intersection is unremarkable in appearance, and the fact that it has a name is not widely known, which is why Alvero thought that renaming it would be low-hanging fruit in the effort to undo American racism. But he was wrong, and he did not anticipate the backlash and vitriol against him.

Now, a few years removed, Alvero is still focused on the power of words and language in our public life. But he's leading a more sophisticated and systemic charge on bias. His target is college admissions and his instrument of change is artificial intelligence.

He recently published a groundbreaking peer-reviewed study that argues it's possible to combat bias in the admissions process by analyzing the language used in application essays to detect demographic patterns.

Days before the killing of George Floyd on May 25, which triggered a national reckoning on racism, the University of California took a giant step to address stark disparities in college admissions. By a 23-0 vote, the university system's governing board decided to phase out the use of the SAT and ACT in the admissions process because evidence shows that they drive inequity. A few weeks later, the board voted, unanimously again, to support the restoration of affirmative action in California, which had outlawed the practice in 1996 through Proposition 209.

For university admissions officers, these two decisions increased the focus on evaluating personal essays and circumstances of the hundreds of thousands of applicants they screen each year.

The U.S. Supreme Court, while not exactly endorsing affirmative action, has ruled that consideration of race in admissions is constitutional, as part of a highly individualized, holistic review of each applicant's file, giving serious consideration to all the ways an applicant might contribute to a diverse educational environment.

A recent and closely watched lawsuit against Harvard University challenged the use of race as a factor in admissions, claiming the university discriminates against Asian American students. Ultimately, a federal judge, Allison Burroughs, rejected the lawsuit.

In her decision, Burroughs wrote that Harvard's process of weighing test scores alongside subjective personal essays survives strict scrutiny.

But, she added, the process could be improved: admissions officers should receive training on recognizing implicit bias. Statistical analysis should be used to discover and counter race-related disparities.

The decision enshrined the continued consideration of race while also raising the bar on what admissions officers must do to achieve fairness. Here's where Alvero's research comes in.

Alvero, who studies education, sociology, language and data science, teamed up with other Stanford graduate students to explore how a more equitable future for college admissions might be achieved.

In other words, if SAT scores become obsolete, and personal essays become more central, how can the selection process be improved to survive new constitutional challenges?

Like most scholarly research, the starting point of Alvero's academic paper is data.

In this case, the data was 283,676 application essays submitted by 93,136 applicants who identified as Latino or Latina. In the first study of its kind, Alveros team used computational analysis to discover patterns across a mass corpus of essays.

By running the essays through relatively simple computer algorithms, the team found, they could correctly predict the gender and income level of an applicant about four out of five times.

In another fascinating finding, the paper showed which words are more likely to be used by different demographic groups: male versus female, and low-income versus high-income applicants. And the purpose of admissions essays, it turns out, was originally to keep certain students out.

In an interview with the Weekly, Alvero spoke about his findings and what they mean.

AJ Alvero's academic career started at San Diego State University but he soon dropped out. He came back to Salinas to lick his wounds and start over. Eventually, Alvero (left) made his way to Stanford University where he is a fourth-year Ph.D. student studying education and data science. (right) Alvero pushed the U.S. Geological Survey to change the name of this intersection in Salinas from Confederate Corners to Springtown.

Weekly: Artificial Intelligence and big data are complicated topics even for people who have grown up with technology. How would you start explaining your research to your grandparents?

Alvero: Lots of studies argue that standardized tests like the SAT and ACT are biased, by race, by social bias, by gender in certain ways, and that we shouldn't use them in college admissions.

So if that's the case, what about the admissions essay? We have this idea that the admissions essay gives students a chance to talk about their true selves. Yet so far, the essays haven't been placed under scrutiny, as the test scores have. That's where my research comes in.

Why would a language performance, like writing an essay, be less biased or more biased than a test score?

How did you get interested in answering that question?

I've always been interested in language and social issues. I read about the history of college admissions essays, and they were actually designed at Harvard [in the early 20th century] to filter out Jewish applicants. It's pretty incredible. The president of Harvard was from one of these old-money Boston families and he decided, "We have too many Jewish students on campus."

He realized that all of his old clientele, the very wealthy, white Protestant elites of New England, were not passing the entrance exam. So he created the personal statement, introduced extracurricular activities, introduced the letters of recommendation, all these subjective measures to give WASP elite students a chance. It worked. Jewish enrollment was cut in half.

That's the history. And I thought, well, these essays are still being used widely.

I also reflected on my time as a high school teacher in Miami helping students write these essays. Students from immigrant backgrounds tended to write about certain things. Students who worked with Teach for America teachers, they tended to write about certain things. So I noticed there was a lot of patterning in the types of narratives that students were deploying in their personal statements.

At Stanford, I got really interested in using text as big data in computational methods of analysis. There have been advances in computational methods to analyze texts.

But in part, it was me being at the right place at the right time.

What do you mean?

The idea of using written texts as a form of data has become very popular at Stanford; you see a lot of researchers in many different departments and fields leveraging text as data.

But the texts in your study are not Wikipedia articles. What you obtained was much more exclusive, even confidential: Nearly 300,000 admissions essays by self-identified Latino applicants. How rare is that and how did you get them?

I don't want to toot my own horn or anything, but it's extremely rare. So, to our knowledge in my lab, we're the first ones to use these computational methods on a large collection of admissions essays.

What I did was email a couple of admissions officers, and only one of them got back to me. I can only tell you which university off the record.

Deal. I won't reveal more than what it says in your study, that it was a large public university system. Did the data come with strings attached?

They did come with strings attached. My original pitch was, "What are the Latinx kids writing about?" There are lots of people under this umbrella, I figured I could get a lot of data, and it's a category and a group of people that I'm very interested in, partly because I'm part of that umbrella as a Cuban American.

So, I asked them, "Can I get admissions essays written by Latinx students?" And they said, "Sure, how many do you want?" I said, "I'll take all of them." Eventually, I learned they had an interest specifically in Latinx student essays in the hope of increasing Latinx enrollment.

So they wanted your expertise, meaning that others could have asked and gotten the essays. You're the one who went ahead and tried.

I think every university wants to get better at reading these essays. And no one wants to be subject to a lawsuit like the one Harvard faced. No one wants to face that kind of scrutiny.

You ran the essays through the algorithms and found that they were able to predict something really interesting. Can you tell me about that?

We found that even a relatively simple machine learning algorithm was able to predict the gender of the applicants about 80 percent of the time and whether or not they were above or below the median income, which was a proxy for higher or lower income, about 70-something percent of the time.
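The study's actual data and code aren't reproduced here, but the core idea that simple word-frequency features carry demographic signal can be sketched with a toy classifier. The essays, group labels, and smoothing below are invented for illustration; this is not the research team's algorithm.

```python
# Toy Naive-Bayes-style sketch: score an essay against per-group word
# frequencies learned from labeled training essays. All texts are invented.
from collections import Counter

def train(essays_by_group):
    """Count word frequencies per demographic group."""
    return {g: Counter(w for e in essays for w in e.lower().split())
            for g, essays in essays_by_group.items()}

def predict(counts, essay):
    """Pick the group whose (add-one smoothed) word rates best match the essay."""
    def score(group):
        total = sum(counts[group].values())
        return sum((counts[group][w] + 1) / (total + 1) for w in essay.lower().split())
    return max(counts, key=score)

training = {
    "higher_income": ["my summer abroad with the boy scouts",
                      "traveling to foreign countries shaped me"],
    "lower_income":  ["working shifts to support my family",
                      "my family and our community food bank"],
}
counts = train(training)
print(predict(counts, "what I learned traveling abroad"))  # leans higher_income
```

With a real corpus, the features and model would be far richer; the point is only that even naive word counts separate the groups far better than chance, which is consistent with the roughly 80 and 70 percent accuracies reported.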

Boy Scouts and foreign countries, that's basically what the higher-income applicants are writing about (see diagrams, p. 26). And with boys, some of the words most associated with them were hardware, chess, Lego and Rubik's Cube. If we were to survey every single college admissions essay reader in the country and ask, what do you think if a student wrote about chess? How would you describe that student? They might read chess, Rubik's Cube, hardware, Legos. And they might think, "Wow, this is a very intellectual person."

What the data is showing is that there are also words that boys are just using much more often than girls. Do our admissions readers realize this? Are they being trained to recognize this? On the flip side, there's makeup and cheerleading. Do people think makeup is also intellectual and very engaging? I don't think so.

I also found it fascinating that girls are talking about being girls, using words like Latina, daughter and female, but boys are not bringing up their gender.

Yes, and are our admissions officers being trained for this? Do they even know this? I think the answer is no. I'm hoping to connect this research to actual practice in college admissions.

An analysis of word frequencies across nearly 300,000 personal essays revealed which words were most characteristic of different demographic groups. The top left were the words favored by female applicants, the top right by male applicants. The bottom left were words used more frequently by higher-income applicants, the bottom right by lower-income applicants. (ELD and ETS are acronyms related to English language learners.) The size of each word reflects the frequency of use.
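As a rough, hypothetical sketch of the kind of comparison behind such a figure: for two groups of essays, rank words by how much more often one group uses them relative to the other. The example texts and the smoothing constant are invented, not the study's method.

```python
# Compare per-group word rates and rank the most group-characteristic words.
# Texts are invented illustrations, not essays from the study.
from collections import Counter

def relative_rates(texts):
    """Word frequency as a fraction of all words in the group."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def most_characteristic(group_a, group_b, top=3):
    """Words with the highest smoothed rate ratio of group A over group B."""
    ra, rb = relative_rates(group_a), relative_rates(group_b)
    ratio = {w: (ra.get(w, 0) + 1e-6) / (rb.get(w, 0) + 1e-6)
             for w in set(ra) | set(rb)}
    return sorted(ratio, key=ratio.get, reverse=True)[:top]

female = ["as a latina daughter I helped my mother",
          "being a female engineer matters to me"]
male = ["I built hardware and played chess",
        "solving my rubiks cube and chess openings"]
print(most_characteristic(male, female))
```

At corpus scale, more robust statistics (for example, weighted log-odds) are typical, but the frequency-ratio intuition is the same one the word clouds visualize.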

How would you do it?

It's very complicated and no one's really sure if and how it's going to work. But a common practice in college admissions right now is when an application reviewer is looking at test scores and GPAs from an applicant, they'll also have some contextualization.

For example, let's say a kid gets a pretty solid but not fantastic SAT score. How did everyone else at that school do? Maybe that kid didn't get a perfect score, but it's way better than everyone else at their school did. Then an admissions reviewer could take that into account. That would be the idea: trying to contextualize the essays. Because at the moment, we don't have anything close to that.

What if you took potential insight from the algorithm, and combined that with human insight? Maybe that'll be better. We have got to find out. So that's what I'm trying to be the person to find out.

How does this pursuit connect to the fact that you are Latino and were born and raised in Salinas?

I don't want to just straight-up talk smack about Salinas. But there was a lot of prejudice and there were a lot of biases and stereotypes and racism against Mexican people and Central American people.

The way it would work for me was almost like two-factor authentication, where you first type in your password, but then it needs a code from your phone. I'm fair-skinned and have light eyes. And a lot of people will look at me and be like, "Oh, yeah, no, you're not Mexican."

But then I start talking, or mention I'm Cuban. Ah, you pass the first password. But the second, no one's answering the call. Lots of my friends growing up never got past that first step of the password. But for me, I was able to move in and out of the crowd.

Seeing the treatment of Mexican Americans always bothered me so much. And I always hoped that I could be in a position where I can speak out on it and people would hear me. I'm hoping this research can be my first big way to do that.

See the article here:
A researcher from Salinas is using artificial intelligence to make college admissions more equitable. - Monterey County Weekly

Twitter is bringing in more in-stream ads using Artificial Intelligence – Digital Information World

Lately, people have noticed an increase in the number of ads in their Twitter feed (and even on users' profile pages). Twitter has slowly been increasing its ad load for some time now, but in recent weeks the increase has been larger than usual.

When asked about this, Twitter responded that its team regularly experiments with and makes changes to the advertising experience while upholding its principles and standards for a high-quality user experience, adding that it is constantly testing and innovating, and will keep doing so and learning from it.

This means that as Twitter tries new display methods, users are seeing more ads in their Twitter feed. One reason for this may be Twitter's need to address slowing ad spend as a result of COVID-19. In its Q1 report earlier this year, Twitter reported a slight decline in ad spend, noting that advertising income fell 27% year-over-year from March 11th until March 31st because of event cancellations around the world and shelter-in-place orders in the US.

Twitter is being affected by fewer businesses in operation because of the lockdowns, specifically from fewer sports and events taking place. In order to generate more revenue, it needs to increase the number of ads while simultaneously capitalizing on increased usage during the outbreak.

Twitter reported a 24% increase in usage in Q1, meaning the businesses still operating have more opportunity for ad exposure, and with increased usage, Twitter can capitalize on this. So users can expect to see more Promoted tweets in-stream, while advertisers have more opportunities to increase exposure to a Twitter audience that is more active than usual.

If usage levels remain stable, Twitter will likely maintain these ad increases moving forward as a means to maximize revenue potential. Additionally, Twitter has announced that it will show users more relevant content, based on their activity.

Twitter explains that in the app's default configuration, the user's home timeline will show the top Tweets first. The system uses a machine learning model to predict which tweets will interest them the most based on their activity. The model needs to learn this via training and must be constantly updated because user interests change all the time. Twitter also added that it has reduced the time required to refresh this model from 7 days to about a day by redesigning the data logging pipeline. This upgrade will keep the Twitter feed more up to date with rapidly changing interests.
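As an illustration only (this is not Twitter's actual model, features, or code), a timeline ranker of the kind described scores each candidate tweet with a predicted-engagement function and sorts the feed by that score:

```python
# Hypothetical timeline ranking sketch: score tweets by predicted engagement
# and show the highest-scoring ones first. Data and scoring are invented.
def predicted_engagement(tweet, user_interests):
    # Stand-in for a learned model: overlap between tweet topics
    # and the user's recent activity.
    return len(set(tweet["topics"]) & set(user_interests))

def rank_timeline(tweets, user_interests):
    return sorted(tweets,
                  key=lambda t: predicted_engagement(t, user_interests),
                  reverse=True)

tweets = [
    {"id": 1, "topics": ["sports"]},
    {"id": 2, "topics": ["ai", "tech"]},
    {"id": 3, "topics": ["cooking"]},
]
print([t["id"] for t in rank_timeline(tweets, ["ai", "tech"])])  # → [2, 1, 3]
```

In a production system the scoring function is a trained model retrained on fresh interaction data, which is why shortening the refresh cycle from a week to a day matters.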

Twitter says that the new model is much better than the old one in terms of both training and real-world prediction after a model refresh, adding that users, who come to Twitter for the most up-to-date information, favor fresher data and models.

Although there will be more ads in-stream, they will generally be more relevant, based on the new algorithms. Twitter does seem to surface more relevant information (barring a few tweets that slip through the filter) based on the topics selected in the Explore tab. It will still be interesting to see whether this changes with upcoming updates.

Photo: Omar Marques/SOPA Images/LightRocket via Getty Images


Read the rest here:
Twitter is bringing in more in-stream ads using Artificial Intelligence - Digital Information World

When will artificial intelligence come to the commercial greenhouse industry? – Urban Ag News

Watching news reports on the COVID-19 pandemic, one quickly realizes the importance accurate data plays in our everyday lives. Most industries are data-driven, whether this data relates to business management or specific production-related operations.

For the horticulture industry, data is an integral part of ensuring greenhouse facilities operate at maximum capacity. Unfortunately, growers have limited access to the data being collected in their greenhouses and are unable to utilize this data in a way that could help them increase operation efficiency and yields.

"The data being collected by greenhouse growers is being siloed, meaning the data is stored in different closed systems," said Ken Tran, founder of Koidra LLC. "These closed systems don't communicate with each other and growers do not have a way to unify the data for whatever purpose or whatever analysis they might want to do. This can be greenhouse environmental data, biological data or business management data."

For climate control data, it is not uncommon to have this type of data living in different systems as well. For example, growers can have climate control data such as temperature and relative humidity in one system. The data for lighting supplied by another company may be in a different system. There are many lighting companies that provide their own controls. Most companies that growers are familiar with don't want to expose the data that is being collected in a way that the systems can talk to each other.
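To make the silo problem concrete, here is a hypothetical sketch of the unification step growers currently lack: joining climate-control and lighting exports from two separate systems on a shared timestamp. The file contents and column names are invented for illustration.

```python
# Join two vendors' CSV exports (climate control and lighting) on timestamp.
# The data below is invented; real exports would come from each closed system.
import csv, io

climate_csv = "ts,temp_c\n2020-06-01T08:00,21.5\n2020-06-01T09:00,23.0\n"
lighting_csv = "ts,ppfd\n2020-06-01T08:00,180\n2020-06-01T09:00,240\n"

def rows(text):
    """Index CSV rows by their timestamp column."""
    return {r["ts"]: r for r in csv.DictReader(io.StringIO(text))}

climate, lighting = rows(climate_csv), rows(lighting_csv)

# One unified record per timestamp present in both systems.
unified = [{**climate[ts], **lighting[ts]} for ts in climate if ts in lighting]
print(unified[0])
```

Today growers often cannot even get to this step, because each vendor's database sits behind its own interface with no export beyond limited spreadsheet dumps.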

Limiting data analysis

Another critical problem with data being siloed is that even if the grower's data is in one system, growers may not be able to do data analysis. In most cases, the only way to access the data is to export it to Excel files, which is very limiting.

"Climate control data is collected automatically and put in a system," Tran said. "Depending on the type of climate control system that is used, data is collected in a database that is hidden under the interface of the climate control company. Growers are limited by what the climate control company interface will provide."

If growers want to use the data, the systems can only provide limited capability in terms of data analysis. Growers may be only able to look at the data from one seasons crop. But the climate control software will not allow growers to build predictive models from the data. The only way growers can build predictive models is to be able to access the database. Growers should be able to use their data however they want.

Giving growers access to their data

Tran said most growers are dealing with multiple databases depending on the type of data that is being collected.

"If a company's expertise is in climate control management, it makes sense that the company doesn't focus on biological management data or business management data," he said. "The best way to move forward is for these companies to open their data interfaces to the growers so that growers truly own their data. This would allow growers to access the databases so that they can hire third-party companies to do data integration."

Even though this is the best way for growers to access their data, it's not the only way. Koidra offers a data integration service as part of its umbrella autonomous greenhouse product to overcome this problem. It doesn't necessarily require the companies maintaining the data to open their data interfaces.

According to Tran, this situation is not unique to horticulture and is common in industries that have fallen behind in the technology curve. Some industries are more advanced when it comes to being tech savvy. Agriculture and some older manufacturing industries may have issues with the digital transformation curve.

Tran said many climate control companies see the trend toward artificial intelligence (AI) and they want to be able to expand their capabilities to the growers.

"The notion of data management and leveraging data analytics and machine learning is new," he said. "A few years ago these topics weren't even being considered by these companies. I haven't yet seen the need for data management. There hasn't been a demand from the growers to have access to this data. Even if they had access to this data, what would they do with it? Most growers don't have the capabilities to build their own predictive models."

Some growers would like to work with companies that can do the analytics. Only a very few well-funded indoor vertical farm companies have chosen to develop complete in-house systems so that they can have more control over their data. Many companies want to have more control over their data and would like to do more with their data.

Tran said growers can only truly own their data when:

1. They can store and transfer their data however they want.

2. They can query their data to get better insights however they want.

3. They can use whatever tools they want on their data.

All of these require a programmatic interface to the data storage systems, which is currently lacking.
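As a small sketch of what such a programmatic interface enables, assume climate readings land in an open SQLite database (the table name and readings are invented); a grower could then query for insights directly instead of exporting to Excel:

```python
# Hypothetical open data store: climate readings in SQLite, queried directly.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE climate (ts TEXT, zone TEXT, temp_c REAL)")
conn.executemany("INSERT INTO climate VALUES (?, ?, ?)", [
    ("2020-06-01T08:00", "A", 21.5),
    ("2020-06-01T12:00", "A", 26.0),
    ("2020-06-01T08:00", "B", 20.0),
])

# Example insight query: average temperature per greenhouse zone.
for zone, avg in conn.execute(
        "SELECT zone, AVG(temp_c) FROM climate GROUP BY zone"):
    print(zone, round(avg, 2))
```

The same open store could feed predictive models or third-party analytics tools, which is exactly what a closed vendor interface prevents.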

Building an autonomous greenhouse

The internet of things (IoT) is a network of interconnected devices embedded with sensors and software that enable them to collect and exchange data, making them responsive.

"IoT can be thought of as a system that enables automated, real-time and high-frequency data collection," Tran said. "One type of device is a temperature sensor. Using this sensor, there wouldn't be a need to have humans collecting and inputting data. The sensor is connected to a network and it can transfer the data to the grower's database automatically. It can communicate temperature data to growers or to their systems."
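The automated-collection pattern Tran describes can be sketched as follows; the sensor and the data store here are simulated stand-ins, not a real device API:

```python
# Simulated IoT pattern: a networked sensor pushes readings into the grower's
# store automatically, with no manual data entry. Values are randomly generated.
import random, time

def read_temperature():
    """Simulated temperature sensor (a real one would query hardware)."""
    return round(20 + random.uniform(-2, 2), 1)

def collect(store, samples=3, interval_s=0):
    """Append timestamped readings to the store at a fixed interval."""
    for _ in range(samples):
        store.append({"ts": time.time(), "temp_c": read_temperature()})
        time.sleep(interval_s)
    return store

readings = collect([])
print(len(readings), "readings collected automatically")
```

A real deployment would replace the list with a database write and run the loop continuously, which is what makes the collection high-frequency and low-cost compared with manual logging.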

Every business is connected to the internet. With the right data management infrastructure, growers should be able to get the right information at the right time from anywhere and on any device. Once full situational awareness of the business occurs, the business can effectively be managed remotely.

Manual data collection or no data collection at all is the opposite of IoT. Manual data collection is not done in real time, is done infrequently and is expensive to do.

"IoT is an enabler for high-speed, high-volume and low-cost data collection," Tran said. "This would allow growers to develop AI applications that leverage big data. AI capabilities can only be realized after the right information architecture (IA) is created. As the AI community tends to say, 'There is no AI without IA.' The fact that IoT is being adopted heavily in the greenhouse industry makes AI even more attractive."

Tran said what will drive the development of autonomous greenhouses is what greenhouse owners and operators want.

"They want higher profits and yields and lower operational costs," he said. During the first international Autonomous Greenhouse Challenge in the Netherlands, it was shown that an autonomous greenhouse program can produce higher yields with more efficient resource use than expert growers.

During the competition the winning Project Sonoma team, led by Tran, outperformed a team of expert Dutch growers. The Sonoma team produced more than 55 kilograms of cucumbers per square meter. The net profit on the cucumbers for the Sonoma team was 17 percent higher than for the team of Dutch growers.

But not every autonomous greenhouse is efficient.

"An autonomous greenhouse can be less efficient than a good grower," Tran said. "This was shown by the results of the Autonomous Greenhouse Challenge. The Sonoma team was the only one that outperformed the expert growers. All the other teams did worse than the growers."

All companies want their businesses to be more automated, more scalable and more efficient. This is where AI, built on rich IoT and crop management data, can help. A good AI program provides not only the value of automation, but higher efficiency as well.

Is the commercial greenhouse industry ready for AI? Tran thinks so.

"It's already happening, as demonstrated by the Autonomous Greenhouse Challenge," he said. "Innovative companies that offer both data integration and AI services can help make that a reality faster for greenhouse growers."

For more: Ken Tran, Koidra LLC, (512) 436-3250; ken@koidra.ai.

This article is property of Urban Ag News and was written by David Kuack, a freelance technical writer in Fort Worth, Texas.

See original here:
When will artificial intelligence come to the commercial greenhouse industry? - Urban Ag News