Artificial intelligence reveals hundreds of millions of trees in the Sahara – Newswise

If you think that the Sahara is covered only by golden dunes and scorched rocks, you aren't alone. Perhaps it's time to shelve that notion. In an area of West Africa 30 times larger than Denmark, an international team, led by University of Copenhagen and NASA researchers, has counted over 1.8 billion trees and shrubs. The 1.3 million km² area covers the western-most portion of the Sahara Desert, the Sahel and what are known as sub-humid zones of West Africa.

"We were very surprised to see that quite a few trees actually grow in the Sahara Desert, because up until now, most people thought that virtually none existed. We counted hundreds of millions of trees in the desert alone. Doing so wouldn't have been possible without this technology. Indeed, I think it marks the beginning of a new scientific era," asserts Assistant Professor Martin Brandt of the University of Copenhagen's Department of Geosciences and Natural Resource Management, lead author of the study'sscientific article, now published inNature.

The work was achieved through a combination of detailed satellite imagery provided by NASA and deep learning, an advanced artificial intelligence method. Normal satellite imagery is unable to identify individual trees; they remain literally invisible. Moreover, a limited interest in counting trees outside of forested areas led to the prevailing view that there were almost no trees in this particular region. This is the first time that trees across a large dryland region have been counted.

The role of trees in the global carbon budget

New knowledge about trees in dryland areas like this is important for several reasons, according to Martin Brandt. For example, they represent an unknown factor when it comes to the global carbon budget:

"Trees outside of forested areas are usually not included in climate models, and we know very little about their carbon stocks. They are basically a white spot on maps and an unknown component in the global carbon cycle," explains Martin Brandt.

Furthermore, the new study can contribute to better understanding the importance of trees for biodiversity and ecosystems and for the people living in these areas. In particular, enhanced knowledge about trees is also important for developing programmes that promote agroforestry, which plays a major environmental and socio-economic role in arid regions.

"Thus, we are also interested in using satellites to determine tree species, as tree types are significant in relation to their value to local populations who use wood resources as part of their livelihoods. Trees and their fruit are consumed by both livestock and humans, and when preserved in the fields, trees have a positive effect on crop yields because they improve the balance of water and nutrients," explains Professor Rasmus Fensholt of the Department of Geosciences and Natural Resource Management.

Technology with a high potential

The research was conducted in collaboration with the University of Copenhagen's Department of Computer Science, where researchers developed the deep learning algorithm that made the counting of trees over such a large area possible.

The researchers show the deep learning model what a tree looks like by feeding it thousands of images of various trees. Based upon the recognition of tree shapes, the model can then automatically identify and map trees over large areas and thousands of images. The model needs only hours to do what would take thousands of humans several years.
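To make this concrete, here is a minimal sketch of how such a supervised tree-mapping pipeline can be set up in PyTorch: a small encoder-decoder network learns a per-pixel tree-crown mask from hand-labeled satellite patches. The architecture, names, and shapes are illustrative assumptions, not the study's actual model.

```python
# Minimal sketch of a supervised tree-mapping pipeline: a small
# convolutional network sees image patches in which trees were outlined
# by hand and learns to predict a per-pixel "tree crown" mask.
import torch
import torch.nn as nn

class TinyTreeSegmenter(nn.Module):
    """A deliberately small encoder-decoder for per-pixel tree masks."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one output channel: tree / not tree
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, patches, masks):
    """One gradient step on a batch of (satellite patch, hand-drawn mask) pairs."""
    optimizer.zero_grad()
    logits = model(patches)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyTreeSegmenter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Dummy batch standing in for 256x256 satellite patches and their labels.
patches = torch.rand(4, 3, 256, 256)
masks = (torch.rand(4, 1, 256, 256) > 0.9).float()
print(train_step(model, optimizer, patches, masks))
```

Once trained, a network like this is run tile by tile across the imagery, and the predicted masks are post-processed into individual crown counts.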

"This technology has enormous potential when it comes to documenting changes on a global scale and ultimately, in contributing towards global climate goals. We are motivated to develop this type of beneficial artificial intelligence," says professor and co-author Christian Igel of the Department of Computer Science.

The next step is to expand the count to a much larger area in Africa. And in the longer term, the aim is to create a global database of all trees growing outside forest areas.


Artificial intelligence and the antitrust case against Google – VentureBeat

Following the launch of investigations last year, the U.S. Department of Justice (DOJ), together with attorneys general from 11 U.S. states, filed a lawsuit against Google on Tuesday alleging that the company maintains monopolies in online search and advertising, and violates laws prohibiting anticompetitive business practices.

It's the first antitrust lawsuit federal prosecutors have filed against a tech company since the Department of Justice brought charges against Microsoft in the 1990s.

"Back then, Google claimed Microsoft's practices were anticompetitive, and yet, now, Google deploys the same playbook to sustain its own monopolies," the complaint reads. "For the sake of American consumers, advertisers, and all companies now reliant on the internet economy, the time has come to stop Google's anticompetitive conduct and restore competition."

No attorneys general from Democratic states joined the suit. State attorneys general, Democrats and Republicans alike, plan to continue with their own investigations, signaling that more charges or backing from states might be on the way. Both the antitrust investigation completed by a congressional subcommittee earlier this month and the new DOJ lawsuit advocate breaking up tech companies as a potential solution.

The 64-page complaint characterizes Google as a monopoly gatekeeper for the internet and spells out the reasoning behind the lawsuit in detail, documenting the company's beginnings at Stanford University in the 1990s alongside deals made in the past decade with companies like Apple and Samsung to maintain Google's dominance. Also key to Google's power and plans for the future is access to personal data and artificial intelligence. In this story, we take a look at the myriad ways in which artificial intelligence plays a role in the antitrust case against Google.

The best place to begin when examining the role AI plays in Google's antitrust case is online search, which is powered by algorithms and automated web crawlers that scour webpages for information. Personalized search results made possible by the collection of personal data started in 2009, and today Google can search for images, videos, and even songs that people hum. Google dominates the $40 billion online search industry, and that dominance acts like a self-reinforcing cycle: more data leads to more training data for algorithms, defense against competition, and more effective advertising.

"General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms," the complaint reads. "The additional data from scale allows improved automated learning for algorithms to deliver more relevant results, particularly on fresh queries (queries seeking recent information), location-based queries (queries asking about something in the searcher's vicinity), and long-tail queries (queries used infrequently)."

Search is now primarily conducted on mobile devices like smartphones or tablets. To build monopolies in mobile search and create scale insurmountable to competitors, the complaint states, Google turned to exclusionary agreements with smartphone sellers like Apple and Samsung as well as revenue sharing with wireless carriers. The Apple-Google symbiosis is in fact so important that losing it is referred to as "code red" at Google, according to the DOJ filing. An unnamed senior Apple employee corresponding with their counterpart at Google said it's Apple's vision that the two companies operate as if "one company." Today, Google accounts for four out of five web searches in the United States and 95% of mobile searches. Last year, Google estimated that nearly half of all search traffic originated on Apple devices, while 15-20% of Apple's income came from Google.

Exclusive agreements that put Google apps on mobile devices effectively captured hundreds of millions of users. An antitrust report referenced these data advantages, stating that Google's anticompetitive conduct "effectively eliminates rivals' ability to build the scale necessary to compete."

In addition to the DOJ report, the antitrust report Congress released earlier this month frequently cites the network effect achieved by Big Tech companies as a significant barrier to entry for smaller businesses or startups. "The incumbents have access to large data sets that give them a big advantage, especially when combined with machine learning and AI," the report reads. "Companies with superior access to data can use that data to better target users or improve product quality, drawing more users and, in turn, generating more data, an advantageous feedback loop."

Network effects often come up in the congressional report in reference to mobile operating systems, public cloud providers, and AI assistants like Alexa and Google Assistant, which improve their machine learning models through the collection of data like voice recordings.

One potential solution the congressional investigation suggested is better data portability to help small businesses compete with tech giants.

One part of maintaining Google's search monopoly, according to the congressional report, is control of emerging search access points. While Google searches began on desktop computers, mobile is king today, and fast emerging are devices like smartwatches, smart speakers, and IoT devices with AI assistants like Alexa, Google Assistant, and Siri. Virtual assistants are using AI to turn speech into text and predict a user's intent, becoming a new battleground. An internal Google document declared that "voice will become the future of search."

The growth of searches via Amazon Echo devices is why a Morgan Stanley analyst previously suggested Google give everyone in the country a free speaker. In the end, he concluded, it would be cheaper for Google to give away hundreds of millions of speakers than to lose its edge to Amazon.

The scale afforded by Android and native Google apps also appears to be a key part of Google Assistant's ability to understand or translate dozens of languages and collect voice data across the globe.

Search is primarily done on mobile devices today. That's what drives the symbiotic relationship between Apple and Google, where Apple receives 20% of its total revenue from Google in exchange for making Google the de facto search engine on iOS phones, which still make up about 60% of the U.S. smartphone market.

The DOJ suit states that Google is concentrating on Google Nest IoT devices and smart speakers because internet searches will increasingly take place using voice orders. The company wants to control the next popular environment for search queries, the DOJ says, whether it be wearable devices like smartwatches or activity monitors from Fitbit, which Google announced plans to acquire roughly one year ago.

"Google recognizes that its hardware products also have HUGE defensive value in virtual assistant space AND combatting query erosion in core Search business. Looking ahead to the future of search, Google sees that Alexa and others may increasingly be a substitute for Search and browsers with additional sophistication and push into screen devices," the DOJ report reads. Google has also "harmed competition by raising rivals' costs and foreclosing them from effective distribution channels, such as distribution through voice assistant providers," preventing them from meaningfully challenging Google's monopoly in general search services.

In other words, only Google Assistant can get microphone access for a smartphone to respond to a wake word like "Hey, Google," a tactic the complaint says handicaps rivals.

AI like Google Assistant also features prominently in the antitrust report a Democrat-led antitrust subcommittee in Congress released, which refers to AI assistants as efforts to "lock consumers into information ecosystems." The easiest way to spot this lock-in is to consider that Google prioritizes YouTube, Apple wants you to use Apple Music, and Amazon wants users to subscribe to Amazon Prime Music.

The congressional report also documents the recent history of Big Tech companies acquiring startups. It alleges that in order to avoid competition from up-and-coming rivals, companies like Google have bought up startups in emerging fields like artificial intelligence and augmented reality.

If you expect a quick ruling by the DC Circuit Court in the antitrust lawsuit against Google, you'll be disappointed; that doesn't seem at all likely. Taking the 1970s case against IBM and the Microsoft suit in the 1990s as a guide, antitrust cases tend to take years. In fact, it's not outside the realm of possibility that this case could still be underway the next time voters pick a president in 2024.

What does seem clear from language used in both US v. Google and the congressional antitrust report is that both Democrats and Republicans are willing to consider separating company divisions in order to maintain competitive markets and a healthy digital economy. What's also clear is that both the Justice Department and antitrust lawmakers in Congress see action as necessary based in part on how Google treats personal data and artificial intelligence.


Global Artificial Intelligence of Things Markets 2020-2025: Focus on Technology & Solutions – AIoT Solutions Improve Operational Effectiveness and…

Dublin, Oct. 22, 2020 (GLOBE NEWSWIRE) -- The "Artificial Intelligence of Things: AIoT Market by Technology and Solutions 2020 - 2025" report has been added to ResearchAndMarkets.com's offering.

This AIoT market report provides an analysis of technologies, leading companies, and solutions, along with quantitative analysis including market sizing and forecasts for AIoT infrastructure, services, and specific solutions for the period 2020 through 2025. It also assesses the impact of 5G upon AIoT (and vice versa) as well as blockchain and specific solutions such as Data as a Service, Decisions as a Service, and the market for AIoT in smart cities.

Many industry verticals will be transformed through AI integration with enterprise, industrial, and consumer product and service ecosystems. AI is destined to become an integral component of business operations including supply chains, sales and marketing processes, product and service delivery, and support models.

We see AIoT evolving to become more commonplace as a standard feature from big analytics companies in terms of digital transformation for the connected enterprise. This will be realized in infrastructure, software, and SaaS managed service offerings. More specifically, we see 2020 as a key year for IoT data-as-a-service offerings to become AI-enabled decisions-as-a-service solutions, customized on a per-industry and per-company basis. Certain data-driven verticals such as the utility and energy services industries will lead the way.

As IoT networks proliferate throughout every major industry vertical, there will be an increasingly large amount of unstructured machine data. The growing amount of human-oriented and machine-generated data will drive substantial opportunities for AI support of unstructured data analytics solutions. Data generated from IoT supported systems will become extremely valuable, both for internal corporate needs as well as for many customer-facing functions such as product life-cycle management.

The use of AI in IoT and data analytics will be crucial for efficient and effective decision making, especially in the area of streaming data and real-time analytics associated with edge computing networks. Real-time data will be a key value proposition for all use cases, segments, and solutions. The ability to capture streaming data, determine valuable attributes, and make decisions in real time will add an entirely new dimension to service logic.

In many cases, the data itself, and the actionable information derived from it, will be the service. AIoT infrastructure and services will therefore be leveraged to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics, creating a foundation for IoT Data as a Service (IoTDaaS) and AI-based Decisions as a Service.

The fastest-growing 5G AIoT applications involve private networks. Accordingly, the 5G NR market for private wireless in industrial automation will reach $4B by 2025. Some of the largest market opportunities will be AIoT IoTDaaS solutions. We see machine learning in edge computing as the key to realizing the full potential of IoT analytics.


Key Topics Covered:

1.0 Executive Summary

2.0 Introduction
2.1 Defining AIoT
2.2 AI in IoT vs. AIoT
2.3 Artificial General Intelligence
2.4 IoT Network and Functional Structure
2.5 Ambient Intelligence and Smart Lifestyles
2.6 Economic and Social Impact
2.7 Enterprise Adoption and Investment
2.8 Market Drivers and Opportunities
2.9 Market Restraints and Challenges
2.10 AIoT Value Chain
2.10.1 Device Manufacturers
2.10.2 Equipment Manufacturers
2.10.3 Platform Providers
2.10.4 Software and Service Providers
2.10.5 User Communities

3.0 AIoT Technology and Market
3.1 AIoT Market
3.1.1 Equipment and Component
3.1.2 Cloud Equipment and Deployment
3.1.3 3D Sensing Technology
3.1.4 Software and Data Analytics
3.1.5 AIoT Platforms
3.1.6 Deployment and Services
3.2 AIoT Sub-Markets
3.2.1 Supporting Device and Connected Objects
3.2.2 IoT Data as a Service
3.2.3 AI Decisions as a Service
3.2.4 APIs and Interoperability
3.2.5 Smart Objects
3.2.6 Smart City Considerations
3.2.7 Industrial Transformation
3.2.8 Cognitive Computing and Computer Vision
3.2.9 Consumer Appliances
3.2.10 Domain Specific Network Considerations
3.2.11 3D Sensing Applications
3.2.12 Predictive 3D Design
3.3 AIoT Supporting Technologies
3.3.1 Cognitive Computing
3.3.2 Computer Vision
3.3.3 Machine Learning Capabilities and APIs
3.3.4 Neural Networks
3.3.5 Context-Aware Processing
3.4 AIoT Enabling Technologies and Solutions
3.4.1 Edge Computing
3.4.2 Blockchain Networks
3.4.3 Cloud Technologies
3.4.4 5G Technologies
3.4.5 Digital Twin Technology and Solutions
3.4.6 Smart Machines
3.4.7 Cloud Robotics
3.4.8 Predictive Analytics and Real-Time Processing
3.4.9 Post Event Processing
3.4.10 Haptic Technology

4.0 AIoT Applications Analysis
4.1 Device Accessibility and Security
4.2 Gesture Control and Facial Recognition
4.3 Home Automation
4.4 Wearable Device
4.5 Fleet Management
4.6 Intelligent Robots
4.7 Augmented Reality Market
4.8 Drone Traffic Monitoring
4.9 Real-time Public Safety
4.10 Yield Monitoring and Soil Monitoring Market
4.11 HCM Operation

5.0 Analysis of Important AIoT Companies
5.1 Sharp
5.2 SAS
5.3 DT42
5.4 China Tech Giants: Baidu, Alibaba, and Tencent
5.4.1 Baidu
5.4.2 Alibaba
5.4.3 Tencent
5.5 Xiaomi Technology
5.6 NVidia
5.7 Intel Corporation
5.8 Qualcomm
5.9 Innodisk
5.10 Gopher Protocol
5.11 Micron Technology
5.12 ShiftPixy
5.13 Uptake
5.14 C3 IoT
5.15 Alluvium
5.16 Arundo Analytics
5.17 Canvass Analytics
5.18 Falkonry
5.19 Interactor
5.20 Google
5.21 Cisco
5.22 IBM Corp.
5.23 Microsoft Corp.
5.24 Apple Inc.
5.25 Salesforce Inc.
5.26 Infineon Technologies AG
5.27 Amazon Inc.
5.28 AB Electrolux
5.29 ABB Ltd.
5.30 AIBrian Inc.
5.31 Analog Devices
5.32 ARM Limited
5.33 Atmel Corporation
5.34 Ayla Networks Inc.
5.35 Brighterion Inc.
5.36 Buddy
5.37 CloudMinds
5.38 Cumulocity GmBH
5.39 Cypress Semiconductor Corp
5.40 Digital Reasoning Systems Inc.
5.41 Echelon Corporation
5.42 Enea AB
5.43 Express Logic Inc.
5.44 Facebook Inc.
5.45 Fujitsu Ltd.
5.46 Gemalto N.V.
5.47 General Electric
5.48 General Vision Inc.
5.49 Graphcore
5.50 H2O.ai
5.51 Haier Group Corporation
5.52 Helium Systems
5.53 Hewlett Packard Enterprise
5.54 Huawei Technologies
5.55 Siemens AG
5.56 SK Telecom
5.57 SoftBank Robotics
5.58 SpaceX
5.59 SparkCognition
5.60 STMicroelectronics
5.61 Symantec Corporation
5.62 Tellmeplus
5.63 Tend.ai
5.64 Tesla
5.65 Texas Instruments
5.66 Thethings.io
5.67 Veros Systems
5.68 Whirlpool Corporation
5.69 Wind River Systems
5.70 Juniper Networks
5.71 Nokia Corporation
5.72 Oracle Corporation
5.73 PTC Corporation
5.74 Losant IoT
5.75 Robert Bosch GmbH
5.76 Pepper
5.77 Terminus
5.78 Tuya Smart

6.0 AIoT Market Analysis and Forecasts 2020 - 2025
6.1 Global AIoT Market Outlook and Forecasts
6.1.1 Aggregate AIoT Market 2020 - 2025
6.1.2 AIoT Market by Infrastructure and Services 2020 - 2025
6.1.3 AIoT Market by AI Technology 2020 - 2025
6.1.4 AIoT Market by Application 2020 - 2025
6.1.5 AIoT in Consumer, Enterprise, Industrial, and Government 2020 - 2025
6.1.6 AIoT Market in Cities, Suburbs, and Rural Areas 2020 - 2025
6.1.7 AIoT in Smart Cities 2020 - 2025
6.1.8 IoT Data as a Service Market 2020 - 2025
6.1.9 AI Decisions as a Service Market 2020 - 2025
6.1.10 Blockchain Support of AIoT 2020 - 2025
6.1.11 AIoT in 5G Networks 2020 - 2025
6.2 Regional AIoT Markets 2020 - 2025

7.0 Conclusions and Recommendations
7.1 Advertisers and Media Companies
7.2 Artificial Intelligence Providers
7.3 Automotive Companies
7.4 Broadband Infrastructure Providers
7.5 Communication Service Providers
7.6 Computing Companies
7.7 Data Analytics Providers
7.8 Immersive Technology (AR, VR, and MR) Providers
7.9 Networking Equipment Providers
7.10 Networking Security Providers
7.11 Semiconductor Companies
7.12 IoT Suppliers and Service Providers
7.13 Software Providers
7.14 Smart City System Integrators
7.15 Automation System Providers
7.16 Social Media Companies
7.17 Workplace Solution Providers
7.18 Enterprise and Government

For more information about this report visit https://www.researchandmarkets.com/r/aw2mh9

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.


The Military’s Mission: Artificial Intelligence in the Cockpit – The Cipher Brief

The Defense Advanced Research Projects Agency (DARPA) recently hosted the AlphaDogfight Trials, pitting artificial intelligence technology from eight different organizations against human pilots. In the end, the winning AI, made by Heron Systems, faced off against a human F-16 pilot in a simulated dogfight, with the AI system scoring a 5-0 victory against the human pilot.

The simulation was part of an effort to better understand how to integrate AI systems in piloted aircraft, in part to increase the lethality of the Air Force. The event also relaunched questions about the future of AI in aviation technology and how human pilots will remain relevant in an age of ongoing advancements in drone and artificial intelligence technology.


The Experts:

The Cipher Brief spoke with our experts, General Philip M. Breedlove (Ret.) and Tucker Cinco Hamilton, to get their take on the trials and the path ahead for AI in aviation.

General Philip M. Breedlove, Former Supreme Allied Commander, NATO & Command Pilot

Gen. Breedlove retired as NATO Supreme Allied Commander and is a command pilot with 3,500 flying hours, primarily in the F-16. He flew combat missions in Operation Joint Forge/Joint Guardian. Prior to his position as SACEUR, he served as Commander, U.S. Air Forces in Europe; Commander, U.S. Air Forces Africa; Commander, Air Component Command, Ramstein; and Director, Joint Air Power Competence Centre, Kalkar, Germany.

Lt. Col. Tucker Cinco Hamilton, Director, Dept. of the Air Force AI Accelerator at MIT

Cinco Hamilton is Director, Department of the Air Force-MIT Accelerator and previously served as Director of the F-35 Integrated Test Force at Edwards AFB, responsible for the developmental flight test of the F-35. He has logged over 2,100 hours as a test pilot in more than 30 types of aircraft.

How significant was this test between AI and human pilots?

Tucker Cinco Hamilton: It was significant along the same lines as when DeepMind Technologies' AlphaGo won the game Go against a grandmaster. It was an important moment that revealed technological capability, but it must be understood in the context of the demonstration. Equally, it did not prove that fighter pilots are no longer needed on the battlefield. What I hope people took away from the demonstration was that AI/ML technology is immensely capable and vitally important to understand and cultivate; that with an ethical and focused developmental approach we can bolster the human-machine interaction.

General Breedlove: Technology is moving fast, but in some cases, policy might not move so fast. For instance, technology exists now to put sensors on these aircraft that are better than the human eye. They can see better. They can see better in bad conditions. And especially when you start to layer a blend of visual, radar, and infrared sensing together, it is my belief that we can actually achieve a more reliable discerning capability than the human eye. I do not believe that our limitations are going to be on the ability of the machine to do what it needs to do. The real limitations are going to be on what we allow it to do in a policy format.

How will fighter pilots of the future think about data and technology in the cockpit?

General Breedlove: Some folks believe that we're never going to move forward with this technology because fighter pilots don't want to give up the control. I think for most young fighter pilots and for most of the really savvy older fighter pilots, that's not true. We want to be effective, efficient, lethal killing machines when our nation needs us to be. If we can get into an engagement where we can use these capabilities to make us more effective and more efficient killing machines, then I think you're going to see people, young people, and even people like me, absolutely embracing it.

Tucker Cinco Hamilton: I think the future fighter aircraft will be manned, yet linked into AI/ML powered autonomous systems that bolster the fighter pilot's battlefield safety, awareness, and capability. The future I see is one in which an operator is still fully engaged with battlefield decision making, yet being supported by technology through human-machine teaming.

As we develop and integrate AI/ML capability we must do so ethically. This is an imperative. Our warfighter and our society deserve transparent, ethically curated, and ethically executed algorithms. In addition, data must be transparently and ethically collected and used. Before AI/ML capability fully makes its way into combat applications, we need to have established a strong and thoughtful ethical foundation.

Looking Ahead:

General Breedlove: Humans are training machines to do things, and machines are executing what they've been trained to do, as opposed to actually making independent, non-human aided decisions. I do believe we're in a timeframe now where there may be a person in the loop in certain parts of the engagement, but we're probably not very far off from a point in time when the human says, "Yep, that's the target. Hit it." Or the human takes the aircraft to a point where only the bad element is in front of it, and the decision concerning collateral damage has already been made, and then the human turns it completely over. But to the high-end extreme of a "launch an airplane and then see what happens next" kind of scenario, I think we're still a long way away from that. I think there are going to be humans in the engagement loop for a long time.

Tucker Cinco Hamilton: Autonomous systems are here to stay, whether helping manage our engine operation or saving us from ground collision with the Automatic Ground Collision Avoidance System. As aircraft software continues to become more agile, these autonomous systems will play a part in currently fielded physical systems. This type of advancement is important and needed. However, AI/ML powered autonomous systems have limitations, and that's exactly where the operator comes in. We need to focus on creating capability that bolsters our fighter pilots, allowing them to best prosecute the attack, not remove them from the cockpit. Whether that is through keeping them safe, pinpointing and identifying the correct target, helping alert them of incoming threats, or garnering knowledge of the battlefield, it's all about human-machine teaming. That teaming is exactly what the recent DARPA demonstration was about, proving that an AI powered system can help in situations even as dynamic as dogfighting.

Cipher Brief Intern Ben McNally contributed research for this report.


Artificial intelligence anticipates how instruments are used during surgery – Innovation Origins

"In the operating theater of the future, computer-based assistance systems will make work processes simpler and safer and thereby play a much greater role than today. However, such support features are only possible if computers are able to anticipate important events in the operating room and provide the right information at the right time," explains Prof. Stefanie Speidel. She is head of the Department of Translational Surgical Oncology at the National Center for Tumor Diseases Dresden (NCT/UCC) in Germany.

Together with the Centre for Tactile Internet with Human-in-the-loop (CeTI) at TU Dresden, she has developed a method that uses artificial intelligence (AI) to enable computers to anticipate the usage of surgical instruments before they are used.

"This kind of system does not just provide an important basis for the use of autonomous robotic systems that could take over simple minor tasks in the operating theater, such as blood aspiration. It could also issue early warnings of complications if these are inherent to the use of a particular instrument. Furthermore, it would increase efficiency where preparing instruments is concerned. However, our vision is not to replace the surgeon with a robot or other assistants. The intelligent systems should merely act as a helping hand and lighten the load for both the doctor and the entire surgical team," says Prof. Jürgen Weitz, Managing Director at NCT/UCC and Director of the Clinic for Visceral, Thoracic and Vascular Surgery at the University Hospital Carl Gustav Carus in Dresden.

In order to teach computers how to anticipate the use of surgical instruments on a situation-specific basis a few minutes before they are actually put to use, scientists at NCT/UCC and CeTI used an artificial neural network that mimics the human ability to learn by example. Through continuous analysis of the video images of a surgical procedure, the system indicates the usage of certain instruments a few minutes before they are actually used. The scientists trained the neural network with 60 videos of gall bladder removal surgery, recorded by default with a laparoscope in the abdomen; five different instruments were highlighted in these videos.

Afterwards the neural network had to demonstrate its knowledge on 20 more videos without any markers. The scientists were able to verify that the system had made important advances in learning. Plus, in many cases it was able to correctly anticipate how the instruments would be used.
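As a rough illustration of this kind of anticipation model (not the authors' published architecture), the sketch below encodes each video frame, runs the features through a recurrent layer, and predicts for each of five instruments whether it will be used within a future time horizon. All module names and dimensions are hypothetical.

```python
# Minimal sketch of an instrument-anticipation model: a frame encoder
# feeds a recurrent network, which predicts for each of five instruments
# whether it will appear within the anticipation horizon. Toy-sized.
import torch
import torch.nn as nn

class InstrumentAnticipator(nn.Module):
    def __init__(self, num_instruments=5, feat_dim=64):
        super().__init__()
        # Tiny stand-in for a CNN backbone over laparoscopic frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_instruments)

    def forward(self, frames):                      # (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))  # (B*T, feat_dim)
        feats = feats.view(b, t, -1)
        out, _ = self.temporal(feats)               # one hidden state per frame
        return self.head(out)                       # (B, T, num_instruments) logits

model = InstrumentAnticipator()
clip = torch.rand(2, 8, 3, 128, 128)          # two clips of eight frames each
# Targets: 1 if the instrument is used within the anticipation horizon.
targets = torch.randint(0, 2, (2, 8, 5)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(clip), targets)
print(loss.item())
```

In the real setting the training signal comes for free from the recordings themselves: the labels are simply which instruments appear a few minutes later in each video.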

Compared to other methods, this neural network proved to be much more suitable for practical applications, which also means it is capable of solving complex tasks. Other methods treat the timing of a specific situation as a matter of routine, and the network just needs to decide between various possible situations. "In contrast, we have been able to show that an artificial neural network with specific adaptations and a suitably formulated mathematical function is capable of making sensible assessments about the type of instrument that should be selected and the time frame of its application with a minimum of coding effort," says Dominik Rivoir from the Department of Translational Surgical Oncology at NCT/UCC, first author of the study presented at the International Conference on Medical Image Computing & Computer-Assisted Intervention (MICCAI).

Next, the scientists want to refine the method and add more data sets to the neural network. One focus is on surgical videos that show more severe bleeding. Using this image data, the network should be able to learn even better when hemorrhages need to be aspirated with a special instrument. In the presented study, the researchers were already able to show that the network interpreted, for example, the appearance of a clamp for clamping a blood vessel as a characteristic with a high degree of accuracy. This way, it was able to anticipate the use of scissors soon afterward. In the future, this could serve as a basis for timing the use of robot-guided aspiration instruments or for anticipating complications.

The National Center for Tumor Diseases Dresden (NCT/UCC) is a joint venture of the German Cancer Research Center (DKFZ), the University Hospital Carl Gustav Carus Dresden, the Medical Faculty Carl Gustav Carus of the TU Dresden, and the Helmholtz Center Dresden-Rossendorf (HZDR).

Title image: Autonomous robotic systems and other intelligent assistance systems will provide enhanced support for the surgical team in the future. NCT/UCC/André Wirsig


Visual Artificial Intelligence on the Edge of Revolutionizing Retail – Loss Prevention Magazine

"We are laser-focused on continuous improvements to customers' experience across our stores. By leveraging Everseen's Visual AI and machine-learning technology, we're not only able to remove friction for the customer, but we can also remove controllable costs from the business and redirect those resources to improving the customer experience even more," said Mike Lamb, LPC, Kroger's VP of Asset Protection.

This post was inspired by a recent Kroger article announcing the deployment of visual artificial intelligence (AI) in 2,500 stores, and by new IHL Group edge computing research. Multiple technological trends have been converging for some time, and their combination is leading to transformative solutions that improve store operations.

By 2021, one billion video cameras will be deployed around the world. Endless possibilities in creating immersive consumer experiences emerge when artificial intelligence and machine learning are coupled with these visual data gathering devices.

COVID-19 has become a disruptive accelerator of digital transformation trends that were already underway. It takes 66 days, or approximately two months, to form a new permanent habit. New shopping journey habits have emerged during the pandemic that will require intensified analysis of millions of data inputs, both to protect transactions and to remove negative experience friction.

What are some of the leading visual AI or computer vision applications today? In retail, what's the return on investment (ROI)? What makes these technologies critical to the future of retail?

Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos and deep learning models, machines can accurately identify and classify objects, and then react to what they see.
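For readers who want to see what that looks like in code, here is a minimal computer-vision inference sketch using PyTorch and torchvision's pretrained ResNet-18. The model choice and the file name in the usage comment are illustrative assumptions, not anything specific to the retail systems described here.

```python
# Minimal sketch of computer-vision classification: a pretrained
# convolutional network assigns a label to an image. Illustrative only.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True).eval()

def classify(path):
    """Return the index of the most likely ImageNet class for an image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return logits.argmax(dim=1).item()

# Hypothetical usage: print(classify("shelf_camera_frame.jpg"))
```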

This visual AI technology delivers valuable insights that dramatically improve decision-making capabilities. My latest edition of the continuously updated Disruptive Future of Retail presentation includes a chart summarizing selected innovative applications.

In a pre-pandemic research report published by Fortune Business Insights, the retail AI market size was valued at $2,306.8 million in 2018 and will grow to $23,426.3 million by 2026. Computer vision and machine learning are key innovation drivers for this segment.

Fully expect the market size and the value of visual AI applications to increase because of COVID-19.

Data is exploding in the retail industry. Walmart, as one example, generates more than 1 million customer transactions every hour, feeding databases estimated at more than 2.5 petabytes, equivalent to 167 times the books in the US Library of Congress.

In all industries, Internet-of-Things (IoT) connected devices are adding substantial amounts of data to the mix. In 2020, machine-generated data will account for over 40 percent of internet data.

The major cloud providers have already concluded that workloads should be distributed to the appropriate edge where they run best. As the IHL Group points out in its latest research, edge computing is critical to retail's success in this decade. Example applications in its analysis are the very important new shopping journeys that have been accelerated by the pandemic, and it notes the margin challenges that arise when these solutions are not optimized.

Edge system architecture delivers substantial margin improvement benefits to these new retail services. Visual artificial intelligence in the cloud and at the edge, plus deployment of in-store sensors, will dramatically improve analysis and decision-making capabilities throughout the physical store.

The bottom line is that the retailers that survive and thrive in the next decade will be those that are able to apply artificial intelligence and machine learning to operational data at the store level. Yes, e-commerce is a key part of retail's growth, but the key advantage that retailers have over pure-play e-commerce competitors is their stores and proximity to the customer.

The last two years have been very rewarding in working with leading retailers and technology providers in driving the future of retail.

Having spent a substantial portion of my career in point-of-sale, it is still today one of the areas that I follow as it is often that last moment of truth in engaging the consumer for both positive and negative results.

Applying visual artificial intelligence at point-of-sale is already delivering substantial positive results. In deployments protecting over $400 billion in retailers revenue across 75,000-plus checkout lanes, the average sales uplift has been 0.5 percent and the margin increase a substantial 20 percent.

POS devices are only the beginning of what's possible in terms of measurable operational improvements. The future of retail includes digitally supported leadership branding coupled with hyper-personalized immersive consumer experiences across the entire store.

Visual AI and edge computing are critical technologies that will deliver frictionless commerce and optimize consumer journeys whose importance has dramatically increased because of COVID-19. We are on the edge of revolutionizing the future of retail.

For additional retail, technology, and leadership information, visit http://www.tonydonofrio.com.


Helios Visions Partners with Thornton Tomasetti’s T2D2 to Provide Artificial Intelligence-Powered Drone Solution for Facade Inspection – PRNewswire

CHICAGO, Oct. 20, 2020 /PRNewswire/ --Drone services company Helios Visions (https://www.heliosvisions.com) has joined forces with T2D2 (http://www.t2d2.ai), a software as a service (SaaS) platform that uses artificial intelligence (AI) to identify and assess damage and deterioration to building envelopes and structures to provide AI-powered drone facade inspection services.

Together, Helios Visions and T2D2 will provide a robust end-to-end solution for facade condition assessment. Using the latest in drone and AI technology, the program helps support critical inspections and significantly enhances visual inspections. It also makes it easier, faster, safer and less costly to inspect structures.

"The use of drones for high-rise building faade inspections is faster and can be as much as 50% cheaper than traditional methods, which require expensive scaffolding, drops and lifts," Helios Visions Co-founder Ted Parisot said. "WithT2D2, wecan streamline thefacade inspection process, and greatly improve planning and decision-making for building owners and property managers. More frequent assessment of building conditions can increase safety and decrease repair costs by spotting problems before they require expensive and invasive solutions."

T2D2, developed within Thornton Tomasetti's CORE studio incubator and commercialized through the firm's TTWiiN accelerator, uses data from Thornton Tomasetti's more than 50 years of building inspection and forensic investigation work as well as detailed drone imagery provided by Helios Visions.

"The detailed drone images provided by Helios Visions allow T2D2's artificial intelligence programs to quickly and accurately identify any issues that may exist in a building's facade. We are excited for the ongoing partnership between T2D2 and Helios Visions, which will enable the AI program to continuously learn as more drone photometry is fed into the system," said Thornton Tomasetti Director of CORE AI and T2D2 Founder and CEO Badri Hiriyur.

In late September, T2D2 was one of four winners in the New York City Department of Buildings' first-ever "Hack the Building Code" Innovation Challenge, which was created to highlight ideas on how to improve building safety and modernize the development process in New York City.

About Helios Visions

Helios Visions is a safety-oriented drone services company specializing in drone-based facade inspection, drone mapping, and drone photos and video, and recently became the first drone services company in Chicago to receive an FAA waiver to fly over people. Helios Visions is a member of the CompTIA Drone Advisory Council and is fully compliant with FAA drone regulations, with an extensive portfolio of successful client projects. Additional information: call +1 (312) 999-0071 or visit Helios Visions.

About T2D2

T2D2 is a self-learning, AI-based software-as-a-service platform that automatically detects visible damage in a variety of building materials. It expedites condition assessments, saving time and money, and allows for more frequent assessments to detect and repair damage before it escalates. T2D2's superpowered algorithms were trained on Thornton Tomasetti's massive multi-year forensics database. For more information, go to T2D2.ai or call +1 917-661-7800.

About Thornton Tomasetti

Thornton Tomasetti applies engineering and scientific principles to solve the world's challenges starting with yours. An independent organization of creative thinkers and innovative doers collaborating from offices worldwide, our mission is to bring our clients' ideas to life and, in the process, lay the groundwork for a better, more resilient future. For more information visit http://www.ThorntonTomasetti.com or connect with us on LinkedIn, Twitter, Instagram, Facebook, Vimeo or YouTube.

Media Contact: Ted Parisot, +1-312-999-0071, [emailprotected]

SOURCE Helios Visions Drone Services



FDA highlights the need to address bias in AI – Healthcare IT News

The U.S. Food and Drug Administration on Thursday convened a public meeting of its Patient Engagement Advisory Committee to discuss issues regarding artificial intelligence and machine learning in medical devices.

"Devices using AI and ML technology will transform healthcare delivery by increasing efficiency in key processes in the treatment of patients," said Dr. Paul Conway, PEAC chair and chair of policy and global affairs of the American Association of Kidney Patients.

As Conway and others noted during the panel, AI and ML systems may have algorithmic biases and lack transparency, potentially leading, in turn, to an undermining of patient trust in devices.

Medical device innovation has already ramped up in response to the COVID-19 crisis, with Center for Devices and Radiological Health Director Dr. Jeff Shuren noting that 562 medical devices have already been granted emergency use authorization by the FDA.

It's imperative, said Shuren, that patients' needs be considered as part of the creation process.

"We continue to encourage all members of the healthcare ecosystem to strive to understand patients' perspective and proactively incorporate them into medical device development, modification and evaluation," said Shuren. "Patients are truly the inspiration for all the work we do."

"Despite the global challenges with the COVID-19 public health emergency ... the patient's voice won't be stopped," Shuren added. "And if anything, there is even more reason for it to be heard."

However, said Pat Baird, regulatory head of global software standards at Philips, facilitating patient trust also means acknowledging the importance of robust and accurate data sets.

"To help support ourpatients, we need to become more familiar with them, their medical conditions, their environment, and their needs and wantsto be able to better understand the potentially confounding factors that drive some of the trends in the collected data," said Baird.

"An algorithm trained on one subset of the population might not be relevant for a different subset," Baird explained.

For instance, if a hospital needed a device that would serve its population of seniors at a Florida retirement community, an algorithm trained on recognizing the healthcare needs of teens in Maine would not be effective. Not every population will have the same needs.

"This bias in the data is not intentional, but can be hard to identify," he continued. He encouraged the development of a taxonomy of bias types that would be made publicly available.

Ultimately, he said, people won't use what they don't trust. "We need to use our collective intelligence to help produce better artificial intelligence for populations," he said.

Captain Terri Cornelison, chief medical officer and director for the health of women at CDRH, noted that demographic identifiers can be medically significant due to genetics and social determinants of health, among other factors.

"Science is showing us that these are not just categorical identifiers but actually clinically relevant," Cornelison said.

She pointed out that a clinical study that does not identify patients' sex may mask different results for people with different chromosomes.

"In many instances, AI and ML devices may be learning a worldview that is narrow in focus, particularly in the available training data, if the available training data do not represent a diverse set of patients," she said.

"More simply, AI and ML algorithms may not represent you if the data do not include you," she said.

"Advances in artificial intelligence are transforming our health systems and daily lives," Cornelison continued. "Yet despite these significant achievements, most ignore the sex, gender, age, race [and] ethnicity dimensions and their contributions to health and disease differences among individuals."

The committee also examined how informed consent might play a role in algorithmic training.

"If I give my consent to be treated by an AI/ML device, I have the right to know whether there were patients like me ... in the data set," said Bennet Dunlap, a health communications consultant. "I think the FDA should not be accepting or approving a medical device that does not have patient engagement" of the kind outlined in committee meetings, he continued.

"You need to know what your data is going to be used for," he reiterated. "I have white privilege. I can just assume old white guys are in [the data sets]. That's where everybody starts. But that should not be the case."

Dr. Monica Parker, assistant professor in neurology and education core member of the Goizueta Alzheimer's Disease Research Center at Emory University, pointed out that diversifying patient data requires turning to trusted entities within communities.

"If people are developing these devices, in the interest of being more broadly diverse, is there some question about where these things were tested?" She raised the issue of testing taking place in academic medical centers or technology centers on the East or West Coast, versus "real-world data collection from hospitals that may be using some variation of the device for disease process.

"Clinicians who are serving the population for which the device is needed" provide accountability and give the device developer a better sense of whothey're treating, Parker said. She also reminded fellow committee members that members of different demographic groups are not uniform.

Philip Rutherford, director of operations at Faces and Voices Recovery, pointed out that it's not just enough to prioritize diversity in data sets. The people in charge of training the algorithm must also not be homogenous.

"If we want diversity in our data, we have to seek diversity in the people that are collecting the data," said Rutherford.

The committee called on the FDA to take a strong role in addressing algorithmic bias in artificial intelligence and machine learning.

"At the end of the day, diversity validation and unconscious bias all these things can be addressed if there's strong leadership from the start," said Conway.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.


The Increasing Role of Artificial Intelligence in Health Care: Will Ro | IJGM – Dove Medical Press

Abdullah Shuaib,1 Husain Arian,1 Ali Shuaib2

1Department of General Surgery, Jahra Hospital, Jahra, Kuwait; 2Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait City, Kuwait

Dr Abdullah Shuaib passed away on July 21, 2020

Correspondence: Ali Shuaib, Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait City, Kuwait. Tel +965 24636786. Email ali.shuaib@ku.edu.kw

Abstract: Artificial intelligence (AI) pertains to the ability of computers or computer-controlled machines to perform activities that demand the cognitive function and performance level of the human brain. The use of AI in medicine and health care is growing rapidly, significantly impacting areas such as medical diagnostics, drug development, treatment personalization, supportive health services, genomics, and public health management. AI offers several advantages; however, its rampant rise in health care also raises concerns regarding legal liability, ethics, and data privacy. Technological singularity (TS) is a hypothetical future point in time when AI will surpass human intelligence. If it occurs, TS in health care would imply the replacement of human medical practitioners with AI-guided robots and peripheral systems. Considering the pace at which technological advances are taking place in the arena of AI, and the pace at which AI is being integrated with health care systems, it is not unreasonable to believe that TS in health care might occur in the near future and that AI-enabled services will profoundly augment the capabilities of doctors, if not completely replace them. There is a need to understand the associated challenges so that we may better prepare the health care system and society to embrace such a change if it happens.

Keywords: artificial intelligence, technological singularity, health care system



The Next Generation Of Artificial Intelligence – Forbes

AI legend Yann LeCun, one of the godfathers of deep learning, sees self-supervised learning as the key to AI's future.

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field, and society, in the years ahead. Study up now.

The dominant paradigm in the world of AI today is supervised learning. In supervised learning, AI models learn from datasets that humans have curated and labeled according to predefined categories. (The term supervised learning comes from the fact that human supervisors prepare the data in advance.)
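A toy example makes the paradigm concrete. In the sketch below (illustrative data and labels, using scikit-learn), every training example carries a human-assigned label, and the model learns only the categories those labels define.

```python
# Minimal sketch of supervised learning: every training example carries a
# human-provided label, and the model maps inputs to those categories.
from sklearn.linear_model import LogisticRegression

# Human-curated dataset: feature vectors plus hand-assigned labels.
features = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
labels   = ["cat",      "dog",      "cat",      "dog"]  # the "supervision"

model = LogisticRegression().fit(features, labels)
print(model.predict([[0.15, 0.85]]))  # -> ['cat']
```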

While supervised learning has driven remarkable progress in AI over the past decade, from autonomous vehicles to voice assistants, it has serious limitations.

The process of manually labeling thousands or millions of data points can be enormously expensive and cumbersome. The fact that humans must label data by hand before machine learning models can ingest it has become a major bottleneck in AI.

At a deeper level, supervised learning represents a narrow and circumscribed form of learning. Rather than being able to explore and absorb all the latent information, relationships and implications in a given dataset, supervised algorithms orient only to the concepts and categories that researchers have identified ahead of time.

In contrast, unsupervised learning is an approach to AI in which algorithms learn from data without human-provided labels or guidance.

Many AI leaders see unsupervised learning as the next great frontier in artificial intelligence. In the words of AI legend Yann LeCun: "The next AI revolution will not be supervised." UC Berkeley professor Jitendra Malik put it even more colorfully: "Labels are the opium of the machine learning researcher."

How does unsupervised learning work? In a nutshell, the system learns about some parts of the world based on other parts of the world. By observing the behavior of, patterns among, and relationships between entities, for example, words in a text or people in a video, the system bootstraps an overall understanding of its environment. Some researchers sum this up with the phrase "predicting everything from everything else."

Unsupervised learning more closely mirrors the way that humans learn about the world: through open-ended exploration and inference, without a need for the training wheels of supervised learning. One of its fundamental advantages is that there will always be far more unlabeled data than labeled data in the world (and the former is much easier to come by).

In the words of LeCun, who prefers the closely related term "self-supervised learning": "In self-supervised learning, a portion of the input is used as a supervisory signal to predict the remaining portion of the input.... More knowledge about the structure of the world can be learned through self-supervised learning than from [other AI paradigms], because the data is unlimited and the amount of feedback provided by each example is huge."
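The sketch below illustrates that masked-prediction idea in a deliberately tiny PyTorch example: one token in each sequence is hidden, and the model is trained to recover it from the rest, so the data supplies its own labels. This is a toy stand-in, not the Transformer architectures discussed below.

```python
# Minimal sketch of self-supervised masked prediction: hide part of the
# input and train the model to reconstruct it from the remainder.
import torch
import torch.nn as nn

vocab_size, dim = 100, 32
embed = nn.Embedding(vocab_size, dim)
predictor = nn.Linear(dim, vocab_size)
optimizer = torch.optim.Adam(list(embed.parameters()) + list(predictor.parameters()))

MASK = 0  # reserved token id standing in for the hidden position

tokens = torch.randint(1, vocab_size, (8, 10))   # batch of token sequences
masked = tokens.clone()
positions = torch.randint(0, 10, (8,))           # one masked slot per sequence
masked[torch.arange(8), positions] = MASK

# Predict the original token at each masked position from a crude
# bag-of-words summary of the rest of the sequence.
context = embed(masked).mean(dim=1)              # (8, dim)
logits = predictor(context)                      # (8, vocab_size)
loss = nn.functional.cross_entropy(logits, tokens[torch.arange(8), positions])
loss.backward()
optimizer.step()
print(loss.item())
```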

Unsupervised learning is already having a transformative impact in natural language processing. NLP has seen incredible progress recently thanks to a new unsupervised learning architecture known as the Transformer, which originated at Google about three years ago. (See #3 below for more on Transformers.)

Efforts to apply unsupervised learning to other areas of AI remain at earlier stages, but rapid progress is being made. To take one example, a startup named Helm.ai is seeking to use unsupervised learning to leapfrog the leaders in the autonomous vehicle industry.

Many researchers see unsupervised learning as the key to developing human-level AI. According to LeCun, mastering unsupervised learning is "the greatest challenge in ML and AI of the next few years."

One of the overarching challenges of the digital era is data privacy. Because data is the lifeblood of modern artificial intelligence, data privacy issues play a significant (and often limiting) role in AI's trajectory.

Privacy-preserving artificial intelligence (methods that enable AI models to learn from datasets without compromising their privacy) is thus becoming an increasingly important pursuit. Perhaps the most promising approach to privacy-preserving AI is federated learning.

The concept of federated learning was first formulated by researchers at Google in early 2017. Over the past year, interest in the field has exploded: more than 1,000 research papers on federated learning were published in the first six months of 2020, compared to just 180 in all of 2018.

The standard approach to building machine learning models today is to gather all the training data in one place, often in the cloud, and then to train the model on the data. But this approach is not practicable for much of the world's data, which for privacy and security reasons cannot be moved to a central data repository. This makes it off-limits to traditional AI techniques.

Federated learning solves this problem by flipping the conventional approach to AI on its head.

Rather than requiring one unified dataset to train a model, federated learning leaves the data where it is, distributed across numerous devices and servers on the edge. Instead, many versions of the model are sent out (one to each device with training data) and trained locally on each subset of data. The resulting model parameters, but not the training data itself, are then sent back to the cloud. When all these mini-models are aggregated, the result is one overall model that functions as if it had been trained on the entire dataset at once.
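The sketch below shows a single round of this process in miniature, assuming a toy linear model and equal-sized datasets (a simplified, FedAvg-style illustration, not the production protocol): each simulated device trains locally, and only its parameters are sent back for averaging.

```python
# Toy sketch of federated averaging (in the spirit of FedAvg), under
# simplifying assumptions: one round, equal-sized datasets, a linear model.
# Only parameter vectors ever leave a "device"; the raw data stays put.
import numpy as np

def local_train(X, y, lr=0.1, steps=200):
    """Gradient descent on one device's private data; returns parameters only."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

devices = []                                 # three private datasets
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    devices.append((X, y))

local_weights = [local_train(X, y) for X, y in devices]
global_w = np.mean(local_weights, axis=0)    # server averages parameters only
print(global_w)                              # close to [2.0, -1.0]
```

Real deployments repeat this broadcast, train, and aggregate loop over many rounds and weight each update by the size of the device's dataset, but the privacy property is the same: the raw data never leaves the device.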

The original federated learning use case was to train AI models on personal data distributed across billions of mobile devices. As those researchers summarized: "Modern mobile devices have access to a wealth of data suitable for machine learning models.... However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center.... We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates."

More recently, healthcare has emerged as a particularly promising field for the application of federated learning.

It is easy to see why. On one hand, there are an enormous number of valuable AI use cases in healthcare. On the other hand, healthcare data, especially patients' personally identifiable information, is extremely sensitive; a thicket of regulations like HIPAA restricts its use and movement. Federated learning could enable researchers to develop life-saving healthcare AI tools without ever moving sensitive health records from their source or exposing them to privacy breaches.

A host of startups has emerged to pursue federated learning in healthcare. The most established is Paris-based Owkin; earlier-stage players include Lynx.MD, Ferrum Health and Secure AI Labs.

Beyond healthcare, federated learning may one day play a central role in the development of any AI application that involves sensitive data: from financial services to autonomous vehicles, from government use cases to consumer products of all kinds. Paired with other privacy-preserving techniques like differential privacy and homomorphic encryption, federated learning may provide the key to unlocking AI's vast potential while mitigating the thorny challenge of data privacy.

The wave of data privacy legislation being enacted worldwide today (starting with GDPR and CCPA, with many similar laws coming soon) will only accelerate the need for these privacy-preserving techniques. Expect federated learning to become an important part of the AI technology stack in the years ahead.

We have entered a golden era for natural language processing.

OpenAI's release of GPT-3, the most powerful language model ever built, captivated the technology world this summer. It has set a new standard in NLP: it can write impressive poetry, generate functioning code, compose thoughtful business memos, write articles about itself, and so much more.

GPT-3 is just the latest (and largest) in a string of similarly architected NLP models (Google's BERT, OpenAI's GPT-2, Facebook's RoBERTa and others) that are redefining what is possible in NLP.

The key technology breakthrough underlying this revolution in language AI is the Transformer.

Transformers were introduced in a landmark 2017 research paper. Previously, state-of-the-art NLP methods had all been based on recurrent neural networks (e.g., LSTMs). By definition, recurrent neural networks process data sequentially; that is, one word at a time, in the order that the words appear.

The Transformer's great innovation is to make language processing parallelized: all the tokens in a given body of text are analyzed at the same time rather than in sequence. In order to support this parallelization, Transformers rely heavily on an AI mechanism known as attention. Attention enables a model to consider the relationships between words regardless of how far apart they are and to determine which words and phrases in a passage are most important to pay attention to.
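A minimal sketch of that mechanism (scaled dot-product attention, as described in the 2017 paper, with illustrative shapes and random values) shows the parallelism directly: every token's relevance to every other token is computed in a single matrix operation, with no sequential loop over word positions.

```python
# Minimal sketch of scaled dot-product attention; shapes and values are
# illustrative. All tokens attend to all other tokens in parallel.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                       # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # e.g., 4 tokens, 8-dim embeddings
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(attention(Q, K, V).shape)              # (4, 8): one output per token
```

Because the score matrix covers every pair of positions at once, the whole computation maps onto hardware as a few large matrix multiplications rather than a word-by-word loop.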

Why is parallelization so valuable? Because it makes Transformers vastly more computationally efficient than RNNs, meaning they can be trained on much larger datasets. GPT-3 was trained on roughly 500 billion words and consists of 175 billion parameters, dwarfing any RNN in existence.

Transformers have been associated almost exclusively with NLP to date, thanks to the success of models like GPT-3. But just this month, a groundbreaking new paper was released that successfully applies Transformers to computer vision. Many AI researchers believe this work could presage a new era in computer vision. (As well-known ML researcher Oriol Vinyals put it simply, "My take is: farewell convolutions.")

While leading AI companies like Google and Facebook have begun to put Transformer-based models into production, most organizations remain in the early stages of productizing and commercializing this technology. OpenAI has announced plans to make GPT-3 commercially accessible via API, which could seed an entire ecosystem of startups building applications on top of it.

Expect Transformers to serve as the foundation for a whole new generation of AI capabilities in the years ahead, starting with natural language. As exciting as the past decade has been in the field of artificial intelligence, it may prove to be just a prelude to the decade ahead.

Read the rest here:
The Next Generation Of Artificial Intelligence - Forbes