Artificial Intelligence Will Soon Shape Themselves, and Us – Medium

Image: Yuichiro Chino/Getty Images

A future where we're all replaced by artificial intelligence may be further off than experts currently predict, but the readiness with which we accept the notion of our own obsolescence says a lot about how much we value ourselves. The long-term danger is not that we will lose our jobs to robots. We can contend with joblessness if it happens. The real threat is that we'll lose our humanity to the value system we embed in our robots, and that they in turn impose on us.

Computer scientists once dreamed of enhancing the human mind through technology, a field of research known as intelligence augmentation. But this pursuit has been largely surrendered to the goal of creating artificial intelligence: machines that can think for themselves. All we're really training them to do is manipulate our behavior and engineer our compliance. Figure has again become ground.

We shape our technologies at the moment of conception, but from that point forward they shape us. We humans designed the telephone, but from then on the telephone influenced how we communicated, conducted business, and conceived of the world. We also invented the automobile, but then rebuilt our cities around automotive travel and our geopolitics around fossil fuels. While this axiom may be true for technologies from the pencil to the birth control pill, artificial intelligences add another twist: After we launch them, they not only shape us but they also begin to shape themselves. We give them an initial goal, then give them all the data they need to figure out how to accomplish it. From that point forward, we humans no longer fully understand how an A.I. may be processing information or modifying its tactics. The A.I. isn't conscious enough to tell us. It's just trying everything, and hanging on to what works.

Researchers have found, for example, that the algorithms running social media platforms tend to show people pictures of their ex-lovers having fun. No, users don't want to see such images. But, through trial and error, the algorithms have discovered that showing us pictures of our exes having fun increases our engagement. We are drawn to click on those pictures and see what our exes are up to, and we're more likely to do it if we're jealous that they've found a new partner. The algorithms don't know why this works, and they don't care. They're only trying to maximize whichever metric we've instructed them to pursue. That's why the original commands we give them are so important. Whatever values we embed (efficiency, growth, security, compliance) will be the values A.I.s achieve, by whatever means happen to work. A.I.s will be using techniques that no one, not even they, understands. And they will be honing them to generate better results, and then using those results to iterate further.
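That trial-and-error loop is essentially reinforcement learning against an engagement metric. The sketch below is a minimal epsilon-greedy bandit with made-up content categories and payout rates (all hypothetical, not any platform's actual system); it shows how an optimizer converges on whatever raises the metric, with no model of why it works.

```python
import random

# Hypothetical content categories a feed might choose among.
ARMS = ["vacation_photos", "news", "ex_partner_photos", "memes"]

# Simulated probability that a user engages with each category.
# The optimizer never sees these numbers; it only observes clicks.
TRUE_ENGAGEMENT = {"vacation_photos": 0.10, "news": 0.05,
                   "ex_partner_photos": 0.30, "memes": 0.15}

counts = {arm: 0 for arm in ARMS}
rewards = {arm: 0.0 for arm in ARMS}

def choose(epsilon=0.1):
    """Mostly exploit the best-performing arm, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ARMS)
    return max(ARMS, key=lambda a: rewards[a] / counts[a] if counts[a] else 0.0)

for _ in range(10_000):
    arm = choose()
    clicked = random.random() < TRUE_ENGAGEMENT[arm]  # observed click, nothing more
    counts[arm] += 1
    rewards[arm] += clicked

# The arm with the highest observed click rate comes to dominate,
# with no notion of "why" users click it.
print(max(ARMS, key=lambda a: rewards[a] / max(counts[a], 1)))
```

Run long enough, the loop serves mostly whatever pays out the metric; jealousy, outrage, or curiosity are indistinguishable to it from any other source of clicks.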

We already employ A.I. systems to evaluate teacher performance, mortgage applications, and criminal records, and they make decisions just as racist and prejudicial as the humans whose decisions they were fed. But the criteria and processes they use are deemed too commercially sensitive to be revealed, so we cannot open the black box and analyze how to solve the bias. Those judged unfavorably by an algorithm have no means to appeal the decision or learn the reasoning behind their rejection. Many companies couldn't ascertain their own A.I.s' criteria anyway.

As A.I.s pursue their programmed goals, they will learn to leverage human values as exploits. As they have already discovered, the more they can trigger our social instincts and tug on our heartstrings, the more likely we are to engage with them as if they were human. Would you disobey an A.I. that feels like your parent, or disconnect one that seems like your child?

Eerily echoing the rationale behind corporate personhood, some computer scientists are already arguing that A.I.s should be granted the rights of living beings rather than being treated as mere instruments or slaves. Our science fiction movies depict races of robots taking revenge on their human overlords as if this problem is somehow more relevant than the unacknowledged legacy of slavery still driving racism in America, or the 21st-century slavery on which today's technological infrastructure depends.

We are moving into a world where we care less about how other people regard us than how A.I.s do.

Originally posted here:
Artificial Intelligence Will Soon Shape Themselves, and Us - Medium

Companies Will Spend $50 Billion On Artificial Intelligence This Year With Little To Show For It – Forbes

After spending $2.5 billion over five years, Uber is still far from delivering its self-driving vehicles.

As corporate spending on artificial intelligence systems is set to pass $50 billion this year, the vast majority of companies may not be seeing much return on that record investment.

In a survey of more than 3,000 company managers about their AI spending, only 10% reported significant financial benefits from their investment so far, a new report from MIT Sloan Management Review and Boston Consulting Group found.

"Gains from the tech haven't kept pace with increased adoption," says Shervin Khodabandeh, who led the study and is co-head of BCG's AI business in North America. "We are seeing more activity, which also means more investment in technology and data science," Khodabandeh says. "But that impact line hasn't really changed."

The results should prove concerning to corporations that continue to pour money into AI projects at a breakneck clip, looking to use the tools for everything from managing contracts to powering home assistants and self-driving cars. More than $50 billion is expected to be invested in AI systems globally this year, according to IDC, up from $37.5 billion in 2019. By 2024, investment is expected to reach $110 billion, IDC forecasts.

But despite the billions invested, failed AI projects have become increasingly common. IBM has deprioritized its Watson technology after drawing scorn for ventures like one $62 million oncology project that made inaccurate suggestions on cancer treatments. Amazon canned an AI recruitment tool after it showed misogynistic biases. And smaller businesses have found that building the technology is harder than it looks, as supposedly AI-powered virtual assistants and meeting schedulers end up relying on actual humans behind the scenes.

Companies are struggling to deliver on AI projects, Khodabandeh says, because they overspend on technology and data scientists without implementing changes in the business processes that could benefit from AI, a conclusion that echoes a Harvard Business Review report published in June.

Take Uber. Last month, engineers at the ride-hailing company concluded that its self-driving cars couldn't drive more than half a mile before encountering a problem. The program's artificial intelligence still struggles with simple routines and simple maneuvers, per a report in The Information. Part of the reason for the failure, according to an internal memo: competing internal ideas on how to implement the tech.

But with AI's promise of large-scale business savings and improvements, companies aren't likely to stop investing in the technology soon. The BCG and MIT researchers found that 57% of companies said they've deployed or piloted their own AI projects, up from 44% in 2018.

For those projects to pay off, Khodabandeh says, more AI adopters will need to rethink how the tech is integrated within their businesses. "There's clearly a lot of hype," he says. "And some of that hype comes out in the data."

Visit link:
Companies Will Spend $50 Billion On Artificial Intelligence This Year With Little To Show For It - Forbes

Imaging and Artificial Intelligence Tools Help Predict Response to Breast Cancer Therapy – On Cancer – Memorial Sloan Kettering

Summary

For breast cancers that have high levels of HER2, advanced MRI scans and artificial intelligence may help doctors make treatment decisions.

For people with breast cancer, biopsies have long been the gold standard for characterizing the molecular changes in a tumor, which can guide treatment decisions. Biopsies remove a small piece of tissue from the tumor so pathologists can study it under the microscope and make a diagnosis. Thanks to advances in imaging technologies and artificial intelligence (AI), however, experts are now able to assess the characteristics of the whole tumor rather than just the small sample removed during biopsy.

In a study published October 8, 2020, in EBioMedicine, a team led by experts from Memorial Sloan Kettering reports that for breast cancers that have high levels of a protein called HER2, AI-enhanced imaging tools may also be useful for predicting how patients will respond to the targeted chemotherapy given before surgery to shrink the tumor (called neoadjuvant therapy). Ultimately, these tools could help guide treatment and make it more personalized.

"We're not aiming to replace biopsies," says MSK radiologist Katja Pinker, the study's corresponding author. "But because breast tumors can be heterogeneous, meaning that not all parts of the tumor are the same, a biopsy can't always give us the full picture."

The study looked at data from 311 patients who had already been treated at MSK for early-stage breast cancer. All the patients had HER2-positive tumors, meaning that the tumors had high levels of the protein HER2, which can be targeted with drugs like trastuzumab (Herceptin). The researchers wanted to see if AI-enhanced magnetic resonance imaging (MRI) could help them learn more about each specific tumor's HER2 status.

One goal was to look at factors that could predict response to neoadjuvant therapy in people whose tumors were HER2-positive. "Breast cancer experts have generally believed that people with heterogeneous HER2 disease don't do as well, but recently a study suggested they actually did better," says senior author Maxine Jochelson, Director of Radiology at MSK's Breast and Imaging Center. "We wanted to find out if we could use imaging to take a closer look at heterogeneity and then use those findings to study patient outcomes."

The MSK team took advantage of AI and radiomics analysis, which uses computer algorithms to uncover disease characteristics. The computer helps reveal features on an MRI scan that can't be seen with the naked eye.

In this study, the researchers used machine learning to combine radiomics analysis of the entire tumor with clinical findings and biopsy results. They took a closer look at the HER2 status of the 311 patients, with the aim of predicting their response to neoadjuvant chemotherapy. By comparing the computer models to actual patient outcomes, they were able to verify that the models were effective.
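As a rough illustration of that kind of workflow (the features, model, and numbers below are stand-ins, not the study's actual pipeline), fusing whole-tumor radiomic features with clinical variables and then checking predictions against observed outcomes might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 311 patients, 100 radiomic features extracted
# from whole-tumor MRI, plus a few clinical/biopsy variables.
radiomic = rng.normal(size=(311, 100))
clinical = rng.normal(size=(311, 4))   # e.g., age, grade, ER status, HER2 level
X = np.hstack([radiomic, clinical])    # fused feature vector per patient
y = rng.integers(0, 2, size=311)       # observed response to neoadjuvant therapy

# A model is trained to predict response, then verified against actual
# patient outcomes, here via cross-validation.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```

On the random placeholder data above the AUC hovers near 0.5; the point is only the shape of the pipeline, in which whole-tumor imaging features and clinical findings feed one predictive model whose output is validated against real outcomes.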

"Our next step is to conduct a larger multicenter study that includes different patient populations treated at different hospitals and scanned with different machines," Dr. Pinker says. "I'm confident that our results will be the same, but these larger studies are very important to do before you can apply these findings to patient treatment."

"Once we've confirmed our findings, our goal is to perform risk-adaptive treatment," Dr. Jochelson says. "That means we could use it to monitor patients during treatment and consider changing their chemotherapy if their early response is not ideal."

Dr. Jochelson adds that conducting more frequent scans and using them to guide therapies has improved treatments for people with other cancers, including lymphoma. "We hope that this will get us to the next level of personalized treatment for breast cancer," she concludes.

Excerpt from:
Imaging and Artificial Intelligence Tools Help Predict Response to Breast Cancer Therapy - On Cancer - Memorial Sloan Kettering

Artificial intelligence reveals hundreds of millions of trees in the Sahara – Newswise

Newswise: If you think that the Sahara is covered only by golden dunes and scorched rocks, you aren't alone. Perhaps it's time to shelve that notion. In an area of West Africa 30 times larger than Denmark, an international team, led by University of Copenhagen and NASA researchers, has counted over 1.8 billion trees and shrubs. The 1.3 million km2 area covers the western-most portion of the Sahara Desert, the Sahel and what are known as sub-humid zones of West Africa.

"We were very surprised to see that quite a few trees actually grow in the Sahara Desert, because up until now, most people thought that virtually none existed. We counted hundreds of millions of trees in the desert alone. Doing so wouldn't have been possible without this technology. Indeed, I think it marks the beginning of a new scientific era," asserts Assistant Professor Martin Brandt of the University of Copenhagen's Department of Geosciences and Natural Resource Management, lead author of the study'sscientific article, now published inNature.

The work was achieved through a combination of detailed satellite imagery provided by NASA, and deep learning, an advanced artificial intelligence method. Normal satellite imagery is unable to identify individual trees; they remain literally invisible. Moreover, a limited interest in counting trees outside of forested areas led to the prevailing view that there were almost no trees in this particular region. This is the first time that trees across a large dryland region have been counted.

The role of trees in the global carbon budget

New knowledge about trees in dryland areas like this is important for several reasons, according to Martin Brandt. For example, they represent an unknown factor when it comes to the global carbon budget:

"Trees outside of forested areas are usually not included in climate models, and we know very little about their carbon stocks. They are basically a white spot on maps and an unknown component in the global carbon cycle," explains Martin Brandt.

Furthermore, the new study can contribute to better understanding the importance of trees for biodiversity and ecosystems and for the people living in these areas. In particular, enhanced knowledge about trees is also important for developing programmes that promote agroforestry, which plays a major environmental and socio-economic role in arid regions.

"Thus, we are also interested in using satellites to determine tree species, as tree types are significant in relation to their value to local populations who use wood resources as part of their livelihoods. Trees and their fruit are consumed by both livestock and humans, and when preserved in the fields, trees have a positive effect on crop yields because they improve the balance of water and nutrients," explains Professor Rasmus Fensholt of the Department of Geosciences and Natural Resource Management.

Technology with a high potential

The research was conducted in collaboration with the University of Copenhagen's Department of Computer Science, where researchers developed the deep learning algorithm that made the counting of trees over such a large area possible.

The researchers show the deep learning model what a tree looks like by feeding it thousands of images of various trees. Based upon the recognition of tree shapes, the model can then automatically identify and map trees over large areas and thousands of images. The model needs only hours to do what would take thousands of humans several years.
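In broad strokes, that training process resembles standard supervised image segmentation. The toy sketch below is a deliberately tiny stand-in (the published work used far larger models and sub-meter satellite imagery); it shows only the shape of the approach: labeled tiles in, a per-pixel tree mask out.

```python
import torch
import torch.nn as nn

# Toy fully convolutional network: satellite tile in -> per-pixel tree mask out.
# A stand-in for the much larger segmentation models used in practice.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),  # one logit per pixel: tree crown vs. background
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Hypothetical training pairs: image tiles and hand-labeled crown masks,
# standing in for the thousands of annotated tree examples described above.
images = torch.rand(8, 3, 64, 64)
masks = (torch.rand(8, 1, 64, 64) > 0.9).float()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    opt.step()

# Once trained on labeled tiles, the same forward pass maps unseen tiles,
# so surveying a 1.3 million km2 region becomes a matter of compute hours.
with torch.no_grad():
    predicted_mask = torch.sigmoid(model(torch.rand(1, 3, 64, 64))) > 0.5
```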

"This technology has enormous potential when it comes to documenting changes on a global scale and ultimately, in contributing towards global climate goals. We are motivated to develop this type of beneficial artificial intelligence," says professor and co-author Christian Igel of the Department of Computer Science.

The next step is to expand the count to a much larger area in Africa. And in the longer term, the aim is to create a global database of all trees growing outside forest areas.


Excerpt from:
Artificial intelligence reveals hundreds of millions of trees in the Sahara - Newswise

Artificial intelligence and the antitrust case against Google – VentureBeat

Following the launch of investigations last year, the U.S. Department of Justice (DOJ), together with attorneys general from 11 U.S. states, filed a lawsuit against Google on Tuesday alleging that the company maintains monopolies in online search and advertising and violates laws prohibiting anticompetitive business practices.

It's the first antitrust lawsuit federal prosecutors have filed against a tech company since the Department of Justice brought charges against Microsoft in the 1990s.

"Back then, Google claimed Microsoft's practices were anticompetitive, and yet, now, Google deploys the same playbook to sustain its own monopolies," the complaint reads. "For the sake of American consumers, advertisers, and all companies now reliant on the internet economy, the time has come to stop Google's anticompetitive conduct and restore competition."

No attorneys general from Democratic states joined the suit. State attorneys general, Democrats and Republicans alike, plan to continue their own investigations, signaling that more charges or backing from states might be on the way. Both the antitrust investigation completed by a congressional subcommittee earlier this month and the new DOJ lawsuit advocate breaking up tech companies as a potential solution.

The 64-page complaint characterizes Google as a monopoly gatekeeper for the internet and spells out the reasoning behind the lawsuit in detail, documenting the company's beginnings at Stanford University in the 1990s alongside deals made in the past decade with companies like Apple and Samsung to maintain Google's dominance. Also key to Google's power and plans for the future is access to personal data and artificial intelligence. In this story, we take a look at the myriad ways in which artificial intelligence plays a role in the antitrust case against Google.

The best place to begin when examining the role AI plays in Googles antitrust case is online search, which is powered by algorithms and automated web crawlers that scour webpages for information. Personalized search results made possible by the collection of personal data started in 2009, and today Google can search for images, videos, and even songs that people hum. Google dominates the $40 billion online search industry, and that dominance acts like a self-reinforcing cycle: More data leads to more training data for algorithms, defense against competition, and more effective advertising.

"General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms," the complaint reads. "The additional data from scale allows improved automated learning for algorithms to deliver more relevant results, particularly on fresh queries (queries seeking recent information), location-based queries (queries asking about something in the searcher's vicinity), and long-tail queries (queries used infrequently)."

Search is now primarily conducted on mobile devices like smartphones or tablets. To build monopolies in mobile search and create scale insurmountable to competitors, the complaint states, Google turned to exclusionary agreements with smartphone sellers like Apple and Samsung as well as revenue sharing with wireless carriers. The Apple-Google symbiosis is in fact so important that losing it is referred to as "code red" at Google, according to the DOJ filing. An unnamed senior Apple employee corresponding with their counterpart at Google said it's Apple's vision that the two companies operate "as if one company." Today, Google accounts for four out of five web searches in the United States and 95% of mobile searches. Last year, Google estimated that nearly half of all search traffic originated on Apple devices, while 15 to 20% of Apple's income came from Google.

Exclusive agreements that put Google apps on mobile devices effectively captured hundreds of millions of users. An antitrust report referenced these data advantages, stating that Google's anticompetitive conduct effectively eliminates rivals' ability to build the scale necessary to compete.

In addition to the DOJ complaint, the antitrust report Congress released earlier this month frequently cites the network effect achieved by Big Tech companies as a significant barrier to entry for smaller businesses or startups. "The incumbents have access to large data sets that give them a big advantage, especially when combined with machine learning and AI," the report reads. "Companies with superior access to data can use that data to better target users or improve product quality, drawing more users and, in turn, generating more data: an advantageous feedback loop."

Network effects often come up in the congressional report in reference to mobile operating systems, public cloud providers, and AI assistants like Alexa and Google Assistant, which improve their machine learning models through the collection of data like voice recordings.

One potential solution the congressional investigation suggested is better data portability to help small businesses compete with tech giants.

One part of maintaining Google's search monopoly, according to the congressional report, is control of emerging search access points. While Google searches began on desktop computers, mobile is king today, and fast emerging are devices like smartwatches, smart speakers, and IoT devices with AI assistants like Alexa, Google Assistant, and Siri. Virtual assistants are using AI to turn speech into text and predict a user's intent, becoming a new battleground. An internal Google document declared that voice will become the future of search.

The growth of searches via Amazon Echo devices is why a Morgan Stanley analyst previously suggested Google give everyone in the country a free speaker. In the end, he concluded, it would be cheaper for Google to give away hundreds of millions of speakers than to lose its edge to Amazon.

The scale afforded by Android and native Google apps also appears to be a key part of Google Assistants ability to understand or translate dozens of languages and collect voice data across the globe.

Search is primarily done on mobile devices today. That's what drives the symbiotic relationship between Apple and Google, where Apple receives 20% of its total revenue from Google in exchange for making Google the de facto search engine on iOS phones, which still make up about 60% of the U.S. smartphone market.

The DOJ suit states that Google is concentrating on Google Nest IoT devices and smart speakers because internet searches will increasingly take place using voice orders. The company wants to control the next popular environment for search queries, the DOJ says, whether it be wearable devices like smartwatches or activity monitors from Fitbit, which Google announced plans to acquire roughly one year ago.

"Google recognizes that its hardware products also have HUGE defensive value in virtual assistant space AND combatting query erosion in core Search business. Looking ahead to the future of search, Google sees that Alexa and others may increasingly be a substitute for Search and browsers with additional sophistication and push into screen devices," the DOJ report reads. Google has also "harmed competition by raising rivals' costs and foreclosing them from effective distribution channels, such as distribution through voice assistant providers," preventing them from meaningfully challenging Google's monopoly in general search services.

In other words, only Google Assistant can get microphone access on a smartphone to respond to a wake word like "Hey, Google," a tactic the complaint says handicaps rivals.

AI like Google Assistant also features prominently in the antitrust report released by the Democrat-led antitrust subcommittee in Congress, which refers to AI assistants as efforts to lock consumers into information ecosystems. The easiest way to spot this lock-in is to consider that Google prioritizes YouTube, Apple wants you to use Apple Music, and Amazon wants users to subscribe to Amazon Prime Music.

The congressional report also documents the recent history of Big Tech companies acquiring startups. It alleges that in order to avoid competition from up-and-coming rivals, companies like Google have bought up startups in emerging fields like artificial intelligence and augmented reality.

If you expect a quick ruling by the DC Circuit Court in the antitrust lawsuit against Google, you'll be disappointed: that doesn't seem at all likely. Taking the 1970s case against IBM and the Microsoft suit in the 1990s as a guide, antitrust cases tend to take years. In fact, it's not outside the realm of possibility that this case could still be happening the next time voters pick a president in 2024.

What does seem clear from the language used in both US v. Google and the congressional antitrust report is that both Democrats and Republicans are willing to consider separating company divisions in order to maintain competitive markets and a healthy digital economy. What's also clear is that both the Justice Department and antitrust lawmakers in Congress see action as necessary, based in part on how Google treats personal data and artificial intelligence.

See more here:
Artificial intelligence and the antitrust case against Google - VentureBeat

Global Artificial Intelligence of Things Markets 2020-2025: Focus on Technology & Solutions – AIoT Solutions Improve Operational Effectiveness and…

Dublin, Oct. 22, 2020 (GLOBE NEWSWIRE) -- The "Artificial Intelligence of Things: AIoT Market by Technology and Solutions 2020 - 2025" report has been added to ResearchAndMarkets.com's offering.

This AIoT market report provides an analysis of technologies, leading companies, and solutions. It also provides quantitative analysis, including market sizing and forecasts for AIoT infrastructure, services, and specific solutions for the period 2020 through 2025, along with an assessment of the impact of 5G upon AIoT (and vice versa), blockchain, and specific solutions such as Data as a Service, Decisions as a Service, and the market for AIoT in smart cities.

Many industry verticals will be transformed through AI integration with enterprise, industrial, and consumer product and service ecosystems. AIoT is destined to become an integral component of business operations, including supply chains, sales and marketing processes, product and service delivery, and support models.

We see AIoT evolving to become more commonplace as a standard feature from big analytics companies in terms of digital transformation for the connected enterprise. This will be realized in infrastructure, software, and SaaS managed service offerings. More specifically, we see 2020 as a key year for IoT data-as-a-service offerings to become AI-enabled decisions-as-a-service solutions, customized on a per-industry and per-company basis. Certain data-driven verticals, such as the utility and energy services industries, will lead the way.

As IoT networks proliferate throughout every major industry vertical, there will be an increasingly large amount of unstructured machine data. The growing amount of human-oriented and machine-generated data will drive substantial opportunities for AI support of unstructured data analytics solutions. Data generated from IoT supported systems will become extremely valuable, both for internal corporate needs as well as for many customer-facing functions such as product life-cycle management.

The use of AI for decision making in IoT and data analytics will be crucial for efficient and effective decision making, especially in the area of streaming data and real-time analytics associated with edge computing networks. Real-time data will be a key value proposition for all use cases, segments, and solutions. The ability to capture streaming data, determine valuable attributes, and make decisions in real-time will add an entirely new dimension to service logic.

In many cases, the data itself, and the actionable information derived from it, will be the service. AIoT infrastructure and services will, therefore, be leveraged to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics, creating a foundation for IoT Data as a Service (IoTDaaS) and AI-based Decisions as a Service.

The fastest-growing 5G AIoT applications involve private networks. Accordingly, the 5GNR market for private wireless in industrial automation will reach $4B by 2025. Some of the largest market opportunities will be AIoT market IoTDaaS solutions. We see machine learning in edge computing as the key to realizing the full potential of IoT analytics.

Key Topics Covered:

1.0 Executive Summary

2.0 Introduction
2.1 Defining AIoT
2.2 AI in IoT vs. AIoT
2.3 Artificial General Intelligence
2.4 IoT Network and Functional Structure
2.5 Ambient Intelligence and Smart Lifestyles
2.6 Economic and Social Impact
2.7 Enterprise Adoption and Investment
2.8 Market Drivers and Opportunities
2.9 Market Restraints and Challenges
2.10 AIoT Value Chain
2.10.1 Device Manufacturers
2.10.2 Equipment Manufacturers
2.10.3 Platform Providers
2.10.4 Software and Service Providers
2.10.5 User Communities

3.0 AIoT Technology and Market
3.1 AIoT Market
3.1.1 Equipment and Component
3.1.2 Cloud Equipment and Deployment
3.1.3 3D Sensing Technology
3.1.4 Software and Data Analytics
3.1.5 AIoT Platforms
3.1.6 Deployment and Services
3.2 AIoT Sub-Markets
3.2.1 Supporting Device and Connected Objects
3.2.2 IoT Data as a Service
3.2.3 AI Decisions as a Service
3.2.4 APIs and Interoperability
3.2.5 Smart Objects
3.2.6 Smart City Considerations
3.2.7 Industrial Transformation
3.2.8 Cognitive Computing and Computer Vision
3.2.9 Consumer Appliances
3.2.10 Domain Specific Network Considerations
3.2.11 3D Sensing Applications
3.2.12 Predictive 3D Design
3.3 AIoT Supporting Technologies
3.3.1 Cognitive Computing
3.3.2 Computer Vision
3.3.3 Machine Learning Capabilities and APIs
3.3.4 Neural Networks
3.3.5 Context-Aware Processing
3.4 AIoT Enabling Technologies and Solutions
3.4.1 Edge Computing
3.4.2 Blockchain Networks
3.4.3 Cloud Technologies
3.4.4 5G Technologies
3.4.5 Digital Twin Technology and Solutions
3.4.6 Smart Machines
3.4.7 Cloud Robotics
3.4.8 Predictive Analytics and Real-Time Processing
3.4.9 Post Event Processing
3.4.10 Haptic Technology

4.0 AIoT Applications Analysis
4.1 Device Accessibility and Security
4.2 Gesture Control and Facial Recognition
4.3 Home Automation
4.4 Wearable Device
4.5 Fleet Management
4.6 Intelligent Robots
4.7 Augmented Reality Market
4.8 Drone Traffic Monitoring
4.9 Real-time Public Safety
4.10 Yield Monitoring and Soil Monitoring Market
4.11 HCM Operation

5.0 Analysis of Important AIoT Companies
5.1 Sharp
5.2 SAS
5.3 DT42
5.4 China Tech Giants: Baidu, Alibaba, and Tencent
5.4.1 Baidu
5.4.2 Alibaba
5.4.3 Tencent
5.5 Xiaomi Technology
5.6 NVidia
5.7 Intel Corporation
5.8 Qualcomm
5.9 Innodisk
5.10 Gopher Protocol
5.11 Micron Technology
5.12 ShiftPixy
5.13 Uptake
5.14 C3 IoT
5.15 Alluvium
5.16 Arundo Analytics
5.17 Canvass Analytics
5.18 Falkonry
5.19 Interactor
5.20 Google
5.21 Cisco
5.22 IBM Corp.
5.23 Microsoft Corp.
5.24 Apple Inc.
5.25 Salesforce Inc.
5.26 Infineon Technologies AG
5.27 Amazon Inc.
5.28 AB Electrolux
5.29 ABB Ltd.
5.30 AIBrian Inc.
5.31 Analog Devices
5.32 ARM Limited
5.33 Atmel Corporation
5.34 Ayla Networks Inc.
5.35 Brighterion Inc.
5.36 Buddy
5.37 CloudMinds
5.38 Cumulocity GmbH
5.39 Cypress Semiconductor Corp
5.40 Digital Reasoning Systems Inc.
5.41 Echelon Corporation
5.42 Enea AB
5.43 Express Logic Inc.
5.44 Facebook Inc.
5.45 Fujitsu Ltd.
5.46 Gemalto N.V.
5.47 General Electric
5.48 General Vision Inc.
5.49 Graphcore
5.50 H2O.ai
5.51 Haier Group Corporation
5.52 Helium Systems
5.53 Hewlett Packard Enterprise
5.54 Huawei Technologies
5.55 Siemens AG
5.56 SK Telecom
5.57 SoftBank Robotics
5.58 SpaceX
5.59 SparkCognition
5.60 STMicroelectronics
5.61 Symantec Corporation
5.62 Tellmeplus
5.63 Tend.ai
5.64 Tesla
5.65 Texas Instruments
5.66 Thethings.io
5.67 Veros Systems
5.68 Whirlpool Corporation
5.69 Wind River Systems
5.70 Juniper Networks
5.71 Nokia Corporation
5.72 Oracle Corporation
5.73 PTC Corporation
5.74 Losant IoT
5.75 Robert Bosch GmbH
5.76 Pepper
5.77 Terminus
5.78 Tuya Smart

6.0 AIoT Market Analysis and Forecasts 2020 - 2025
6.1 Global AIoT Market Outlook and Forecasts
6.1.1 Aggregate AIoT Market 2020 - 2025
6.1.2 AIoT Market by Infrastructure and Services 2020 - 2025
6.1.3 AIoT Market by AI Technology 2020 - 2025
6.1.4 AIoT Market by Application 2020 - 2025
6.1.5 AIoT in Consumer, Enterprise, Industrial, and Government 2020 - 2025
6.1.6 AIoT Market in Cities, Suburbs, and Rural Areas 2020 - 2025
6.1.7 AIoT in Smart Cities 2020 - 2025
6.1.8 IoT Data as a Service Market 2020 - 2025
6.1.9 AI Decisions as a Service Market 2020 - 2025
6.1.10 Blockchain Support of AIoT 2020 - 2025
6.1.11 AIoT in 5G Networks 2020 - 2025
6.2 Regional AIoT Markets 2020 - 2025

7.0 Conclusions and Recommendations
7.1 Advertisers and Media Companies
7.2 Artificial Intelligence Providers
7.3 Automotive Companies
7.4 Broadband Infrastructure Providers
7.5 Communication Service Providers
7.6 Computing Companies
7.7 Data Analytics Providers
7.8 Immersive Technology (AR, VR, and MR) Providers
7.9 Networking Equipment Providers
7.10 Networking Security Providers
7.11 Semiconductor Companies
7.12 IoT Suppliers and Service Providers
7.13 Software Providers
7.14 Smart City System Integrators
7.15 Automation System Providers
7.16 Social Media Companies
7.17 Workplace Solution Providers
7.18 Enterprise and Government

For more information about this report visit https://www.researchandmarkets.com/r/aw2mh9

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Read more:
Global Artificial Intelligence of Things Markets 2020-2025: Focus on Technology & Solutions - AIoT Solutions Improve Operational Effectiveness and...

The Military’s Mission: Artificial Intelligence in the Cockpit – The Cipher Brief

The Defense Advanced Research Projects Agency (DARPA) recently hosted the AlphaDogfight Trials, putting artificial intelligence technology from eight different organizations up against human pilots. In the end, the winning AI, made by Heron Systems, faced off against a human F-16 pilot in a simulated dogfight, with the AI scoring a 5-0 victory over the human pilot.

The simulation was part of an effort to better understand how to integrate AI systems in piloted aircraft in part, to increase the lethality of the Air Force. The event also re-launched questions about the future of AI in aviation technology and how human pilots will remain relevant in an age of ongoing advancements in drone and artificial intelligence technology.

The Experts:

The Cipher Brief spoke with our experts, General Philip M. Breedlove (Ret.) and Lt. Col. Tucker "Cinco" Hamilton, to get their take on the trials and the path ahead for AI in aviation.

General Philip M. Breedlove, Former Supreme Allied Commander, NATO & Command Pilot

Gen. Breedlove retired as NATO Supreme Allied Commander and is a command pilot with 3,500 flying hours, primarily in the F-16. He flew combat missions in Operation Joint Forge/Joint Guardian. Prior to his position as SACEUR, he served as Commander, U.S. Air Forces in Europe; Commander, U.S. Air Forces Africa; Commander, Air Component Command, Ramstein; and Director, Joint Air Power Competence Centre, Kalkar, Germany.

Lt. Col. Tucker "Cinco" Hamilton, Director, Dept. of the Air Force AI Accelerator at MIT

"Cinco" Hamilton is Director of the Department of the Air Force-MIT AI Accelerator and previously served as Director of the F-35 Integrated Test Force at Edwards AFB, responsible for the developmental flight test of the F-35. He has logged over 2,100 hours as a test pilot in more than 30 types of aircraft.

How significant was this test between AI and human pilots?

Tucker "Cinco" Hamilton: It was significant along the same lines as when DeepMind Technologies' AlphaGo won the game Go against a grandmaster. It was an important moment that revealed technological capability, but it must be understood in the context of the demonstration. Equally, it did not prove that fighter pilots are no longer needed on the battlefield. What I hope people took away from the demonstration was that AI/ML technology is immensely capable and vitally important to understand and cultivate; that with an ethical and focused developmental approach we can bolster the human-machine interaction.

General Breedlove: Technology is moving fast, but in some cases, policy might not move so fast. For instance, technology exists now to put sensors on these aircraft that are better than the human eye. They can see better. They can see better in bad conditions. And especially when you start to layer a blend of visual, radar, and infrared sensing together, it is my belief that we can actually achieve a more reliable discerning capability than the human eye. I do not believe that our limitations are going to be on the ability of the machine to do what it needs to do. The real limitations are going to be on what we allow it to do in a policy format.

How will fighter pilots of the future think about data and technology in the cockpit?

General Breedlove: Some folks believe that we're never going to move forward with this technology because fighter pilots don't want to give up control. I think for most young fighter pilots and for most of the really savvy older fighter pilots, that's not true. We want to be effective, efficient, lethal killing machines when our nation needs us to be. If we can get into an engagement where we can use these capabilities to make us more effective and more efficient killing machines, then I think you're going to see people, young people, and even people like me, absolutely embracing it.

Tucker "Cinco" Hamilton: I think the future fighter aircraft will be manned, yet linked into AI/ML-powered autonomous systems that bolster the fighter pilot's battlefield safety, awareness, and capability. The future I see is one in which an operator is still fully engaged with battlefield decision making, yet being supported by technology through human-machine teaming.

As we develop and integrate AI/ML capability, we must do so ethically. This is an imperative. Our warfighter and our society deserve transparent, ethically curated, and ethically executed algorithms. In addition, data must be transparently and ethically collected and used. Before AI/ML capability fully makes its way into combat applications, we need to have established a strong and thoughtful ethical foundation.

Looking Ahead:

General Breedlove: Humans are training machines to do things, and machines are executing what they've been trained to do, as opposed to actually making independent, non-human-aided decisions. I do believe we're in a timeframe now where there may be a person in the loop in certain parts of the engagement, but we're probably not very far off from a point in time when the human says, "Yep, that's the target. Hit it." Or the human takes the aircraft to a point where only the bad element is in front of it, and the decision concerning collateral damage has already been made, and then the human turns it completely over. But to the high-end extreme of a "launch an airplane and then see what happens next" kind of scenario, I think we're still a long way away from that. I think there are going to be humans in the engagement loop for a long time.

Tucker "Cinco" Hamilton: Autonomous systems are here to stay, whether helping manage our engine operation or saving us from ground collision with the Automatic Ground Collision Avoidance System. As aircraft software continues to become more agile, these autonomous systems will play a part in currently fielded physical systems. This type of advancement is important and needed. However, AI/ML-powered autonomous systems have limitations, and that's exactly where the operator comes in. We need to focus on creating capability that bolsters our fighter pilots, allowing them to best prosecute the attack, not remove them from the cockpit. Whether that is through keeping them safe, pinpointing and identifying the correct target, helping alert them of incoming threats, or garnering knowledge of the battlefield, it's all about human-machine teaming. That teaming is exactly what the recent DARPA demonstration was about, proving that an AI-powered system can help in situations even as dynamic as dogfighting.

Cipher Brief intern Ben McNally contributed research for this report.

See original here:
The Military's Mission: Artificial Intelligence in the Cockpit - The Cipher Brief

Artificial intelligence anticipates how instruments are used during surgery – Innovation Origins

"In the operating theater of the future, computer-based assistance systems will make work processes simpler and safer and thereby play a much greater role than today. However, such support features are only possible if computers are able to anticipate important events in the operating room and provide the right information at the right time," explains Prof. Stefanie Speidel. She is head of the Department of Translational Surgical Oncology at the National Center for Tumor Diseases Dresden (NCT/UCC) in Germany.

Together with the Centre for Tactile Internet with Human-in-the-loop (CeTI) at TU Dresden, she has developed a method that uses artificial intelligence (AI) to enable computers to anticipate the usage of surgical instruments before they are used.

"This kind of system does not just provide an important basis for the use of autonomous robotic systems that could take over simple minor tasks in the operating theater, such as blood aspiration. It could also issue early warnings of complications if these are inherent to the use of a particular instrument. Furthermore, it would increase efficiency where preparing instruments is concerned. However, our vision is not to replace the surgeon with a robot or other assistants. The intelligent systems should merely act as a helping hand and lighten the load for both the doctor and the entire surgical team," says Prof. Jürgen Weitz, Managing Director at NCT/UCC and Director of the Clinic for Visceral, Thoracic and Vascular Surgery at the University Hospital Carl Gustav Carus in Dresden.

In order to teach computers how to anticipate the use of surgical instruments on a situation-specific basis a few minutes before they are actually put to use, scientists at NCT/UCC and CeTI used an artificial neural network that mimics the human ability to learn by example. Through continuous analysis of video images of a surgical procedure, the network learned to predict the usage of certain instruments a few minutes before they were actually used. The scientists trained the neural network with 60 videos of gall bladder removal surgery, recorded by default with a laparoscope in the abdomen, in which five different instruments were highlighted.

Afterwards, the neural network had to demonstrate its knowledge on 20 more videos without any markers. The scientists were able to verify that the system had made important advances in learning, and in many cases it was able to correctly anticipate how the instruments would be used.

Compared to other methods, this neural network proved to be much more suitable for practical applications, which also means it is capable of solving complex tasks. Other methods treat the timing of a specific situation as a matter of routine, and the network just needs to decide between various possible situations. "In contrast, we have been able to show that an artificial neural network with specific adaptations and a suitably formulated mathematical function is capable of making sensible assessments about the type of instrument that should be selected and the time frame of its application with a minimum of coding effort," says Dominik Rivoir of the Department of Translational Surgical Oncology at NCT/UCC, first author of the study presented at the International Conference on Medical Image Computing & Computer-Assisted Intervention (MICCAI).
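For readers wondering what such an anticipation model might look like in outline, here is a deliberately simplified sketch. The architecture, dimensions, and prediction horizon are assumptions for illustration, not the published model: per-frame features feed a recurrent network that outputs, for each frame, the probability that each of the five instruments will be used within a chosen future window.

```python
import torch
import torch.nn as nn

NUM_INSTRUMENTS = 5  # the study highlighted five instruments in laparoscopic video

class InstrumentAnticipator(nn.Module):
    """Toy model: per-frame CNN features -> GRU -> probability that each
    instrument will be used within the next few minutes (a horizon we choose)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in for a real frame encoder
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim),
        )
        self.temporal = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, NUM_INSTRUMENTS)

    def forward(self, frames):                 # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.temporal(feats)
        return self.head(hidden)               # logits per frame per instrument

model = InstrumentAnticipator()
frames = torch.rand(2, 16, 3, 64, 64)          # two hypothetical clips of 16 frames
# Targets: 1 if instrument i is used within the horizon after frame t.
targets = (torch.rand(2, 16, NUM_INSTRUMENTS) > 0.8).float()
loss = nn.BCEWithLogitsLoss()(model(frames), targets)
loss.backward()
```

The recurrent state is what lets a model of this shape pick up cues like "a vessel clamp just appeared, so scissors are likely soon," the kind of association the study reports its network learned.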

Next, the scientists want to refine the method and add more data sets to the neural network. One focus is on surgical videos that show more severe bleeding. Using this image data, the network should be able to learn even better when hemorrhages need to be aspirated with a special instrument. In the presented study, the researchers were already able to show that the network interpreted, with a high degree of accuracy, the appearance of a clamp for clamping a blood vessel as a characteristic cue. This way, it was able to anticipate the use of scissors soon afterward. In future, this could serve as a basis for timing the use of robot-guided aspiration instruments or for anticipating complications.

The National Center for Tumor Diseases Dresden (NCT/UCC) is a joint venture of the German Cancer Research Center (DKFZ), the University Hospital Carl Gustav Carus Dresden, the Medical Faculty Carl Gustav Carus of the TU Dresden, and the Helmholtz Center Dresden-Rossendorf (HZDR).

Title image: Autonomous robotic systems and other intelligent assistance systems will provide enhanced support for the surgical team in the future. NCT/UCC/André Wirsig

Read the original:
Artificial intelligence anticipates how instruments are used during surgery - Innovation Origins

Visual Artificial Intelligence on the Edge of Revolutionizing Retail – Loss Prevention Magazine

"We are laser-focused on continuous improvements to customers' experience across our stores. By leveraging Everseen's Visual AI and machine-learning technology, we're not only able to remove friction for the customer, but we can also remove controllable costs from the business and redirect those resources to improving the customer experience even more." – Mike Lamb, LPC, Kroger's VP of Asset Protection

This post was inspired by a recent Kroger article announcing the deployment of visual artificial intelligence (AI) in 2,500 stores and new IHL Group edge computing research. Multiple technological trends have been converging for some time, and their combination is leading to transformative solutions that improve store operations.

By 2021, one billion video cameras will be deployed around the world. Endless possibilities in creating immersive consumer experiences emerge when artificial intelligence and machine learning are coupled with these visual data gathering devices.

COVID-19 has become a disruptive accelerator of digital transformation trends that were already underway. It takes 66 days or approximately two months to form a new permanent habit. New shopping journey habits have emerged during the pandemic that will require intensified analysis of millions of data inputs to both protect transactions and remove negative experience friction.

What are some of the leading visual AI or computer vision applications today? In retail, what's the return on investment (ROI)? What makes these technologies critical to the future of retail?

Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos and deep learning models, machines can accurately identify and classify objects, and then react to what they see.
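A minimal sketch of that identify-classify-react loop, using an off-the-shelf pretrained classifier as a stand-in for a retail-tuned model (the labels and reactions here are hypothetical, not any vendor's actual system):

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained ImageNet classifier as a stand-in for a retail-tuned vision model.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def classify_and_react(image: Image.Image) -> str:
    """Identify what the camera sees, then trigger a (hypothetical) action."""
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    class_id = int(logits.argmax())
    # In a store deployment the labels and reactions would be domain-specific,
    # e.g., "empty shelf" -> restock alert, "unscanned item" -> lane alert.
    return f"detected class {class_id}; routing to the matching store workflow"

# Usage (hypothetical frame): classify_and_react(Image.open("shelf_camera_frame.jpg"))
```

The same pattern scales down to edge devices: the model runs near the camera, and only the decision, not the raw video, travels upstream.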

This visual AI technology delivers valuable insights that dramatically improve decision-making capabilities. My latest edition of the continuously updated Disruptive Future of Retail presentation includes a chart summarizing selected innovative applications.

In a pre-pandemic research report published by Fortune Business Insights, the retail AI market size was valued at $2,306.8 million in 2018 and will grow to $23,426.3 million by 2026. Computer vision and machine learning are key innovation drivers for this segment.

Fully expect the market size and the value of visual AI applications to increase because of COVID-19.

Data is exploding in the retail industry. Walmart, as one example, generates more than 1 million customer transactions every hour, feeding databases estimated at more than 2.5 petabytes, equivalent to 167 times the books in the US Library of Congress.

In all industries, Internet-of-Things (IoT) connected devices are adding substantial amounts of data to the mix. In 2020, machine-generated data will account for over 40 percent of internet data.

The major cloud providers have already concluded that workloads should be distributed to the appropriate edge, where they run best. As the IHL Group points out in its latest research, edge computing is critical to retail's success in this decade. Example applications that the researchers point to in their analysis are the very important new shopping journeys that have been accelerated by the pandemic; their analysis highlights the margin challenges when these solutions are not optimized.

Edge system architecture delivers substantial margin-improvement benefits to these new retail services. Visual artificial intelligence in the cloud and at the edge, plus the deployment of in-store sensors, will dramatically improve analysis and decision-making capabilities throughout the physical store.

The bottom line is that the retailers that survive and thrive in the next decade will be those that are able to apply artificial intelligence and machine learning to operational data at the store level. Yes, e-commerce is a key part of retail's growth, but the key advantage that retailers have over pure-play e-commerce competitors is their stores and proximity to the customer.

The last two years have been very rewarding in working with leading retailers and technology providers in driving the future of retail.

Having spent a substantial portion of my career in point-of-sale, it is still one of the areas I follow today, as it is often the last moment of truth in engaging the consumer, for both positive and negative results.

Applying visual artificial intelligence at the point-of-sale is already delivering substantial positive results. In deployments protecting over $400 billion in retailers' revenue across 75,000-plus checkout lanes, the average sales uplift has been 0.5 percent and the margin increase a substantial 20 percent.
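To put those percentages in context, here is a back-of-the-envelope calculation with illustrative numbers only (the revenue and margin-rate inputs are assumptions, not Everseen or Kroger figures):

```python
# Illustrative only: apply the reported averages to a hypothetical retailer.
annual_pos_revenue = 1_000_000_000   # assume $1B flowing through covered lanes
sales_uplift = 0.005                 # 0.5% average sales uplift (reported)
margin_rate = 0.02                   # assume a thin 2% baseline net margin
margin_improvement = 0.20            # 20% relative margin increase (reported)

extra_sales = annual_pos_revenue * sales_uplift        # $5,000,000
baseline_margin = annual_pos_revenue * margin_rate     # $20,000,000
extra_margin = baseline_margin * margin_improvement    # $4,000,000

print(f"Extra sales: ${extra_sales:,.0f}; extra margin: ${extra_margin:,.0f}")
```

On those assumptions, a half-percent sales lift and a 20 percent relative margin gain are worth millions of dollars a year to a single large retailer, which is why the thin-margin grocery segment has moved first.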

POS devices are only the beginning of what's possible in terms of measurable operational improvements. The future of retail includes digitally supported leadership branding coupled with hyper-personalized immersive consumer experiences across the entire store.

Visual AI and edge computing are critical technologies that will deliver frictionless commerce and optimize consumer journeys whose importance has dramatically increased because of COVID-19. We are on the edge of revolutionizing the future of retail.

For additional retail, technology, and leadership information, visit http://www.tonydonofrio.com.

Read the original post:
Visual Artificial Intelligence on the Edge of Revolutionizing Retail - Loss Prevention Magazine

Helios Visions Partners with Thornton Tomasetti’s T2D2 to Provide Artificial Intelligence-Powered Drone Solution for Facade Inspection – PRNewswire

CHICAGO, Oct. 20, 2020 /PRNewswire/ -- Drone services company Helios Visions (https://www.heliosvisions.com) has joined forces with T2D2 (http://www.t2d2.ai), a software as a service (SaaS) platform that uses artificial intelligence (AI) to identify and assess damage and deterioration to building envelopes and structures, to provide AI-powered drone facade inspection services.

Together, Helios Visions and T2D2 will provide a robust end-to-end solution for facade condition assessment. Using the latest in drone and AI technology, the program helps support critical inspections and significantly enhances visual inspections. It also makes it easier, faster, safer and less costly to inspect structures.

"The use of drones for high-rise building faade inspections is faster and can be as much as 50% cheaper than traditional methods, which require expensive scaffolding, drops and lifts," Helios Visions Co-founder Ted Parisot said. "WithT2D2, wecan streamline thefacade inspection process, and greatly improve planning and decision-making for building owners and property managers. More frequent assessment of building conditions can increase safety and decrease repair costs by spotting problems before they require expensive and invasive solutions."

T2D2, developed within Thornton Tomasetti's CORE studio incubator and commercialized through the firm's TTWiiN accelerator, uses data from Thornton Tomasetti's more than 50 years of building inspection and forensic investigation work as well as detailed drone imagery provided by Helios Visions.

"The detailed drone images provided by Helios Visions allow T2D2's artificial intelligence programs to quickly and accurately identify any issues that may exist in a building's facade. We are excited for the ongoing partnership between T2D2 and Helios Visions, which will enable the AI program to continuously learn as more drone photometry is fed into the system," said Thornton Tomasetti Director of CORE AI and T2D2 Founder and CEO Badri Hiriyur.

In late September, T2D2 was one of four winners in the New York City Department of Buildings' first-ever "Hack the Building Code" Innovation Challenge, which was created to highlight ideas on how to improve building safety and modernize the development process in New York City.

About Helios Visions

Helios Visions is a safety-oriented drone services company specializing in drone-based facade inspection, drone mapping, and drone photos and video, and recently became the first drone services company in Chicago to receive an FAA waiver to fly over people. Helios Visions is a member of the CompTIA Drone Advisory Council and is fully compliant with FAA drone regulations, with an extensive portfolio of successful client projects. Additional information: call +1 (312) 999-0071 or visit Helios Visions.

About T2D2

T2D2 is a self-learning, AI-based software-as-a-service platform that automatically detects visible damage in a variety of building materials. It expedites condition assessments, saving time and money, and allows for more frequent assessments to detect and repair damage before it escalates. T2D2's superpowered algorithms were trained on Thornton Tomasetti's massive multi-year forensics database. For more information, go to T2D2.ai or call +1 917-661-7800.

About Thornton Tomasetti

Thornton Tomasetti applies engineering and scientific principles to solve the world's challenges starting with yours. An independent organization of creative thinkers and innovative doers collaborating from offices worldwide, our mission is to bring our clients' ideas to life and, in the process, lay the groundwork for a better, more resilient future. For more information visit http://www.ThorntonTomasetti.com or connect with us on LinkedIn, Twitter, Instagram, Facebook, Vimeo or YouTube.

Media Contact: Ted Parisot, +1-312-999-0071, [emailprotected]

SOURCE Helios Visions Drone Services

The rest is here:
Helios Visions Partners with Thornton Tomasetti's T2D2 to Provide Artificial Intelligence-Powered Drone Solution for Facade Inspection - PRNewswire