This Artificial Intelligence Stock Raised Its Dividend on "Black Thursday" – Motley Fool

As many now know, last Thursday was a historic day in the stock market. On March 12, 2020, the S&P 500 plunged 9.5% in a single day, its worst daily drop since "Black Monday" in 1987. The plunge came the day after President Trump delivered an underwhelming speech that included a European travel ban. However, stocks rallied on Friday after news of more government stimulus, emergency measures to boost testing, and purchases of oil for the country's strategic reserve. Negotiations over a comprehensive support package for the economy are also ongoing.

However, one tech company was tuning out the noise. Semiconductor equipment maker Applied Materials (NASDAQ:AMAT) decided to announce an increase in its dividend on the exact same day the market went into freefall. Is that a sign of confidence, or foolishness?

Image source: Getty Images.

Applied Materials announced that it would raise its quarterly dividend by a penny, from $0.21 to $0.22, a 4.8% boost. Applied's dividend yield is now 1.86%, but that's with a very modest 27.5% payout ratio. The higher dividend will be paid out on June 11, to shareholders of record as of May 21. CEO Gary Dickerson said: "We are increasing the dividend based on our strong cash flow performance and ongoing commitment to return capital to shareholders. ... We believe the AI-Big Data era will create exciting long-term growth opportunities for Applied Materials."
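A quick back-of-the-envelope check on those figures (a sketch; the implied share price is derived from the stated yield rather than quoted in the article):

```python
# Sanity check on the dividend figures quoted above.
old_div, new_div = 0.21, 0.22            # quarterly dividend per share

increase = (new_div - old_div) / old_div
print(f"Increase: {increase:.1%}")       # ~4.8%, matching the stated boost

# A 1.86% yield on the annualized payout implies a share price
# of roughly $47 (inferred, not quoted in the article).
annual_div = 4 * new_div                 # $0.88 per year
implied_price = annual_div / 0.0186
print(f"Implied share price: ${implied_price:.2f}")
```

The 4.8% figure is simply the one-cent raise measured against the old $0.21 payout.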

Semiconductors and semiconductor equipment companies have historically been known to be cyclical parts of the tech industry. However, it appears Applied Materials believes the overarching trends for faster and smarter semiconductors should help the company power through a near-term economic disruption. As chip-makers make smaller and more advanced chips, Applied's machines are a necessary expenditure.

But can long-term trends buffer the company through a potential global recession?

It should be known that the semiconductor industry was already in a downturn in 2019 and was beginning to come out of it in early 2020. For Applied, last quarter's results exceeded the high end of its previous guidance, with revenue up 11% and earnings per share up 21%. On Feb. 12, management also guided for solid sequential growth in Q2, even while lowering its prior numbers by $300 million because of the coronavirus as of that date.

On a Feb. 12 conference call with analysts, Dickerson reiterated that optimism:

We believe we can deliver strong double-digit growth in our semiconductor business this year as our unique solutions accelerate our customers' success in the AI-Big Data era... our current assessment is that the overall impact for fiscal 2020 will be minimal. However, with travel and logistics restrictions, we do expect changes in the timing of revenues during the year. We are actively managing the situation in collaboration with our customers and suppliers.

While many businesses across the world have seen severe interruptions, it's unclear if the chip industry will be affected as much as others, despite its reputation for cyclicality. While consumer-related electronics may take a temporary hit to demand, a more stay-at-home economy means the need for faster connections, which could actually increase demand for servers and base stations.

Memory chip research website DRAMeXchange released a report on March 13 outlining its projections for the DRAM and NAND flash industries as of March 1, along with a "bear case" scenario, updated March 12, should the coronavirus crisis escalate into a global recession.

Category                    | Current 2020 Projections | Bear Case 2020 Projections
Notebook computer shipments | (2.6%)                   | (9%)
Server shipments            | 5.1%                     | 3.1%
Smartphone shipments        | (3.5%)                   | (7.5%)
DRAM price growth           | 30%                      | 20%
NAND flash price growth     | 15%                      | (5%)

Data source: DRAMeXchange.

Notice that the enterprise-facing server segment looks poised to withstand a severe downturn much better than the consumer-facing notebook and smartphone segments. In addition, DRAM prices are poised to increase in 2020 even in a recession, as prices had already crashed last year and the industry cut back on capacity. NAND flash entered its downturn earlier than DRAM and was already beginning to come out of it, so its pricing has more downside in a bear-case scenario.

In addition, the largest global foundry, Taiwan Semiconductor (NYSE:TSM), said on March 11 that its capacity for leading-edge 5nm chip production was already "fully booked" and that volume production would begin in April. That indicates continued strong demand for leading-edge logic chips.

So while there may be some more softness in certain parts of the chip industry, there are still relatively strong segments as well. Therefore, Applied may not face revenue declines in 2020, but rather a mere absence of previously forecast growth. Yet even if that happens, growth will likely be deferred to 2021, not totally lost, as eventually the demand for chips will increase.

After its decline, Applied Materials stock trades at just 17 times trailing earnings, and just 14.7 times projected 2020 earnings, though 2020 projections may come down. Still, that's a reasonable price to pay for Applied, especially in a zero-interest rate environment. The company has just as much cash as debt, and its recent dividend raise on the market's darkest day in recent history shows long-term confidence. Risk-tolerant investors with a long enough time horizon thus may want to give Applied -- and the entire chip sector -- a look after the dust settles.

Read the rest here:
This Artificial Intelligence Stock Raised Its Dividend on "Black Thursday" - Motley Fool

3 AI ETFs Changing The World – Investorplace.com

A cornerstone of the technology that is shifting the way we go about our lives is artificial intelligence (AI). Also known as machine intelligence or machine learning, AI is the development of computer-driven technology used to perform functions and tasks that previously required human intelligence. That said, AI ETFs are reaping the benefits.

Within the sprawling AI universe, there are four pillars: reactive machines, limited memory, theory of mind and self-awareness. An example of reactive machines would be the famous Deep Blue chess-playing supercomputer from IBM (NYSE:IBM) while autonomous and self-driving vehicles would be examples of technologies in the limited memory category.

Everyday applications of AI include Apple's (NASDAQ:AAPL) Siri, Alphabet's (NASDAQ:GOOG, NASDAQ:GOOGL) search algorithm, and Amazon's (NASDAQ:AMZN) Alexa.

Those are basic forms of AI, but they serve as evidence of the market's growth and utility. Investors can harness those trends and more by taking advantage of the opportunities in AI ETFs.

That said, let's take a look at a few.

Source: Shutterstock

Expense Ratio: 0.75% per year, or $75 on a $10,000 investment

For investors looking for disruptive technology exposure, the actively managed ARK Innovation ETF (NYSEARCA:ARKK) fits the bill. The fund has a wide reach that encompasses not just pure AI, but industries using this next generation technology.

ARKK companies run the gamut from genomics firms to fintech providers, next-generation internet (shared work and related infrastructure), and industrial innovation, among others. Like some other ARK funds, ARKK is known for its large weight in Tesla (NASDAQ:TSLA), which is more than 10%. However, it features plenty of other high fliers with dominant positioning in their respective markets, including Square (NYSE:SQ) and Illumina (NASDAQ:ILMN).

Moreover, some of ARKK's allure as an AI ETF comes from its exposure to the deep-learning market, a truly compelling long-term trend.

In fact, ARK believes that deep learning will be more impactful than the Internet:

The Internet created roughly $10 trillion in global equity market capitalization in 20 years. We believe that deep learning will have 3x that impact, adding $30 trillion to global equity markets over the next two decades.

Source: Shutterstock

Expense Ratio: 0.68% per year

The Global X Robotics & Artificial Intelligence ETF (NASDAQ:BOTZ) is an established giant in the world of AI ETFs with over $1 billion in assets under management and a track record spanning nearly four years.

The fund holds 38 stocks, and its top holding is Nvidia (NASDAQ:NVDA), a name with deep AI credibility. That stock accounts for the bulk of the semiconductor exposure in BOTZ. Underscoring this fund's diversity, BOTZ features allocations to 14 industry groups, including chip makers.

Importantly, BOTZ provides exposure to increasing efficiencies in the AI universe. In turn, these are widely viewed as a vital long-term driver of AI investment outcomes.

"In the past, training robotics was laborious and required time, capital, and engineering expertise, but AI simulators are becoming increasingly accurate at transferring learning to real-world applications," according to Global X research. "These simulators can run thousands of iterative processes in seconds, creating vast amounts of training data."

Source: Shutterstock

Expense Ratio: 0.40%

The Defiance Quantum ETF (NYSEARCA:QTUM) is one of the premier AI ETFs when it comes to accessing the deep and machine learning themes. The fund's underlying benchmark, the BlueStar Quantum Computing and Machine Learning Index, provides robust exposure to those markets.

Home to 60 stocks, QTUM's index gives the fund a deeper bench than many competing AI ETFs. QTUM itself has 84 holdings.

QTUM's components are involved in quantum computing, and data indicate that the fund's exposure to this burgeoning theme could be a positive long-term driver.

"The global commercial quantum computing market is expected to reach $1.3 billion by 2027, at a compound annual growth rate (CAGR) of 52.9% from 2022 to 2027, and $161 million by 2022, from $33.0 million in 2017, at a CAGR of 37.3% for the period 2017-2022," notes BCC Research.
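The two growth figures in that quote compound consistently; here is a quick arithmetic check (a sketch using only numbers from the quote):

```python
# Check that the BCC Research figures compound as stated.
def grow(start, cagr, years):
    """Apply a compound annual growth rate over a number of years."""
    return start * (1 + cagr) ** years

m2022 = grow(33.0, 0.373, 5)   # $33.0M (2017) at 37.3% -> ~$161M by 2022
m2027 = grow(161, 0.529, 5)    # $161M (2022) at 52.9% -> ~$1.35B by 2027
print(round(m2022), round(m2027))
```

So the "reach $1.3 billion by 2027" headline is consistent with its own CAGR, modulo rounding.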

Todd Shriber has been an InvestorPlace contributor since 2014. As of this writing, he did not hold a position in any of the aforementioned securities.

Read more:
3 AI ETFs Changing The World - Investorplace.com

The Evolution of Artificial Intelligence and Future of National Security – The National Interest

Artificial intelligence is all the rage these days. In the popular media, regular cyber systems seem almost passé, as writers focus on AI and conjure up images of everything from real-life Terminator robots to more benign companions. In intelligence circles, China's use of closed-circuit television, facial recognition technology, and other monitoring systems suggests the arrival of Big Brother, if not quite in 1984, then only about forty years later. At the Pentagon, legions of officers and analysts talk about the AI race with China, often with foreboding admonitions that the United States cannot afford to be second in class in this emerging realm of technology. In policy circles, people wonder about the ethics of AI, such as whether we can really delegate to robots the ability to use lethal force against America's enemies, however bad they may be. A new report by the Defense Innovation Board lays out broad principles for the future ethics of AI, but only in general terms that leave lots of further work still to be done.

What does it all really mean, and is AI likely to be all it's cracked up to be? We think the answer is complex, and that a modest dose of cold water should be thrown on the subject. In fact, many of the AI systems being envisioned today will take decades to develop. Moreover, AI is often being confused with things it is not. Precision about the concept will be essential if we are to have intelligent discussions about how to research, develop, and regulate AI in the years ahead.

AI systems are basically computers that can learn how to do things through a process of trial and error, with some mechanism for telling them when they are right and when they are wrong, such as picking out missiles in photographs, or people in crowds, as with the Pentagon's "Project Maven", and then applying what they have learned to diagnose future data. In other words, with AI, the software is built by the machine itself, in effect. The broad computational approach for a given problem is determined in advance by real old-fashioned humans, but the actual algorithm is created through a process of trial and error by the computer as it ingests and processes huge amounts of data. The thought process of the machine is really not that sophisticated: it is developing artificial instincts more than intelligence, examining huge amounts of raw data and figuring out how to recognize a cat in a photo or a missile launcher on a crowded highway, rather than engaging in deep thought (at least for the foreseeable future).
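The trial-and-error loop described above can be made concrete with a toy example, a minimal sketch rather than anything resembling a real military system: a perceptron nudges its weights whenever its guess is wrong, and the decision rule that results is created by the data, not written by hand.

```python
# A machine "learning from trial and error": the perceptron adjusts its
# weights only when its guess is wrong -- the right/wrong feedback signal.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - guess        # 0 when right; +/-1 when wrong
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Stand-in "data": learn the AND function from labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

No human wrote the final decision rule; the weights that implement it emerged from repeated correction, which is the sense in which the software is "built by the machine itself."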

This definition allows us quickly to identify some types of computer systems that are not, in fact, AI. They may be important, impressive, and crucial to the warfighter but they are not artificial intelligence because they do not create their own algorithms out of data and multiple iterations. There is no machine learning involved, to put it differently. As our colleague, Tom Stefanick, points out, there is a fundamental difference between advanced algorithms, which have been around for decades (though they are constantly improving, as computers get faster), and artificial intelligence. There is also a difference between an autonomous weapons system and AI-directed robotics.

For example, the computers that guide a cruise missile or a drone are not displaying AI. They follow an elaborate, but predetermined, script, using sensors to take in data and then putting it into computers, which then use software (developed by humans, in advance) to determine the right next move and the right place to detonate any weapons. This is autonomy. It is not AI.

Or, to use an example closer to home for most people, when your smartphone uses an app like Google Maps or Waze to recommend the fastest route between two points, this is not necessarily AI either. There are only so many possible routes between two places. Yes, there may be dozens or hundreds, but the number is finite. As such, the computer in your phone can essentially look at each reasonable possibility separately, taking in data from the broader network that many other people's phones contribute, to factor traffic conditions into the computation. But the way the math is actually done is straightforward and predetermined.
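That "look at each possibility" approach is plain exhaustive search, and a toy version makes the contrast with machine learning concrete. This is a sketch over a made-up four-intersection road network; the edge weights are hypothetical travel times with traffic already folded in.

```python
# A toy road network: every route option is enumerable, so no learning
# is needed -- the phone just scores each possibility and picks the best.
graph = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 4},
    "C": {"B": 1, "D": 7},
    "D": {},
}

def all_routes(g, start, end, path=(), cost=0):
    """Yield every simple path from start to end with its total time."""
    path = path + (start,)
    if start == end:
        yield path, cost
        return
    for nxt, minutes in g[start].items():
        if nxt not in path:               # no revisits, so routes stay finite
            yield from all_routes(g, nxt, end, path, cost + minutes)

best_route, best_time = min(all_routes(graph, "A", "D"), key=lambda rc: rc[1])
print(best_route, best_time)
```

The procedure is fully predetermined: nothing in it changes with experience, which is exactly the distinction the paragraph draws.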

Why is this important? For one thing, it should make us less breathless about AI, and see it as one element in a broader computer revolution that began in the second half of the twentieth century and picked up steam in this century. Also, it should help us see what may or may not be realistic and desirable to regulate in the realm of future warfare.

The former vice chairman of the Joint Chiefs of Staff, Gen. Paul Selva, has recently argued that the United States could be about a decade away from having the capacity to build an autonomous robot that could decide when to shoot and whom to kill, though he also asserted that the United States had no plans actually to build such a creature. But if you think about it differently, in some ways we've already had autonomous killing machines for a generation. That cruise missile we discussed above has been deployed since the 1970s. It has instructions to fly a given route and then detonate its warhead without any human in the loop. And by the 1990s, we knew how to build things like skeet submunitions that could loiter over a battlefield and look for warm objects like tanks, using software to decide when to destroy them. So the killer machine was in effect already deciding for itself.

Even if General Selva's terminator is not built, robotics will in some cases likely be given greater authority to decide when to use force, since we have in effect already crossed this threshold. This highly fraught subject requires careful ethical and legal oversight, to be sure, and the associated risks are serious. Yet the speed at which military operations must occur will create incentives not to have a person in the decisionmaking loop in many tactical settings. Whatever the United States may prefer, restrictions on automated uses of violent force would also appear relatively difficult to negotiate (even if desirable), given likely opposition from Russia and perhaps from other nations, as well as huge problems with verification.

For example, small robots that can operate as swarms on land, in the air, or in the water may be given certain leeway to decide when to operate their lethal capabilities. By communicating with each other and processing information about the enemy in real time, they could concentrate attacks where defenses are weakest, in a form of combat that John Allen and Amir Husain call "hyperwar" because of its speed and intensity. Other types of swarms could attack parked aircraft; even small explosives, precisely detonated, could disable wings or engines or produce secondary and much larger explosions. Many countries will have the capacity to do such things in the coming twenty years. Even if the United States tries to avoid using such swarms for lethal and offensive purposes, it may elect to employ them as defensive shields (perhaps against North Korean artillery attack on Seoul) or as jamming aids to accompany penetrating aircraft. With UAVs that can fly ten hours and one hundred kilometers now costing only in the hundreds of thousands of dollars, and quadcopters with ranges of a kilometer more or less costing in the hundreds of dollars, the trendlines are clear, and the affordability of using many drones in an organized way is evident.

Where regulation may be possible, and ethically compelling, is in limiting the geographic and temporal space where weapons driven by AI or other complex algorithms can use lethal force. For example, the swarms noted above might only be enabled near a ship, or in the skies near the DMZ in Korea, or within a small distance of a military airfield. It may also be smart to ban letting machines decide when to kill people. It might be tempting to use facial recognition technology on future robots to have them hunt the next bin Laden, Baghdadi, or Soleimani in a huge Mideastern city. But the potential for mistakes, for hacking, and for many other malfunctions may be too great to allow this kind of thing. It probably also makes sense to ban the use of AI to attack the nuclear command and control infrastructure of a major nuclear power. Such attempts could give rise to "use them or lose them" fears in a future crisis and thereby increase the risks of nuclear war.

We are in the early days of AI. We can't yet begin to foresee where it's going and what it may make possible in ten or twenty or thirty years. But we can work harder to understand what it actually is, and also think hard about how to put ethical boundaries on its future development and use. The future of warfare, for better or for worse, is literally at stake.

Retired Air Force Gen. Lori Robinson is a nonresident senior fellow on the Security and Strategy team in the Foreign Policy program at Brookings. She was commander of all air forces in the Pacific.

The rest is here:
The Evolution of Artificial Intelligence and Future of National Security - The National Interest

If we use it correctly, artificial intelligence could help us fight the next epidemic – Genetic Literacy Project

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients, including various governments, hospitals, and businesses, to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.

That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives. But how much has AI really helped in tackling the current outbreak?

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases, that AI is a powerful new weapon against diseases, is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes.

Read the original:
If we use it correctly, artificial intelligence could help us fight the next epidemic - Genetic Literacy Project

Venture Capitalist Tim Draper: Bitcoin, decentralization and artificial intelligence could transform global industries – FXStreet

According to the renowned billionaire venture capitalist Tim Draper, decentralization is revolutionizing currency systems around the world via the largest cryptocurrency by market capitalization, Bitcoin. Bitcoin, riding on decentralization, is converging with another technology that is also going to have a big impact: artificial intelligence. Draper added:

Those technologies now have the ability to transform the biggest industries in the world. It is not just currency. It is banking and finance, insurance, real estate, healthcare, government. All those industries all in the trillions of dollars, they are hugely valuable, have the potential to be transformed by these new technologies.

Putting his comments into perspective in a new episode of 415 Stories, Draper said that he could develop an insurance firm featuring an AI to detect fraud and, utilizing smart contracts and Bitcoin, allow it to run on a blockchain.

Follow this link:
Venture Capitalist Tim Draper: Bitcoin, decentralization and artificial intelligence could transform global industries - FXStreet

IIT-M to reskill women in artificial intelligence – The Hindu

The Indian Institute of Technology-Madras is offering 150 hours of training to reskill women who have taken a break from their career.

The certification course covers artificial intelligence, machine learning, cyber security, data science, and big data.

Career Back 2 Women is an initiative through the Institute's Digital Skills Academy. Candidates can choose the level of training.

The institute has tied up with Forensic Intelligence Surveillance and Security Technologies (FISST) to offer the programme.

IIT-M director Bhaskar Ramamurthi said, "In the IT field, the technology changes are so rapid that they [women who take a break] are unable to get back to their careers as their skills are probably outdated. Despite this, their industry experience and knowledge about IT are immense and can be useful to many IT companies if they can fit into current requirements immediately. IIT-Madras is happy to pioneer this programme to help them get back to work and retrieve their careers."

Women who complete the advanced module in select tracks will also receive assistance with job placement.

Digital Skills Academy, IIT-Madras, also plans to offer more courses at various levels for students and working professionals in association with NASSCOM and in partnership with training companies incubated at IIT Madras Research Park and industry partners.

K. Mangala Sunder, head of the Digital Skills Academy, said, "IIT-M works with the NASSCOM IT-ITeS Sector Skill Council to ensure that the right industry partners are involved in training. Faculty from premier institutions provide fundamental knowledge to all learners."

According to C. Mohan Ram, Chief Mission Integrator and Innovator, FISST, all participants will take a 20-hour programme after which they can choose their area of specialisation. There are four tracks offered initially. Each track has basic and advanced modules.


See the article here:
IIT-M to reskill women in artificial intelligence - The Hindu

Conference "Artificial intelligence - Intelligent politics: Challenges and opportunities for media and democracy" postponed to October 2020 – Council of…

In view of the outbreak of the coronavirus (COVID-19), the co-organisers, the Council of Europe and the Government of the Republic of Cyprus, have carefully assessed the situation and, after due consideration regarding the health and safety of the participants, have jointly agreed to postpone the Conference of Ministers responsible for Media and Information Society on the theme "Artificial intelligence - Intelligent politics: Challenges and opportunities for media and democracy", originally scheduled to take place on 28 and 29 May 2020 in Nicosia, Cyprus, to 22 and 23 October 2020.

We thank all speakers and participants for their understanding and invite them to mark the new dates on their calendars.

Please note that any room booking at the Landmark Nicosia Hotel made at the special rate for Conference participants needs to be cancelled; failing that, the standard rate will apply.

Further updates will be made available on the Conference website: http://www.coe.int/media2020nicosia

Read the original post:
Conference Artificial intelligence Intelligent politics: Challenges and opportunities for media and democracy postponed to October 2020 - Council of...

Artificial Intelligence (AI) in Supply Chain Market to Grow at a CAGR of 45.3% to Reach $21.8 billion by 2027, Largely Driven by the Consistent…

London, March 11, 2020 (GLOBE NEWSWIRE) -- The Artificial Intelligence (AI) in supply chain market is expected to grow at a CAGR of 45.3% from 2019 to 2027 to reach $21.8 billion by 2027.

Artificial intelligence has emerged as one of the most potent technologies of the past few years and is transforming the landscape of almost all industry verticals. Although enterprise applications based on AI and machine learning (ML) are still in the nascent stages of development, they are gradually beginning to drive businesses' innovation strategies.

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=5064

In the supply chain and logistics industry, artificial intelligence is gaining rapid traction among industry stakeholders. Players operating in the supply chain and logistics industry are increasingly realizing the potential of AI to solve the complexities of running a global logistics network. Adoption of artificial intelligence in the supply chain is ushering in a new era of industrial transformation, allowing companies to track their operations, enhance supply chain management productivity, augment business strategies, and engage with customers in the digital world.

The growth of the AI in supply chain market is mainly driven by rising awareness of artificial intelligence and big data & analytics and the widening implementation of computer vision in both autonomous and semi-autonomous applications. In addition, consistent technological advancements in the supply chain industry, rising demand for AI-based business automation solutions, and the evolution of supply chains to complement growing industrial automation offer further opportunities for vendors providing AI solutions in the supply chain industry. However, high deployment and operating costs and a lack of infrastructure hinder the growth of the artificial intelligence in supply chain market.

In this study, the global artificial intelligence (AI) in supply chain market is segmented on the basis of component, application, technology, end user, and geography.

Based on component, the AI in supply chain market is broadly segmented into hardware, software, and services. The software segment commanded the largest share of the overall AI in supply chain market in 2019. This can be attributed to the increasing demand for AI-based platforms and solutions, as they offer supply chain visibility through software covering inventory control, warehouse management, order procurement, and reverse logistics & tracking.

Based on technology, the AI in supply chain market is broadly segmented into machine learning, computer vision, natural language processing, and context-aware computing. In 2019, the machine learning segment commanded the largest share of the overall AI in supply chain market. The growth of this segment can be attributed to the growing demand for AI-based intelligent solutions; increasing government initiatives; and the ability of AI solutions to efficiently handle and analyze big data and quickly scan, parse, and react to anomalies.

Based on application, AI in supply chain market is broadly segmented into supply chain planning, warehouse management, fleet management, virtual assistant, risk management, inventory management, and planning & logistics. In 2019, the supply chain planning segment commanded the largest share of the overall AI in supply chain market. The growth of this segment can be attributed to the increasing demand for enhancing factory scheduling & production planning and the evolving agility and optimization of supply chain decision-making. In addition, digitizing existing processes and workflows to reinvent the supply chain planning model is also contributing to the growth of this segment.

To gain more insights into the market with a detailed table of contents and figures, click here: https://www.meticulousresearch.com/product/artificial-intelligence-ai-in-supply-chain-market-5064/

Based on end user, the artificial intelligence (AI) in supply chain market is broadly segmented into the manufacturing, food & beverage, healthcare, automotive, aerospace, retail, and consumer packaged goods sectors. The retail sector commanded the largest share of the overall AI in supply chain market in 2019, which can be attributed to the increase in demand for consumer retail products.

Based on geography, the global artificial intelligence (AI) in supply chain market is categorized into five major geographies: North America, Europe, Asia-Pacific, Latin America, and the Middle East & Africa. In 2019, North America commanded the largest share of the global AI in supply chain market, followed by Europe, Asia-Pacific, Latin America, and the Middle East & Africa. The large share of the North American region is attributed to the presence of developed economies focused on enhancing existing solutions in the supply chain space and the presence of major players in this market, along with a high willingness to adopt advanced technologies.

On the other hand, the Asia-Pacific region is projected to grow at the fastest CAGR during the forecast period. The high growth rate is attributed to rapidly developing economies in the region; the presence of a young and tech-savvy population; the growing proliferation of the Internet of Things (IoT); rising disposable income; increasing acceptance of modern technologies across several industries, including automotive, manufacturing, and retail; and the broadening implementation of computer vision technology in numerous applications. Furthermore, the growing adoption of AI-based solutions and services in supply chain operations, increasing digitalization, and improving connectivity infrastructure are also playing a significant role in the growth of the AI in supply chain market in the region.

The global artificial intelligence in supply chain market is fragmented and characterized by the presence of several companies competing for market share. Some of the leading companies in the AI in supply chain market come from a core technology background. These include IBM Corporation (U.S.), Microsoft Corporation (U.S.), Google LLC (U.S.), and Amazon.com, Inc. (U.S.). These companies lead the market owing to their strong brand recognition, diverse product portfolios, strong distribution & sales networks, and strong organic & inorganic growth strategies.

The other key players operating in the global AI in supply chain market are Intel Corporation (U.S.), Nvidia Corporation (U.S.), Oracle Corporation (U.S.), Samsung (South Korea), LLamasoft, Inc. (U.S.), SAP SE (Germany), General Electric (U.S.), Deutsche Post DHL Group (Germany), Xilinx, Inc. (U.S.), Micron Technology, Inc. (U.S.), FedEx Corporation (U.S.), ClearMetal, Inc. (U.S.), Dassault Systèmes (France), and JDA Software Group, Inc. (U.S.), among others.

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=5064

Related Reports:

Artificial Intelligence in Manufacturing Market by Offering (Hardware, Software, and Services), End-use Industry (Semiconductors and Electronics, Energy and Power, Pharmaceuticals, Chemicals, Medical Devices, Automobile, Heavy Metal and Machine Manufacturing, Food and Beverages, Others), Technology (Machine Learning, NLP, Context-Aware Computing, and Computer Vision), Application (Predictive Maintenance, Material Movement, Production Planning, Field Services, Quality Management, Cybersecurity, Industrial Robotics, and Reclamation), and Region - Global Forecast to 2025

Artificial Intelligence in Healthcare Market by Product (Hardware, Software, Services), Technology (Machine Learning, Context-Aware Computing, NLP), Application (Drug Discovery, Precision Medicine), End User, and Geography - Global Forecast to 2025

Artificial Intelligence in Retail Market by Product (Chatbot, Customer Relationship Management), Application (Programmatic Advertising), Technology (Machine Learning, Natural Language Processing), Retail (E-commerce and Direct Retail) - Forecast to 2025

Automotive Artificial Intelligence Market by Offering (Hardware, Software), Technology (Machine Learning, Deep Learning, Computer Vision, Context Awareness, Natural Language Processing), Process (Signal Recognition, Image Recognition, Voice Recognition, Data Mining), Drive (Autonomous Drive, Semi-autonomous Drive), and Region - Global Forecast to 2025

Artificial Intelligence in Security Market by Offering (Hardware, Software, Service), Security Type (Network Security, Application Security), Technology (Machine Learning, NLP, Context Awareness), Solution, End User, and Region - Global Forecast to 2027

Meticulous Research also offers Custom Research services providing focused, comprehensive, and tailored research.

About Meticulous Research

The name of our company defines our services, strengths, and values. Since our inception, we have strived to research, analyze, and present critical market data with great attention to detail.

Meticulous Research was founded in 2010 and incorporated as Meticulous Market Research Pvt. Ltd. in 2013 as a private limited company under the Companies Act, 1956. Since its incorporation, with the help of its unique research methodologies, the company has become the leading provider of premium market intelligence in North America, Europe, Asia-Pacific, Latin America, and Middle East & Africa regions.

Using meticulous primary and secondary research techniques, we have built strong capabilities in data collection, interpretation, and analysis, spanning both qualitative and quantitative research, with a fine team of analysts. We design our meticulously analyzed, value-driven syndicated market research reports, custom studies, quick-turnaround research, and consulting solutions to address the business challenge of sustainable growth.

See the original post here:
Artificial Intelligence (AI) in Supply Chain Market to Grow at a CAGR of 45.3% to Reach $21.8 billion by 2027, Largely Driven by the Consistent...

Cybersecurity pros are using artificial intelligence but still prefer the human touch – TechRepublic

More than half of organizations have adopted AI for security efforts, but a majority are more confident in results verified by humans, according to WhiteHat Security.

Security professionals need a varied bag of tricks to keep up with savvy and sophisticated cybercriminals. Artificial intelligence is one valuable weapon in the arsenal as it can handle certain tasks faster and more efficiently than can human beings. But AI being AI, it's far from perfect. That's why many security pros still want the human element to play a significant role in their security defense, according to a survey from WhiteHat Security.

SEE: The 10 most important cyberattacks of the decade (free PDF) (TechRepublic)

Based on a survey of 102 industry professionals conducted at the RSA Conference 2020, WhiteHat's "AI and Human Element Security Sentiment Study" found that more than half of the respondents are using AI or machine learning (ML) in their security efforts. More than 20% said that AI-based tools have made their cybersecurity teams more efficient by eliminating a huge number of more mundane tasks.

Image: WhiteHat Security

Further, almost 40% of respondents said they feel their stress levels have dropped since adding AI tools to their security process. And among those, 65% said that AI tools let them focus more on mitigating and preventing cyberattacks than they could previously.

However, incorporating AI doesn't take human beings out of the security equation; just the opposite. A majority of those polled agreed that the human element offers skills that AI and ML can't match.

Almost 60% of the respondents said they remain more confident in cyberthreat findings verified by humans than by AI. When asked why they prefer the human touch, 30% pointed to intuition as the most important human element, 21% mentioned the role of creativity, and almost 20% cited previous experience and frame of reference as the most critical advantage of humans over AI.

For its part, WhiteHat described three reasons it supplements its own AI and machine learning systems with human verification:



Read more from the original source:
Cybersecurity pros are using artificial intelligence but still prefer the human touch - TechRepublic

The VA Has Embraced Artificial Intelligence To Improve Veterans’ Health Care – KPBS

Wednesday, March 11, 2020


Credit: Stephanie Colombini/American Homefront

Above: Drs. Andrew Borkowski (left) and Stephen Mastorides analyze slides under a microscope to spot cancer in tissue samples in this undated photo.

Aired 3/11/20 on KPBS News

Listen to this story by Stephanie Colombini.

Inside a laboratory at the James A. Haley Veterans' Hospital in Tampa, Fla., machines are rapidly processing tubes of patients' body fluids and tissue samples. Pathologists examine those samples under microscopes to spot signs of cancer and other diseases.

But distinguishing certain features about a cancer cell can be difficult, so Drs. Stephen Mastorides and Andrew Borkowski decided to get a computer involved.

In a series of experiments, they uploaded hundreds of images of slides containing lung and colon tissues into artificial intelligence software. Some of the tissues were healthy, while others had different types of cancer, including squamous cell and adenocarcinoma.

Then they tested the software with more images the computer had never seen before.

"The module was able to put it together, and it was able to differentiate, 'Is it a cancer or is it not a cancer?'" Borkowski said. "And not only that, but it was also able to say what kind of cancer is it."

The doctors were harnessing the power of what's known as machine learning. Software pre-trained with millions of images, like dogs and trees, can learn to distinguish new ones. Mastorides, chief of pathology and laboratory medicine services at the Tampa VA, said it took only minutes to teach the computer what cancerous tissue looks like.
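The idea described above — reusing a network pre-trained on everyday images as a feature extractor, then training a lightweight classifier on a small labeled set — can be sketched in miniature. The feature vectors and labels below are invented for illustration (a real pipeline would extract features with a pre-trained convolutional network); the sketch only shows how a simple nearest-centroid classifier can separate classes once good features exist.

```python
# Toy sketch of the transfer-learning idea: assume a pre-trained network
# has already turned each tissue slide into a short feature vector, and
# train a trivial nearest-centroid classifier on a few labeled examples.
# All numbers here are hypothetical.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(sample, centroids):
    """Return the label whose class centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical 3-D feature vectors "extracted" from labeled slides.
training = {
    "healthy":        [[0.10, 0.20, 0.10], [0.20, 0.10, 0.20]],
    "adenocarcinoma": [[0.90, 0.80, 0.90], [0.80, 0.90, 0.80]],
}
centroids = {label: centroid(vecs) for label, vecs in training.items()}

# A new, unseen sample lands near the cancer cluster.
print(nearest_centroid([0.85, 0.90, 0.80], centroids))  # adenocarcinoma
```

Because the heavy lifting (learning useful features) was done during pre-training, the downstream classifier needs only a handful of labeled examples and trains in seconds — consistent with Mastorides' remark that teaching the computer took only minutes.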

The two VA doctors recently published a study comparing how different AI programs performed when training computers to diagnose cancer.

"Our earliest studies showed accuracies over 95 percent," Mastorides said.

Enhance, not replace

The doctors said the technology could be especially useful in rural veterans clinics, where pathologists and other specialists aren't easily accessible, or in crowded VA emergency rooms, where being able to spot something like a brain hemorrhage faster could save more lives.

Borkowski, the chief of the hospital's molecular diagnostics section, said he sees AI as a tool to help doctors work more efficiently, not to put them out of a job.

"It won't replace the doctors, but the doctors who use AI will replace the doctors that don't," he said.

The Tampa pathologists aren't the first to experiment with machine learning in this way. The U.S. Food and Drug Administration has approved about 40 algorithms for medicine, including apps that predict blood sugar changes and help detect strokes in CT scans.

The VA already uses AI in several ways, such as scanning medical records for signs of suicide risks. Now the agency is looking to expand research into the technology.

The department announced the hiring of Gil Alterovitz as its first-ever Artificial Intelligence Director in July 2019 and launched The National Artificial Intelligence Institute in November. Alterovitz is a Harvard Medical School professor who co-wrote an artificial intelligence plan for the White House last year.

He said the VA has a "unique opportunity to help veterans" with artificial intelligence.

As the largest integrated health care system in the country, the VA has vast amounts of patient data, which is helpful when training AI software to recognize patterns and trends. Alterovitz said the health system generates about a billion medical images a year.

He described a potential future where AI could help combine the efforts of various specialists to improve diagnoses.

"So you might have one site where a pathologist is looking at slides, and then a radiologist is analyzing MRI and other scans that look at a different level of the body," he said. "You could have an AI orchestrator putting together different pieces and making potential recommendations that teams of doctors can look at."

Alterovitz is also looking for other uses to help VA staff members make better use of their time and help patients in areas where resources are limited.

"Being able to cut the (clinician) workload down is one way to do that," he said. "Other ways are working on processes, so reducing patient wait times, analyzing paperwork, etc."

Barriers to AI

But Alterovitz notes there are challenges to implementing AI, including privacy concerns and trying to understand how and why AI systems make decisions.

Last year, DeepMind Technologies, an AI firm owned by Google, used VA data to test a system to predict deadly kidney disease. But for every correct prediction, there were two false positives.

Those false results may cause doctors to recommend inappropriate treatments, run unnecessary tests, or do other things that could harm patients, waste time, and reduce confidence in the technology.
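The ratio reported for the DeepMind system — two false positives for every correct prediction — corresponds to a precision of one in three. The counts below are illustrative, chosen only to match that 1:2 ratio:

```python
# Precision implied by "for every correct prediction, there were two
# false positives." Counts are hypothetical; only the ratio matters.
true_positives = 100
false_positives = 200  # two false alarms per correct prediction

precision = true_positives / (true_positives + false_positives)
print(f"precision = {precision:.1%}")  # precision = 33.3%
```

In other words, roughly two out of every three alerts the system raised would have been false alarms, which is why the article flags unnecessary tests and eroded confidence as risks.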

"It's important for AI systems to be tested in real-world environments with real-world patients and clinicians, because there can be unintended consequences," said Mildred Cho, the Associate Director of the Stanford Center for Biomedical Ethics.

Cho also said it's important to test AI systems with a variety of demographics, because what may work for one population may not for another. The DeepMind study acknowledged that more than 90 percent of the patients in the dataset it used to test the system were male veterans, and that performance was lower for females.

Alterovitz said the VA is taking those concerns into account as the agency experiments with AI and tries to improve upon the technology to ensure it is reliable and effective.

This story is part of our American Homefront Project, a public media collaboration on in-depth military coverage with funding from the Corporation for Public Broadcasting and The Patriots Connection.



Original post:
The VA Has Embraced Artificial Intelligence To Improve Veterans' Health Care - KPBS