IBM Leverages Artificial Intelligence to Automate IT Infrastructure – HITInfrastructure.com

May 07, 2020 - IBM recently launched new capabilities and services powered by artificial intelligence to help chief information officers (CIOs) steer their businesses through recovery and restart in the wake of the COVID-19 pandemic.

The offerings, announced at IBM's Think Digital conference, are intended to help CIOs automate their IT infrastructures to become more resilient to disruptions and to reduce costs.

"What we've learned from companies all over the world is that there are three major factors that will determine the success of AI in business: language, automation and trust," said Rob Thomas, senior vice president, cloud and data platform at IBM.

"The COVID-19 crisis and increased demand for remote work capabilities are driving the need for AI automation at an unprecedented rate and pace. With automation, we are empowering next-generation CIOs and their teams to prioritize the crucial work of today's digital enterprises: managing and mining data to apply predictive insights that help lead to more impactful business results and lower cost."

IDC, a market intelligence firm, predicted that by 2024, enterprises powered by AI will be able to respond to customers, competitors, regulators, and partners 50 percent faster than those not using AI.

To that end, IBM launched IBM Watson AIOps at the Think Digital conference; the product uses AI to automate how enterprises self-detect, diagnose, and respond to IT anomalies in real time.

Unforeseen IT incidents cost businesses. Watson AIOps allows organizations to introduce automation at the infrastructure level to help CIOs predict and shape future outcomes, focus on higher-value work, and build more responsive and intelligent networks that can stay up and running longer, IBM stated.
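
IBM has not published Watson AIOps internals here, so as a purely generic illustration of what infrastructure-level anomaly detection can look like (the metric values, window size, and threshold below are invented, and this is not IBM's algorithm), a rolling z-score over a stream of metrics is the textbook starting point:

# Generic anomaly-detection sketch, not Watson AIOps code: flag metric
# samples that sit far outside a rolling baseline.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) for samples deviating > threshold sigmas."""
    history = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(history) >= 5:  # need a minimal baseline first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x
        history.append(x)

latencies = [100, 102, 98, 101, 99, 103, 100, 450, 101, 99]  # ms
print(list(detect_anomalies(latencies)))  # -> [(7, 450)]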

Built on Red Hat OpenShift, Watson AIOps runs across any cloud environment and works in collaboration with an ecosystem of partners, including Slack and Box.

"The greatest challenge for organizations is one of alignment. Slack is most valuable when it integrates tightly with the other tools customers use every day, bringing critical business information into channels where teams can collaborate on it," said Stewart Butterfield, Slack CEO and co-founder. "By using Slack with Watson AIOps, IT operators can effectively collaborate on incident solutions, allowing them to spend critical time solving problems rather than identifying them."

Aaron Levie, CEO of Box, also noted that secure data sharing and file access are vital, especially during this time of remote work.

"It is more important than ever before, and we're thrilled to expand our partnership with IBM to deliver content and collaboration across Watson AIOps, enabling IT organizations and businesses to get work done faster, simpler, and more securely," Levie said in the IBM announcement.

As part of the Think Digital conference rollout, IBM also launched the Accelerator for Application Modernization with AI, which was designed to help clients decrease the effort and costs associated with application modernization.

The accelerator includes various tools designed to optimize modernization and improve the analysis and recommendations for architectural and microservices options, IBM said. The accelerator uses AI models that learn, adapting to preferred software engineering practices and staying up to date with the evolution of technologies and platforms.

In addition to the Think Digital conference launches, IBM announced various new and updated capabilities designed to give CIOs guidance for operating successfully during a pandemic. The new capabilities are designed to help automate business planning, business operations, and call centers.

IBM Cloud Pak for Data, IBM's fully integrated data and AI platform, was also updated with new capabilities to help business leaders automate access to critical business-ready data. And a new update to IBM Cloud Pak for Automation, software for designing, building, and running automation apps, allows clients to easily create AI digital-worker automation solutions.

Additionally, updates to IBM Watson Assistant, IBM's AI-based conversation platform, will automate the more complex interactions and potentially boost customer satisfaction while reducing operating costs. The assistant now has a pre-built user interface that requires no development effort to deploy and is designed with user experience-based best practices, IBM stated.

Link:
IBM Leverages Artificial Intelligence to Automate IT Infrastructure - HITInfrastructure.com

Global Artificial Intelligence in Agriculture Industry (2020 to 2026) – Developing Countries to Offer Significant Growth Opportunities – GlobeNewswire

Dublin, May 06, 2020 (GLOBE NEWSWIRE) -- The "Artificial Intelligence in Agriculture Market by Technology (Machine Learning, Computer Vision, and Predictive Analytics), Offering (Software, Hardware, AI-as-a-Service, and Services), Application, and Geography - Global Forecast to 2026" report has been added to ResearchAndMarkets.com's offering.

The AI in agriculture market is projected to grow at a CAGR of 25.5% from 2020 to 2026.
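
As a quick sanity check on that figure (our arithmetic, not the report's), the standard CAGR relationship implies the market would expand to roughly 3.9 times its 2020 size over the six-year window:

(1 + 0.255)^(2026 - 2020) = 1.255^6 ≈ 3.9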

Growth of the AI in agriculture market is propelled by the increasing implementation of data generation through sensors and aerial images for crops, increasing crop productivity through deep-learning technology, and government support for the adoption of modern agricultural techniques. However, the high cost of gathering precise field data restrains market growth. Developing countries, such as China, Brazil, and India, are likely to provide an opportunity for players in the AI in agriculture market due to the increasing use of unmanned aerial vehicles/drones on agricultural farms in these countries.

By technology, the machine learning segment is estimated to account for the largest share of the AI in agriculture market during the forecast period.

Machine learning-enabled solutions are being significantly adopted by agricultural organizations and farmers worldwide to enhance farm productivity and to gain a competitive edge in business operations. In the coming years, the application of machine learning in various agricultural practices is expected to rise exponentially.

By offering, the AI-as-a-Service segment is projected to register the highest CAGR from 2020 to 2026.

Increasing demand for machine learning toolkits and applications available as AI-based services, along with benefits such as advanced infrastructure at minimal cost, transparency in business operations, and better scalability, is leading to the growth of the AI-as-a-Service segment.

By application, the precision farming segment held the largest market size in 2019.

Precision farming involves the use of innovative artificial intelligence (AI) technologies, such as machine learning, computer vision, and predictive analytics tools, to increase agricultural productivity. It comprises a technology-driven analysis of data acquired from the fields to increase crop productivity. Precision farming helps in managing variations in the field accurately, thus enabling the growth of more crops using fewer resources and at reduced production costs. Precision devices integrated with AI technologies help in collecting farm-related data, thereby helping farmers make better decisions and increase the productivity of their land.

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights
4.1 Attractive Opportunities for the AI in Agriculture Market
4.2 AI in Agriculture Market, by Offering
4.3 AI in Agriculture Market, by Technology
4.4 AI in Agriculture Market for APAC, by Application & Country
4.5 AI in Agriculture Market, by Geography

5 Market Overview
5.1 Introduction
5.2 Market Dynamics
5.2.1 Drivers
5.2.1.1 Increasing Strain on Global Food Supply Owing to Rising Population
5.2.1.2 Increasing Implementation of Data Generation Through Sensors and Aerial Images for Crops
5.2.1.3 Increasing Crop Productivity Through Deep Learning Technology
5.2.1.4 Government Support to Adopt Modern Agricultural Techniques
5.2.2 Restraints
5.2.2.1 High Cost of Gathering Precise Field Data
5.2.3 Opportunities
5.2.3.1 Developing Countries to Offer Significant Growth Opportunities
5.2.3.2 Use of AI Solutions to Manage Small Farms (Less than 5 Hectares)
5.2.4 Challenges
5.2.4.1 Lack of Standardization
5.2.4.2 Lack of Awareness About AI Among Farmers
5.2.4.3 Limited Availability of Historical Data
5.3 Value Chain Analysis
5.4 Impact of COVID-19 on AI in Agriculture Market

6 Artificial Intelligence in Agriculture Market, by Technology
6.1 Introduction
6.2 Machine Learning
6.2.1 Machine Learning Technology to Hold the Largest Share of AI in Agriculture Market
6.3 Computer Vision
6.3.1 Computer Vision Technology is Expected to Grow at the Highest CAGR during the Forecast Period
6.4 Predictive Analytics
6.4.1 Increasing Predictive Analytics Applications is Expected to Drive the Growth of AI in Agriculture Market

7 Artificial Intelligence in Agriculture Market, by Offering
7.1 Introduction
7.2 Hardware
7.2.1 Technological Advancements in the Hardware Segment is Leading to the Widespread Adoption of AI in Agriculture
7.2.2 Processor
7.2.3 Storage Device
7.2.4 Network
7.3 Software
7.3.1 AI in Agriculture Market for Software Segment is Projected to Hold the Largest Market Share during the Forecast Period
7.3.2 AI Platform
7.3.3 AI Solution
7.4 AI-as-a-Service
7.4.1 AI-as-a-Service Segment is Expected to Grow at the Highest CAGR during the Forecast Period
7.5 Services
7.5.1 Increasing Requirement of Online and Offline Support Services is Leading to the Growth of This Segment
7.5.2 Deployment & Integration
7.5.3 Support & Maintenance

8 Artificial Intelligence in Agriculture Market, by Application
8.1 Introduction
8.2 Precision Farming
8.2.1 Precision Farming is Expected to Hold the Largest Market Share during the Forecast Period
8.2.2 Yield Monitoring
8.2.3 Field Mapping
8.2.4 Crop Scouting
8.2.5 Weather Tracking & Forecasting
8.2.6 Irrigation Management
8.3 Livestock Monitoring
8.3.1 Increasing Livestock Monitoring Applications is Driving the Growth of This Segment
8.4 Drone Analytics
8.4.1 Drone Analytics Application Expected to Grow at the Highest CAGR during the Forecast Period
8.5 Agriculture Robots
8.5.1 Increased Deep Learning Capabilities of Agriculture Robots is Driving the Growth of This Segment
8.6 Labor Management
8.6.1 Major Benefits Such As Reduced Production Costs Due to Labor Management Application is Leading to the Growth of This Segment
8.7 Others
8.7.1 Smart Greenhouse Management
8.7.2 Soil Management
8.7.2.1 Moisture Monitoring
8.7.2.2 Nutrient Monitoring
8.7.3 Fish Farming Management

9 Geographic Analysis
9.1 Introduction
9.2 Americas
9.2.1 North America
9.2.1.1 US
9.2.1.1.1 US Projected to Account for the Largest Size of the AI in Agriculture Market in North America
9.2.1.2 Canada
9.2.1.2.1 Increasing AI Technology Adoption is Leading to the Growth of Canadian AI in Agriculture Market
9.2.1.3 Mexico
9.2.1.3.1 AI in Agriculture Market in Mexico is Projected to Grow at the Highest CAGR during the Forecast Period
9.2.2 South America
9.2.2.1 Brazil
9.2.2.1.1 Brazil Expected to Hold the Largest Share in the South American AI in Agriculture Market
9.2.2.2 Argentina
9.2.2.2.1 Expanding Industrial Production in Argentina is Driving the Market
9.2.2.3 Rest of South America
9.3 Europe
9.3.1 UK
9.3.1.1 Increasing Adoption of AI-Based Solutions for Agriculture is Driving the UK Market
9.3.2 Germany
9.3.2.1 Germany Held the Largest Share of European AI in Agriculture Market in 2019
9.3.3 France
9.3.3.1 Increasing Number of Start-Ups Developing AI Solutions for Agriculture is Driving the Market in Europe
9.3.4 Italy
9.3.4.1 AI in Agriculture Market in Italy is Growing Steadily to Overcome Drastic Climate Conditions
9.3.5 Spain
9.3.5.1 Favorable Government Policies are Driving the AI in Agriculture Market in Spain
9.3.6 Rest of Europe
9.4 Asia Pacific
9.4.1 Australia
9.4.1.1 Australia Expected to Hold the Largest Share of the AI in Agriculture Market in APAC
9.4.2 China
9.4.2.1 Increasing Precision Farming Applications in China is Expected to Drive the AI in Agriculture Market for APAC
9.4.3 Japan
9.4.3.1 In 2019, Japan Held the Second-Largest Share of AI in Agriculture Market in APAC
9.4.4 South Korea
9.4.4.1 Government Funding and Initiatives are Driving the Growth of AI in Agriculture Market in South Korea
9.4.5 India
9.4.5.1 India is Expected to be the Fastest-Growing AI in Agriculture Market in APAC
9.4.6 Rest of APAC
9.5 Rest of the World
9.5.1 Increasing Awareness Among Farmers Regarding the Benefits of AI-Assisted Agricultural Operations is Driving the Market in RoW

10 Competitive Landscape
10.1 Overview
10.2 Ranking Analysis
10.3 Competitive Scenario
10.3.1 Product Launches and Developments
10.3.2 Partnerships, Agreements, and Collaborations
10.3.3 Mergers and Acquisitions
10.4 Competitive Leadership Mapping
10.4.1 Visionary Leaders
10.4.2 Dynamic Differentiators
10.4.3 Innovators
10.4.4 Emerging Companies

11 Company Profiles
11.1 Key Players
11.1.1 IBM
11.1.2 Deere & Company
11.1.3 Microsoft
11.1.4 The Climate Corporation
11.1.5 Farmers Edge
11.1.6 Granular
11.1.7 AgEagle
11.1.8 Descartes Labs
11.1.9 Prospera
11.1.10 Taranis
11.1.11 aWhere
11.2 Right-to-Win
11.3 Other Key Companies
11.3.1 Gamaya
11.3.2 ec2ce
11.3.3 PrecisionHawk
11.3.4 VineView
11.3.5 Cainthus
11.3.6 Tule Technologies
11.3.7 Resson
11.3.8 Connecterra
11.3.9 Vision Robotics
11.3.10 Farmbot
11.3.11 Harvest CROO
11.3.12 PEAT
11.3.13 Autonomous Tractor Corporation
11.3.14 Trace Genomics
11.3.15 CropX Technologies

For more information about this report visit https://www.researchandmarkets.com/r/sqhb4s

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Continue reading here:
Global Artificial Intelligence in Agriculture Industry (2020 to 2026) - Developing Countries to Offer Significant Growth Opportunities - GlobeNewswire

There's Nothing Fake About These 10 Artificial Intelligence Stocks to Buy – InvestorPlace

Artificial intelligence is one of those catchy phrases that continues to grab investors' attention. Like 5G, it tugs on the sleeves of those looking to get in on cutting-edge technology. While it is a very important sector of technology, investors need to be wary of hype and focus on reality before buying AI stocks.

Take, for example, International Business Machines (NYSE:IBM). IBM has been on the front line of AI with its Watson-branded products and services. Sure, Watson did a bang-up job on Jeopardy!, and IBM partners with dozens of companies. But for IBM shareholders, Watson is not a portfolio favorite.

Over the past five years, IBM has lost 28.7% in price, compared to the S&P 500's gain of 37.5% and the S&P Information Technology Index's gain of 130%. And over the past 10 years, IBM's AI leadership has generated a shareholder loss of 3.4%.

Source: Chart by Bloomberg

IBM (White), S&P 500 (Red) & S&P 500 Information Technology (Gold) Indexes Total Return

But AI is more than just a party trick like Watson. AI brings algorithms into computers. These algorithms take internal and external data and, in turn, drive decisions behind all sorts of products and services. Think, for example, of something as simple as targeted ads: data is gathered and processed while you simply shop online.

But AI can go much further. Think, of course, of autonomous vehicles. AI takes all sorts of input data, and the central processor makes calls about how the vehicle moves, at what speed and in which direction.

Or in medicine, AI brings quicker analysis of symptoms, diagnostic data and tests.

And the list goes on.

So then what do I bring to the table as a human? I have found ten AI stocks that aren't just companies using AI. These are companies to own and follow for years, complete with dividends along the way.

Let's start with the best technology companies found inside that S&P Information Technology Index cited earlier. The Vanguard Information Technology ETF (NYSEARCA:VGT) synthetically invests in the leaders of that index. It should be the starting point for all technology investing, as it offers a solid foundation for portfolios.

Source: Chart by Bloomberg

Vanguard Information Technology ETF (VGT) Total Return

The exchange-traded fund continues to perform well. Its return for just the past five years runs at 141.1%, for an average annual equivalent return of 19.2%. This includes the major fall in March 2020.
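
That annualized figure checks out (our arithmetic, not the author's): converting a cumulative five-year return to an average annual equivalent uses the compound-growth relationship

(1 + 1.411)^(1/5) - 1 = 2.411^0.2 - 1 ≈ 0.192, or 19.2% per year.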

Before I move to the next of my AI stocks, it's important to note that data doesn't just get collected. It also has to be communicated quickly and efficiently to make processes work.

Take the AI example mentioned earlier for autonomous vehicles. AI driving needs to know not just what is in front of the vehicle, but what is coming around the next corner. This means having dependable data transmission. And the two leaders that make this happen now, and will continue to do so with 5G, are AT&T (NYSE:T) and Verizon (NYSE:VZ).

Source: Chart by Bloomberg

AT&T (White) & Verizon (VZ) Total Return

Much like other successful AI stocks, AT&T and Verizon have lots of communications services and content. This provides some additional opportunities and diversification but can limit investor interest in the near term. This is the case with AT&T and its Time Warner content businesses. But this also means that right now, both of these stocks are good bargains.

And they have a history of delivering to shareholders. AT&T has returned 100% over the past 10 years, while Verizon has returned 242%.

AI takes lots of equipment. Chips, processors and communications gear all go into making AI computers and devices. And you should buy these two companies for their role in equipment: Samsung Electronics (OTCMKTS:SSNLF) and Ericsson (NASDAQ:ERIC).

Samsung is one of the global companies that is essential for nearly anything that involves technology and hardware. Hardly any device out there is not either a Samsung product or one with components designed and produced by Samsung.

And Ericsson is one of the leaders in communications gear and systems. Its products make AI communications and data transmission work, including on current 4G and 5G networks.

Source: Chart by Bloomberg

Samsung Electronics (White) & Ericsson (Red) Total Return

Over the past 10 years, Samsung has delivered a return of 235.4% in U.S. dollars, while Ericsson has lagged, returning a less-than-stellar 6.5%.

Both have some challenges in their stock prices. Samsung's shares are more challenging to buy in the U.S. And Ericsson faces economic challenges, as it's deep in the European market. But in both cases, you get great products from companies that are still value buys.

Samsung is valued at a mere 1.2 times book and 1.3 times trailing sales, which is significantly cheaper than its global peers. And Ericsson is also a bargain, trading at a mere 1.3 times trailing sales.

To make AI work, you need lots of software. This brings in Microsoft (NASDAQ:MSFT). The company is one of the cornerstones of software; its products have all sorts of tech uses.

And AI, especially on the move, needs quick access to huge amounts of data in the cloud. Microsoft and its Azure-branded cloud fit the bill.

Source: Chart by Bloomberg

Microsoft (MSFT) Total Return

Microsoft, to me, is the poster child of successful technology companies. It went from one-off unit sales of packaged products to recurring income streams from software subscriptions. Now it's pivoting to cloud services. And shareholders continue to see rewards. The company's stock has returned 702.7% over the past 10 years alone.

AI and the cloud are integral in their processing and storage of data. But beyond software and hardware, you need somewhere to rack all of that gear, complete with power and climate controls, transmission lines and wireless capabilities.

This means data centers. And there are two companies set up as real estate investment trusts (REITs) that lead the way with their real estate and data centers. These are Digital Realty Trust (NYSE:DLR) and Corporate Office Properties (NYSE:OFC).

Digital Realty has the right name, as Corporate Office Properties doesn't tell the full story. The latter company has Amazon (NASDAQ:AMZN) and its Amazon Web Services (AWS) as exclusive clients in core centers, including the vital hub in Northern Virginia.

And the stock-price returns show the power of the name. Digital Realty has returned 310.9% against a loss of 0.9% for Corporate Office Properties.

Source: Chart by Bloomberg

Corporate Office Properties (White) & Digital Realty (Red) Total Return

But this means that while both are good buys right now, Corporate Office Properties is a particular bargain. The stock price is at a mere 1.7 times the company's book value.

Now I'll get to the newer companies in the AI space. These are the companies that are in various stages of development. Some are private now or are pending public listings. Others are waiting for larger companies to snap them up.

Most individual investors, unless they have a net worth nearing $1 billion, don't get access. But I have a company that brings this access, and it's my stock for the InvestorPlace Best Stocks for 2020 contest.

Hercules Capital (NYSE:HTGC) is set up as a business development company (BDC) that provides financing to all levels of technology companies. Along the way, it takes equity participation in these companies.

It supports hundreds of current technology companies using or developing AI for products and services, along with a grand list of past accomplishments. The current portfolio can be found here.

I have followed this company since its early days. I like that it is very investor-focused, complete with big dividend payments over many years. And it has returned 184.3% over just the past 10 years alone.

Source: Chart by Bloomberg

Hercules Capital (HTGC) Total Return

Who doesn't buy goods and services from Amazon? I am a Prime member with video, audio and book services. And I also have many Alexa devices that I use throughout the day. While I don't contract directly with its AWS unit, I use its cloud storage as part of other services. Few major companies that are part of daily life make use of AI more than Amazon.

The current lockdown mess has made Amazon a further necessity. Toilet paper, paper towels, cleaning supplies, toothpaste, soap and so many other items are sold and delivered by Amazon.

And I also use the platform for additional digital information from the Washington Post. Plus, I get food and other household goods from Whole Foods, and products for my miniature dachshund, Blue, come from Amazon.

This is a company that I have always liked as a consumer, but didn't completely get as an investor. Growth for growth's sake was what it appeared to be from my perspective. But I have been coming to a different understanding of what Amazon means as an investment.

It really is more of an index of what has been working in the U.S. for cloud computing and goods and services. And the current mess makes it not just more relevant but a necessity. The proof comes from the sales that keep rolling up for the company in real GAAP terms.

Source: Chart by Bloomberg

Amazon Sales Revenue (GAAP)

I know that subscribers to my Profitable Investing don't pay to have me tell them about Amazon. But I am recommending buying shares, as the company is really a leading index of the evolving U.S. It is fully engaged in benefitting from AI, like my other AI stocks.

Neil George was once an all-star bond trader, but now he works morning and night to steer readers away from traps and into safe, top-performing income investments. Neil's new income program is a cash-generating machine, one that can help you collect $208 every day the markets open. Neil does not have any holdings in the securities mentioned above.

Read the original post:
There's Nothing Fake About These 10 Artificial Intelligence Stocks to Buy - InvestorPlace

The Impending Artificial Intelligence Revolution in Healthcare – Op-Ed – HIT Consultant

Harjinder Sandhu, CEO of Saykara

For at least a decade, healthcare luminaries have been predicting the coming AI revolution. In other fields, AI has evolved beyond the hype and has begun to showcase real and transformative applications: autonomous vehicles, fraud detection, personalized shopping, virtual assistants, and so on. The list is long and impressive. But in healthcare, despite the expectations and the tremendous potential in improving the delivery of care, the AI revolution is just getting started. There have been definite advancements in areas such as diagnostic imaging, logistics within healthcare, and speech recognition for documentation. Still, the realm of AI technologies that impact the cost and quality of patient care continues to be rather narrow today.

Why has AI been slow in delivering change to care processes in healthcare? With a wealth of new AI algorithms and computing power ready to take on new challenges, the limiting factor in AI's successful application has been the availability of meaningful data sets to train on. This is surprising to many, given that EHRs were supposed to have solved the data barrier.

The promise of EHRs was that they would create a wealth of actionable data that could be leveraged for better patient care. Unfortunately, this promise never fully materialized. Most of the interesting information that can be captured in the course of patient care either is not or is captured minimally or inconsistently. Often, just enough information is recorded in the EHR to support billing and is in plain text (not actionable) form. Worse, documentation requirements have had a serious impact on physicians, to whom it ultimately fell to input much of that data. Burnout and job dissatisfaction among physicians have become endemic.

EHRs didn't create the documentation challenge. But using an EHR in the exam room can significantly detract from patient care. Speech recognition has come a long way, although it hasn't changed that fundamental dynamic of the screen interaction that takes away from the patient. Indeed, using speech recognition, physicians stare at the screen even more intently, as they must be mindful of mistakes that the speech recognition system may generate.

Having been involved in the advancement of speech recognition in the healthcare domain and witnessed its successes and failures, I continue to believe that the next stage in the evolution of this technology is to free physicians from the tyranny of the screen: to evolve from speech recognition systems to AI-based virtual scribes that listen to doctor-patient conversations, create notes, and enter orders.

Using a human scribe solves a significant part of the problem for physicians: scribes relieve the physician of having to enter data manually. For many physicians, a scribe has allowed them to reclaim their work lives (they can focus on patients rather than computers) as well as their personal lives (fewer evening hours completing patient notes). However, the inherent cost of both training and then employing a scribe has led to many efforts to build digital counterparts: AI-based scribes that can replicate the work of a human scribe.

Building an AI scribe is hard. It requires a substantially more sophisticated system than the current generation of speech recognition systems. Interpreting natural language conversation is one of the next major frontiers for AI in any domain. The current generation of virtual assistants, like Alexa and Siri, simplify the challenge by putting boundaries on speech, forcing a user, for example, to express a single idea at a time, within a few seconds, and within the boundaries of a list of skills that these systems know how to interpret.

In contrast, an AI system that is listening to doctor-patient conversations must deal with the full complexity of human speech and narrative. A patient visit could last five minutes or an hour, the speech involves at least two parties (the doctor and the patient), and a patient's visit can meander into irrelevant details and branches that don't necessarily contribute to a physician making their diagnosis.

As a result of the complexity of conversational speech, it is still quite early for fully autonomous AI scribes. In the meantime, augmented AI scribes, AI systems augmented by human power, are filling in the gaps of AI competency and allowing these systems to succeed while incrementally chipping away at the goal of making them fully autonomous. These systems are beginning to do more than simply relieve doctors of the burden of documentation, though that is obviously important. The real transformative impact will come from capturing a comprehensive set of data about a patient journey in a structured and consistent fashion and putting that into the medical record, thereby building a base for all other AI applications to come.

About Harjinder Sandhu

Harjinder Sandhu is CEO of Saykara, a company leveraging the power and simplicity of the human voice to make delivering great care easier while streamlining physician workflow.

More:
The Impending Artificial Intelligence Revolution in Healthcare - Op-Ed - HIT Consultant

The impact of artificial intelligence on intelligence analysis – Reuters

In the last decade, artificial intelligence (AI) has progressed from near-science fiction to common reality across a range of business applications. In intelligence analysis, AI is already being deployed to label imagery and sort through vast troves of data, helping humans see the signal in the noise. But what the intelligence community is now doing with AI is only a glimpse of what is to come. The future will see smartly deployed AI supercharging analysts' ability to extract value from information.

Exploring new possibilities

We expect several new tasks for AI, which will likely fall into one of these three categories:

Delivering new models. The rapid pace of modern decision-making is among the biggest challenges leaders face. AI can add value by helping provide new ways to more quickly and effectively deliver information to decision-makers. Our model suggests that by adopting AI at scale, analysts can spend up to 39 percent more time advising decision-makers.

Developing people. Analysts need to keep abreast of new technologies, new services, and new happenings across the globe, not just in annual trainings but continuously. AI could help bring continuous learning to the widest scale possible by recommending courseware based on analysts' work.

Maintaining the tech itself. Beyond just following up on AI-generated leads, organizations will likely also need to maintain AI tools and to validate their outputs so that analysts can have confidence when using them. Much of this validation can be performed as AI tools are designed or training data is selected.

Avoiding pitfalls

Intelligence organizations must be clear about their priorities and how AI fits within their overall strategy. Having clarity about the goals of an AI tool can also help leaders communicate their vision for AI to the workforce and alleviate feelings of mistrust or uncertainty about how the tools will be used.

Intelligence organizations should also avoid investing in empty technology: using AI without having access to the data it needs to be successful.

Survey results suggest that analysts are the most skeptical of AI, compared to technical staff, management, or executives. To overcome this skepticism, management will need to focus on educating the workforce and reconfiguring business processes to seamlessly integrate the tools into workflows. Also, an interface that allows the analyst to easily scan the data underpinning a simulated outcome, or view a representation of how the model came to its conclusion, would go a long way toward that analyst incorporating the technology as part and parcel of his or her workflow.

While a workforce that lacks confidence in AI's outputs can be a problem, the opposite may also turn out to be a critical challenge. With so much data at their disposal, analysts could start implicitly trusting AI, which can be quite dangerous.

But there are promising ways in which AI could help analysts combat human cognitive limitations. AI tools would be very good at continuously conducting key assumptions checks, analyses of competing hypotheses, and quality-of-information checks.

How to get started today

Across a government agency or organization, successful adoption at scale would require leaders to harmonize strategy, organizational culture, and business processes. If any of those efforts are misaligned, AI tools could be rejected or could fail to create the desired value. Leaders need to be upfront about their goals for AI projects, ensure those goals support overall strategy, and pass that guidance on to technology designers and managers to ensure it is worked into the tools and business processes. Establishing a clear AI strategy can also help organizations frame decisions about what infrastructure and partners are necessary to access the right AI tools for an organization.

Tackling some of the significant nonanalytical challenges analyst teams face could be a palatable way to introduce AI to analysts and build their confidence in it. Today, analysts are inundated with a variety of tasks, each of which demands different skills, background knowledge, and the ability to communicate with decision-makers. For any manager, assigning these tasks across a team of analysts without overloading any one individual or delaying key products can be daunting. AI could help pair the right analyst to the right task so that analysts can work to their strengths more often, allowing work to get done better and more quickly than before.

AI is not coming to intelligence work; it is already there. But the long-term success of AI in the intelligence community depends as much on how the workforce is prepared to receive and use it as any of the 1s and 0s that make it work.


Excerpt from:
The impact of artificial intelligence on intelligence analysis - Reuters

Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements – Unite.AI

Julien Rebetez is the Lead Software & Machine Learning Engineer at Picterra. Picterra provides a geospatial cloud-based platform specially designed for training deep learning-based detectors quickly and securely.

Without a single line of code and with only a few human-made annotations, Picterra's users build and deploy unique, actionable, and ready-to-use deep learning models.

It automates the analysis of satellite and aerial imagery, enabling users to identify objects and patterns.

What is it that attracted you to machine learning and AI?

I started programming because I wanted to make video games, and I got interested in computer graphics at first. This led me to computer vision, which is kind of the reverse process: instead of having the computer create a fake environment, you have it perceive the real environment. During my studies, I took some Machine Learning courses and got interested in the computer vision angle of it. I think what's interesting about ML is that it's at the intersection of software engineering, algorithms and math, and it still feels kind of magical when it works.

You've been working on using machine learning to analyze satellite imagery for many years now. What was your first project?

My first exposure to satellite imagery was the Terra-i project (to detect deforestation), and I worked on it during my studies. I was amazed at the amount of freely available satellite data that is produced by the various space agencies (NASA, ESA, etc.). You can get regular images of the planet for free every day or so, and this is a great resource for many scientific applications.

Could you share more details regarding the Terra-i project?

The Terra-i project (http://terra-i.org/terra-i.html) was started by Professor Andres Perez-Uribe of HEIG-VD (Switzerland) and is now led by Louis Reymondin of CIAT (Colombia). The idea of the project is to detect deforestation using freely available satellite images. At the time, we worked with MODIS imagery (250m pixel resolution) because it provided uniform and predictable coverage (both spatially and temporally). We would get a measurement for each pixel every few days, and from this time series of measurements, you can try to detect anomalies, or novelties as we sometimes call them in ML.

This project was very interesting because the amount of data was a challenge at the time, and there was also some software engineering involved to make it work on multiple computers and so on. On the ML side, it used a Bayesian Neural Network (not very deep at the time) to predict what the time series of a pixel should look like. If the measurement didn't match the prediction, then we would have an anomaly.

As part of this project, I also worked on cloud removal. We took a traditional signal processing approach there: you have a time series of measurements, and some of them will be completely off because of a cloud. We used a Fourier-based approach (HANTS) to clean the time series before detecting novelties in it. One of the difficulties is that if we cleaned it too strongly, we'd also remove novelties, so it took quite a few experiments to find the right parameters.
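
To make the idea concrete, here is a minimal HANTS-style sketch (our illustration with invented parameters, not the Terra-i code): fit a low-order harmonic model to a pixel's time series by least squares, then flag measurements that deviate strongly from the reconstruction.

import numpy as np

def harmonic_fit(t, y, n_harmonics=2, period=365.0):
    """Least-squares fit of a mean plus sine/cosine pairs."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coeffs

t = np.arange(0, 365, 16, dtype=float)       # one sample every ~16 days
y = 0.6 + 0.2 * np.sin(2 * np.pi * t / 365)  # synthetic vegetation index
y[10] = 0.05                                 # a cloud-like dropout

smooth = harmonic_fit(t, y)
residuals = np.abs(y - smooth)
print(t[residuals > 3 * np.median(residuals)])  # the dropout gets flagged

Cleaning too aggressively (more harmonics, tighter thresholds) would smooth away real land-cover change as well, which is exactly the parameter trade-off described above.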

You also designed and implemented a deep learning system for automatic crop-type classification from aerial (drone) imagery of farm fields. What were the main challenges at the time?

This was my first real exposure to Deep Learning. At the time, I think the main challenges were more about getting the framework to run and properly use a GPU than about the ML itself. We used Theano, which was one of the ancestors of Tensorflow.

The goal of the project was to classify the type of crop in a field from drone imagery. We tried an approach where the Deep Learning model used color histograms as inputs, as opposed to just the raw image. To make this work reasonably quickly, I remember having to implement a custom Theano layer, all the way down to some CUDA code. That was a great learning experience at the time and a good way to dig into the technical details of Deep Learning.
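
The histogram-as-input idea is simple to sketch (a hypothetical illustration; the bin count and normalization are our assumptions, not the project's actual settings): each image patch is reduced to per-channel color histograms that the classifier consumes instead of raw pixels.

import numpy as np

def color_histogram_features(patch, bins=32):
    """patch: H x W x 3 uint8 array -> concatenated normalized histograms."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(patch[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())  # normalize out the patch size
    return np.concatenate(feats)         # shape: (3 * bins,)

patch = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(color_histogram_features(patch).shape)  # (96,)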

You're officially the Lead Software and Machine Learning Engineer at Picterra. How would you best describe your day-to-day activities?

It really varies, but a lot of it is about keeping an eye on the overall architecture of the system and the product in general, and communicating with the various stakeholders. Although ML is at the core of our business, you quickly realize that most of the time is not spent on ML itself, but on all the things around it: data management, infrastructure, UI/UX, prototyping, understanding users, etc. This is quite a change from academia or previous experience in bigger companies, where you are much more focused on a specific problem.

What's interesting about Picterra is that we not only run Deep Learning models for users, but we actually allow them to train their own. That is different from a lot of typical ML workflows, where the ML team trains a model and then publishes it to production. What this means is that we cannot manually play with the training parameters as you often do. We have to find a training method that will work for all of our users. This led us to create what we call our experiment framework, which is a big repository of datasets that simulates the training data our users would build on the platform. We can then easily test changes to our training methodology against these datasets and evaluate whether they help or not. So instead of evaluating a single model, we are really evaluating an architecture plus a training methodology.
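
In outline, such an experiment framework can be as simple as the following sketch (names invented; Picterra's actual framework is not public): score one candidate training recipe against every dataset in the repository, so a change is judged in aggregate rather than on a single model.

from statistics import mean

def run_experiments(train_fn, datasets):
    """train_fn: callable(train_split) -> model exposing .score(test_split)."""
    scores = {}
    for name, (train_split, test_split) in datasets.items():
        model = train_fn(train_split)          # retrain from scratch per dataset
        scores[name] = model.score(test_split)
    return scores, mean(scores.values())       # per-dataset and aggregate view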

The other challenge is that our users are not ML practitioners, so they don't necessarily know what a training set is, what a label is, and so on. Building a UI that allows non-ML practitioners to build datasets and train ML models is a constant challenge, and there is a lot of back-and-forth between the UX and ML teams to make sure we guide users in the right direction.

Some of your responsibilities include prototyping new ideas and technologies. What are some of the more interesting projects that you have worked on?

I think the most interesting one at Picterra was the Custom Detector prototype. A year and a half ago, we had built-in detectors on the platform: those were detectors that we trained ourselves and made accessible to users. For example, we had a building detector, a car detector, etc.

This is actually the typical ML workflow: you have some ML engineer develop a model for a specific case and then you serve it to your clients.

But we wanted to do something differently and push the boundaries a bit. So we said: What if we allow users to train their own models directly on the platform ? There were a few challenges to make this work: first, we didnt want this to take multiple hours. If you want to keep this feeling of interactivity, training should take a few minutes at most. Second, we didnt want to require thousands of annotations, which is typically what you need for large Deep Learning models.

So we started with a super simple model, did a bunch of tests in Jupyter, and then tried to integrate it into our platform and test the whole workflow, with a basic UI and so on. At first, it wasn't working very well in most cases, but there were a few cases where it would work. This gave us hope, and we started iterating on the training methodology and the model. After some months, we were able to reach a point where it worked well, and we now have users using this all the time.

What was interesting about this is the double challenge of keeping the training fast (currently a few minutes), and therefore the model not too complex, while at the same time making it complex enough that it works and solves users' problems. On top of that, it works with few (<100) labels in a lot of cases.

We also applied many of Google's Rules of Machine Learning, in particular the ones about implementing the whole pipeline and metrics before starting to optimize the model. It puts you into system-thinking mode, where you figure out that not all of your problems should be handled by the core ML; some of them can be pushed to the UI, some of them pre- or post-processed, etc.

What are some of the machine learning technologies that are used at Picterra?

In production, we are currently using Pytorch to train and run our models. We are also using Tensorflow from time to time for some specific models developed for clients. Other than that, it's a pretty standard scientific Python stack (numpy, scipy) with some geospatial libraries (gdal) thrown in.

Can you discuss how Picterra works in the backend once someone uploads images and wishes to train the neural network to properly annotate objects?

Sure. First, when you upload an image, we process it and store it in Cloud-Optimized GeoTIFF (COG) format on our blobstore (Google Cloud Storage), which allows us to quickly access blocks of the image later on without having to download the whole image. This is a key point because geospatial imagery can be huge: we have users routinely working with 50,000 x 50,000 pixel images.
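
The practical benefit of COGs is easy to demonstrate with standard tooling (this is generic rasterio usage, not Picterra's code, and the URL is a placeholder): a windowed read fetches only the blocks it needs, even over HTTP.

import rasterio
from rasterio.windows import Window

url = "https://example.com/imagery/ortho_cog.tif"  # hypothetical COG
with rasterio.open(url) as src:
    # Read a 512 x 512 pixel block at column 2048, row 1024 -- only the
    # overlapping internal tiles are fetched, not the whole file.
    patch = src.read(window=Window(2048, 1024, 512, 512))
    print(patch.shape)  # (bands, 512, 512)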

Then, to train your model, you create your training dataset through our web UI. You do that by defining three types of areas:

Once you have created this dataset, you can simply click "Train" and we'll train a detector for you. What happens next is that we enqueue a training job and have one of our GPU workers pick it up (new GPU workers are started automatically if there are many concurrent jobs), train your model, save its weights to the blobstore, and finally predict in the testing area to display results in the UI. From there, you can iterate on your model. Typically, you'll spot some mistakes in the testing areas and add training areas to help the model improve.
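
A deliberately simplified sketch of that job lifecycle (plain-Python stand-ins for the queue, worker pool, and blobstore; not Picterra's actual stack):

import queue

jobs = queue.Queue()
blobstore = {}  # stands in for Google Cloud Storage

def train_stub(dataset):
    """Stand-in trainer: returns fake weights for illustration."""
    return {"weights": f"fitted-on-{len(dataset['training_areas'])}-areas"}

def submit_training(detector_id, dataset):
    jobs.put((detector_id, dataset))

def gpu_worker():
    while not jobs.empty():
        detector_id, dataset = jobs.get()
        model = train_stub(dataset)
        blobstore[f"{detector_id}/weights"] = model
        # ...then predict on dataset["testing_areas"] for display in the UI

submit_training("roof-detector", {"training_areas": ["a1", "a2"], "testing_areas": ["t1"]})
gpu_worker()
print(blobstore)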

Once you are happy with the score of your model, you can run it at scale. From the user's point of view, this is really simple: just click "Detect" next to the image you want to run it on. But it's a bit more involved under the hood if the image is large. To speed things up, handle failures, and avoid detections taking multiple hours, we break large detections down into grid cells and run an independent detection job for each cell. This allows us to run very large-scale detections. For example, we had a customer run detection over the whole country of Denmark on 25cm imagery, which is in the range of terabytes of data for a single project. We've covered a similar project in this Medium post.
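
The tiling step itself can be sketched in a few lines (sizes are illustrative; a real system also has to handle retries and merge detections across cell borders):

def grid_cells(width, height, cell=4096):
    """Yield (col_off, row_off, w, h) tiles covering a width x height raster."""
    for row in range(0, height, cell):
        for col in range(0, width, cell):
            yield (col, row, min(cell, width - col), min(cell, height - row))

tiles = list(grid_cells(50_000, 50_000))
print(len(tiles), "independent detection jobs")  # 13 x 13 = 169

Each tile then becomes one detection job, which is what lets a country-scale run be parallelized and restarted piecewise.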

Is there anything else that you would like to share about Picterra?

I think what's great about Picterra is that it is a unique product, at the intersection of ML and geospatial. What differentiates us from other companies that process geospatial data is that we equip our users with a self-serve platform. They can easily find locations, analyze patterns, and detect and count objects on Earth observation imagery. It would be impossible without machine learning, but our users don't even need basic coding skills; the platform does the work based on a few human-made annotations. For those who want to go deeper and learn the core concepts of machine learning in the geospatial domain, we have launched a comprehensive online course.

What is also worth mentioning is that the possible applications of Picterra are endless: detectors built on the platform have been used in city management, precision agriculture, forestry management, humanitarian and disaster risk management, farming, etc., just to name the most common applications. We are basically surprised every day by what our users are trying to do with our platform. You can give it a try and let us know how it worked on social media.

Thank you for the great interview and for sharing with us how powerful Picterra is; readers who wish to learn more should visit the Picterra website.

Originally posted here:
Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements - Unite.AI

Wharton School Receives $5 Million to Launch Artificial Intelligence for Business, Extending Its Commitment to Analytics, Learning, and Engagement -…

Led by AI expert Kartik Hosanagar, AI for Business will explore AI's impact on industries and society

PHILADELPHIA, May 7, 2020 /PRNewswire-PRWeb/ -- The Wharton School of the University of Pennsylvania announced today the establishment of Wharton AI for Business (Artificial Intelligence for Business), which will inspire cutting-edge teaching and research in artificial intelligence while joining with global business leaders to set a course for better understanding of this nascent discipline. The launch of AI for Business is made possible by a new $5 million gift from Tao Zhang, WG'02, and his wife Selina Chin, WG'02, which greatly expands Wharton's analytics capabilities, a major focus of Wharton's More Than Ever campaign.

"The advances made possible by artificial intelligence hold the potential to vastly improve lives and business processes," said Wharton Dean Geoff Garrett. "Our students, faculty, and industry partners are eager to join in our AI knowledge creation efforts to more deeply explore how machine learning will impact the future for everyone. We are deeply grateful to Tao and Selina for so generously enabling us to explore this opportunity and get AI for Business underway."

Operating within Analytics at Wharton and led by faculty member Kartik Hosanagar, John C. Hower Professor of Operations, Information and Decisions, AI for Business will explore AI's applications and impact across industries. Planned activities include:

Professor Hosanagar is renowned for his AI research and instruction. He is the author of the book A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, and he created the first Wharton online courses on AI: Artificial Intelligence for Business. Professor Hosanagar has also founded or advised numerous startups in online marketing and retail, including Yodle and Milo.

"Our students and professors are energized by the idea that AI is influencing nearly every aspect of humanity, and our efforts to understand it can make a difference for years to come," said Professor Hosanagar. "I'm very excited to help lead AI for Business, since the future of machine learning is happening now; there are unlimited entry points for experiential learning to explore the topic."

"Selina and I share experience and interest in management, tech, startups, and opportunities for leadership in global business, all of which come together in AI," said Zhang. "Wharton is the ideal setting for us to enable these experiences for such talented students and renowned faculty. We are proud to be engaged with the School and to be a part of jump-starting AI for Business."

Tao Zhang is a Wharton MBA alumnus from the class of 2002. He previously served as co-chairman and co-CEO of Meituan-Dianping, a leading internet company and platform in China. He was the founder and CEO of Dianping.com prior to its merger with Meituan and held positions in American Management Systems, an IT consulting firm. In addition to his generosity toward AI for Business, he has spoken at and supported Wharton Global Forums in Beijing and Shanghai.

Selina Chin is a Wharton MBA alumna from the class of 2002. She served as the China Chief Financial Officer and Vice President of Finance for Goodyear Tires & Rubber Co. She currently runs the Blue Hill Foundation based out of Singapore.

About the Wharton School

Founded in 1881 as the world's first collegiate business school, the Wharton School of the University of Pennsylvania is shaping the future of business by incubating ideas, driving insights, and creating leaders who change the world. With a faculty of more than 235 renowned professors, Wharton has 5,000 undergraduate, MBA, executive MBA, and doctoral students. Each year 18,000 professionals from around the world advance their careers through Wharton Executive Education's individual, company-customized, and online programs. More than 99,000 Wharton alumni form a powerful global network of leaders who transform business every day. For more information, visit http://www.wharton.upenn.edu.

###

SOURCE The Wharton School

Excerpt from:
Wharton School Receives $5 Million to Launch Artificial Intelligence for Business, Extending Its Commitment to Analytics, Learning, and Engagement -...

Intuality Inc.’s Artificial Intelligence Making Accurate Predictions of Coronavirus Cases and Deaths – PRNewswire

WINSTON SALEM, N.C., May 5, 2020 /PRNewswire/ -- Grant Renier, Chairman of Intuality Inc., and Dr. Howard Rankin have been presenting, during weekly YouTube podcasts, the system's predictions of cases and deaths for each of 120 days into the future, updated since March, for the USA, Canada, UK, and 5 major EU countries. IntualityAI is tracking and predicting in real time for 500+ countries and governmental districts worldwide, as a free public service during this worldwide crisis.

"The numbers have been pretty accurate so far," says Grant Renier. So, what does IntualityAI predict about the future?

"We see a slight flattening of the curve by early July, but a second spike appearing in August. The system predicts the cumulative number of deaths in the US up to 103,000 by August 24," Grant continued.

Similar patterns are charted for the UK and Canada. By August 24, the system predicts a cumulative total of 6,800 deaths in Canada, and slightly over 38,000 deaths in the UK.

IntualityAI, the behavioral economics-based technology, has had success in forecasting in money markets, elections, sports, health and technology applications. It is the product of more than 30 years of research and development.

Dr. Howard Rankin, an expert in cognitive bias and author of "I Think Therefore I Am Wrong: A Guide to Bias, Political Correctness, Fake News and the Future of Mankind," along with Mr. Renier, has been running IntualityAI podcasts related to COVID-19 at least once a week. Access them on YouTube under "IntualityAI" and on the company's website at http://www.intualityai.com.

Contact: Grant Renier
Phone: 207.370.1330
Email: [emailprotected]

Alt Contact: Dr. Howard Rankin
Phone: 843.247.2980
Email: [emailprotected]

Related Images

intualityai-covid-19-prediction.png: IntualityAI COVID-19 Prediction Accuracy. The AI prediction engine continues to predict daily COVID-19 cases and deaths within 2% of actual, since April 10, 2020.

Related Links

Company website

SOURCE Intuality Inc

IntualityAI

Read more here:
Intuality Inc.'s Artificial Intelligence Making Accurate Predictions of Coronavirus Cases and Deaths - PRNewswire

FTC Provides Guidance on Using Artificial Intelligence and Algorithms – JD Supra


As with many websites, JD Supra's website (located at http://www.jdsupra.com) (our "Website") and our services (such as our email article digests)(our "Services") use a standard technology called a "cookie" and other similar technologies (such as, pixels and web beacons), which are small data files that are transferred to your computer when you use our Website and Services. These technologies automatically identify your browser whenever you interact with our Website and Services.

We use cookies and other tracking technologies to:

There are different types of cookies and other technologies used our Website, notably:

JD Supra Cookies. We place our own cookies on your computer to track certain information about you while you are using our Website and Services. For example, we place a session cookie on your computer each time you visit our Website. We use these cookies to allow you to log-in to your subscriber account. In addition, through these cookies we are able to collect information about how you use the Website, including what browser you may be using, your IP address, and the URL address you came from upon visiting our Website and the URL you next visit (even if those URLs are not on our Website). We also utilize email web beacons to monitor whether our emails are being delivered and read. We also use these tools to help deliver reader analytics to our authors to give them insight into their readership and help them to improve their content, so that it is most useful for our users.

Analytics/Performance Cookies. JD Supra also uses the following analytic tools to help us analyze the performance of our Website and Services as well as how visitors use our Website and Services:

Facebook, Twitter and other Social Network Cookies. Our content pages allow you to share content appearing on our Website and Services to your social media accounts through the "Like," "Tweet," or similar buttons displayed on such pages. To accomplish this Service, we embed code that such third party social networks provide and that we do not control. These buttons know that you are logged in to your social network account and therefore such social networks could also know that you are viewing the JD Supra Website.

If you would like to change how a browser uses cookies, including blocking or deleting cookies from the JD Supra Website and Services you can do so by changing the settings in your web browser. To control cookies, most browsers allow you to either accept or reject all cookies, only accept certain types of cookies, or prompt you every time a site wishes to save a cookie. It's also easy to delete cookies that are already saved on your device by a browser.

The processes for controlling and deleting cookies vary depending on which browser you use. To find out how to do so with a particular browser, you can use your browser's "Help" function or alternatively, you can visit http://www.aboutcookies.org which explains, step-by-step, how to control and delete cookies in most browsers.

We may update this cookie policy and our Privacy Policy from time-to-time, particularly as technology changes. You can always check this page for the latest version. We may also notify you of changes to our privacy policy by email.

If you have any questions about how we use cookies and other tracking technologies, please contact us at: privacy@jdsupra.com.

Continued here:
FTC Provides Guidance on Using Artificial Intelligence and Algorithms - JD Supra

Is artificial intelligence the answer to the care sector amid COVID-19? – Descrier

It is clear that the health and social care sectors in the United Kingdom have long suffered from systematic neglect, which has predictably resulted in dramatic workforce shortages. These shortages have been exacerbated by the current coronavirus crisis and will be further compounded by the stricter immigration rules coming into force in January 2021. The Home Office is reportedly considering an unexpected solution: replacing staff with technology and artificial intelligence.

To paraphrase Aneurin Bevan, the mark of a civilised society is how it treats its sick and vulnerable. As a result, whenever technology is broached in healthcare, people are sceptical, particularly if it means removing that all-important human touch.

Such fears are certainly justified. AI has become fraught with issues: there is a wealth of evidence that algorithms can absorb the unconscious biases of their designers and their training data, particularly around gender and race. The Home Office itself has been found using a discriminatory algorithm to scan and evaluate visa applications, while a similar algorithm used in US hospitals was found to systematically discriminate against black people, as the software was more likely to refer white patients to care programmes.
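To make that mechanism concrete, here is a minimal sketch in Python (using entirely synthetic data and hypothetical group names, not figures from the studies above) of how a system trained on a biased proxy label, past healthcare spending rather than actual need, can under-refer a group whose historical access to care was restricted:

```python
# Illustrative sketch only: synthetic data, hypothetical groups "A" and "B".
# Loosely mirrors the US hospital case described above, where past spending
# was used as a proxy for health need, but one group had historically
# received less care and therefore spent less at the same level of need.
import random

random.seed(0)

def make_patient(group):
    need = random.gauss(50, 15)                    # true health need (never seen by the "model")
    access = 1.0 if group == "A" else 0.6          # group B historically received less care
    spending = need * access + random.gauss(0, 5)  # proxy label: past healthcare spending
    return group, need, spending

patients = [make_patient(g) for g in ("A", "B") for _ in range(5000)]

# Stand-in for a cost-trained model: refer anyone whose past spending
# exceeds a fixed threshold, since spending is all the model can see.
THRESHOLD = 55
referrals = {"A": 0, "B": 0}
high_need = {"A": 0, "B": 0}
for group, need, spending in patients:
    if need > THRESHOLD:               # patients who genuinely need the programme
        high_need[group] += 1
        if spending > THRESHOLD:       # but referral is decided on spending alone
            referrals[group] += 1

for g in ("A", "B"):
    rate = referrals[g] / high_need[g]
    print(f"group {g}: {rate:.0%} of high-need patients referred")
```

Because the system never observes need directly, the historical access gap is silently reproduced as a referral gap, with no explicit mention of group membership anywhere in the logic.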

Such prejudices cast serious doubt on AI's fitness for healthcare. Indeed, technology is by no means a quick fix for staff shortages and should never be used at the expense of human interaction, especially in areas as emotionally intensive as care.

However, this does not mean that introducing AI into the UK care sector is necessarily a slippery slope to a techno-dystopia. Robotics has already made vital changes in the healthcare sector: surgical robots, breast cancer scanners and algorithms that can detect even the early stages of Alzheimer's have proved revolutionary. The coronavirus crisis itself has reinforced just how much we rely on technology, enabling us to keep in touch with our loved ones and to work from home.

Yet in a more dramatic example of the potential help AI could deliver, robots have been used to disinfect the streets of China amid the coronavirus pandemic, and one hospital at the centre of the outbreak in Wuhan deployed robotic aides that outnumbered its doctors in order to slow the spread of infection.

Evidently, if used correctly, AI and automation could improve care and ease the burden on staff in the UK. The Institute for Public Policy Research has even calculated that 30% of the work done by adult social care staff could be automated, saving the sector £6 billion. It is important to stress, though, that this initiative cannot be used as a cost-cutting exercise: if money is saved by automation, it should be put back into the care sector to improve both the wellbeing of those receiving care and the working conditions of the carers themselves.

There is much that care robots cannot do, but they can provide some level of companionship and assist with medication preparation, while smart speakers can remind or alert patients. AI can realistically monitor vulnerable patients' safety around the clock while allowing them to maintain their privacy and sense of independence.
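As a hedged illustration of that reminder-and-alert pattern (the schedule, dose names and escalation window below are hypothetical examples, not the behaviour of any real device), a minimal sketch might look like this:

```python
# Hypothetical sketch of medication-reminder logic: check a dosing schedule
# against the clock, prompt the patient, and escalate to a carer if a dose
# is not confirmed within a set window. All values are illustrative.
from datetime import datetime, timedelta

SCHEDULE = {"08:00": "levothyroxine 50mcg", "20:00": "amlodipine 5mg"}  # example doses
CONFIRM_WINDOW = timedelta(minutes=30)  # how long to wait before escalating

def due_doses(now, confirmed):
    """Return (reminders, escalations) for the current time."""
    reminders, escalations = [], []
    for hhmm, dose in SCHEDULE.items():
        due = now.replace(hour=int(hhmm[:2]), minute=int(hhmm[3:]),
                          second=0, microsecond=0)
        if due <= now and dose not in confirmed:
            if now - due > CONFIRM_WINDOW:
                escalations.append(dose)   # past the window: alert a human carer
            else:
                reminders.append(dose)     # within the window: prompt the patient
    return reminders, escalations

reminders, escalations = due_doses(datetime(2020, 5, 7, 8, 20), confirmed=set())
print("remind:", reminders)      # e.g. prompt via a smart speaker
print("escalate:", escalations)  # e.g. notify a carer or family member
```

The point of the design is that the technology handles the routine checking while the escalation path keeps a human caregiver in the loop, which is exactly the complementary role the article argues for.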

Examples of technology being used in social care around the world demonstrate the positive effect it can have. Japan, in particular, has implemented Robear, a robot that helps carry patients from their beds to their wheelchairs; HAL, a bionic suit that assists with motor tasks; and Paro, a baby harp seal robot that serves as a therapeutic companion and has been shown to alleviate anxiety and depression in dementia sufferers. Another, a humanoid called Pepper, has been introduced as an entertainer, cleaner and corridor monitor to great success.

It is vital, though, that if automation and AI are to be introduced into the care sector on a wide scale, they must work in harmony with human caregivers. Used properly, they could transform the sector for the better; the current government, however, does not view it this way, and its focus on automation has been timed to coincide with the immigration rules that will bar migrant carers from entry. Rolling out care robots across the nation on such a huge scale in the next nine months is mere blue-sky thinking; replacing the flesh-and-blood graft of staff with robots is therefore far-fetched at best, and at worst disastrous for a sector suffering a shortage of 110,000 staff. Besides, robots still lack the empathy the job requires and simply cannot give the personal, compassionate touch that is so important; they can only ease the burden on carers, not step into their shoes.

While in the long term automation in the care sector could help ease the burden on staff and plug gaps as and when needed, the best course of action currently attainable for solving the care crisis is for the government to reconsider just who it classifies as "low-skilled" in relation to immigration, as some Conservative MPs have already suggested.

To remedy the failing care sector, the government should both invest in home-grown talent and relax restrictions on carers from overseas seeking to work in the country. What is so desperately needed is a renovation of the sector: higher wages, more reasonable hours, more secure contracts, and the introduction of a care worker visa. If this is implemented in conjunction with support from AI and automation, we could see the growing and vibrant care sector for which this country is crying out.

Continued here:
Is artificial intelligence the answer to the care sector amid COVID-19? - Descrier