3 ways Artificial Intelligence Will Help IT MSPs Do Better in 2021 – Channel Futures

Artificial intelligence and machine learning can help make ITSM processes more efficient.

CIOs are now using artificial intelligence (AI) and machine learning (ML) technologies to make IT service management processes more efficient.

A typical use case for artificial intelligence in ITSM involves natural language processing (NLP), which automates the intake of user requests for IT services. IT practitioners get a deeper understanding of their processes by applying machine learning (ML) to ITSM data. The NLP technology that powers virtual agents is very often integrated with channels that employees are already familiar with. Many organizations integrate virtual agents with chat services like Slack, where employees can communicate directly with the IT service desk.

ITSM systems generate large volumes of data, so applying machine learning to these systems makes sense. The data collected by these systems is large not only in volume but also in detail, and it helps organizations understand their existing IT assets and processes, along with who owns them.

These insights help IT understand the real priorities of ITSM issues, work proactively instead of reactively, accelerate time to resolution and enhance employee productivity. In the current age of remote work, enhancing employee experience to ensure business continuity is at the top of every CIO's mind, and artificial intelligence will prove to be just the right technology to face this new challenge.

Let's look at the three ways artificial intelligence will help IT MSPs do better in 2021.

Chatbots integrated with an ITSM environment can easily be used to categorize the problem in employee requests. For example, if an organization has integrated Freshservice's Virtual Agent with MS Teams, that creates a channel for employees to raise a service request or resolve their issues. The chat interface is a familiar UI for employees, and the chatbot will use machine learning to identify whether the employee is raising a service request or an incident.

Another important and time-consuming task, normally performed by an agent or by complex workflows, is routing a ticket to the correct support group. Chatbots can triage incoming requests or incidents to the right support group, making the process a lot more efficient.
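
As an illustration of how such triage can work, here is a minimal sketch that routes a new ticket to a support group with a simple text classifier. The tickets, group names, and the TF-IDF-plus-logistic-regression approach are assumptions made for the example, not a description of Freshservice's or any other vendor's actual implementation.

```python
# Minimal sketch of ML-based ticket triage; all data and model choices are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical tickets, each labeled with the group that resolved it
tickets = [
    "VPN keeps disconnecting when working from home",
    "Need a license for the design software",
    "Email not syncing on my phone",
    "Request access to the finance shared drive",
]
groups = ["Network", "Software Provisioning", "Messaging", "Access Management"]

# TF-IDF features plus logistic regression: a simple but serviceable triage model
triage_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
triage_model.fit(tickets, groups)

# A new incoming ticket is routed to the most likely support group
new_ticket = "Cannot connect to the corporate VPN from my laptop"
print(triage_model.predict([new_ticket])[0])  # likely "Network"
```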

The historical ticket data and an extensive ITSM knowledge base will help agents resolve various requests faster. However, this requires the admins/agents to create an extensive knowledge base covering a wide range of requests and incidents. The ability to directly convert a resolution email to a knowledge base article will help build a rich knowledge base repository. When a similar problem arises, AI and machine learning can be used to dig through this repository and present the closest match to resolve the issue faster.
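
The retrieval step described above can be sketched in a few lines: represent the knowledge base articles and the new incident as vectors and return the closest match. The articles below and the TF-IDF-with-cosine-similarity approach are assumptions for illustration only; real ITSM products may use very different techniques.

```python
# Illustrative sketch: surface the closest knowledge base article for a new incident.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb_articles = [
    "How to reset your Active Directory password",
    "Fixing Outlook profile corruption after an update",
    "Requesting additional mailbox storage",
]
incident = "User is locked out and needs a password reset"

# Vectorize the articles and the incident together so they share a vocabulary
matrix = TfidfVectorizer().fit_transform(kb_articles + [incident])

# Compare the incident (last row) against every article and pick the best match
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = scores.argmax()
print(f"Closest article: {kb_articles[best]} (similarity {scores[best]:.2f})")
```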

A well-managed repository will also help with incident resolution throughout the resolution process. AI can offer advice as simple as a related or similar incident along with its history, or a solution article whose wording matches the current incident or request, shortening the time needed to think through the issue from scratch.

Like employee onboarding, many requests to IT demand human staff hours to perform a series of complex tasks. Machine learning models observe how humans carry out these tasks and learn to automate them in the future. By recognizing patterns in request types and execution methods, machine learning models can make intelligent suggestions for even the most complex IT processes.

Hemalakshmi is a Product Expert with Freshworks. Her responsibilities include educating and helping industry peers and customers with best practices, tips and tricks, quick guides, and solutions around IT Service Management and its various use cases. In her 6+ years of experience with core SaaS business applications as a product expert, Hema has worked with multiple businesses to help them with their business needs and set up their service desk solution, Freshservice. Follow her on LinkedIn.

This guest blog is part of a Channel Futures sponsorship.

Procore Delivers Artificial Intelligence to Unlock Insights from Construction Data – PRNewswire

Procore CEO and founder, Tooey Courtemanche, said, "With our AI-powered insights and industry benchmark data, Procore users can make better, real-time decisions, improve their construction processes, and mitigate risk. The ways Procore is applying automation and machine learning is making Procore users fundamentally better at their jobs."

Key initiatives and product updates are below.

Acquisition of Avata Intelligence

Procore is committed to investing in next-generation analytics products and approaches that will create new ways of interacting with data. As part of this commitment, Procore acquired Avata Intelligence, an AI firm, and brought world-class AI talent into Procore earlier this year.

This acquisition has expanded Procore's AI functionality and improved processes throughout the construction lifecycle. For example, Procore is further investing in ways to automate repetitive laborious tasks, uncover hidden information, and provide actionable insights to drive better outcomes for its users.

Search Everything from One Place with Procore Search

The documents, photos, and communications that exist within a given project in Procore are a valuable bank of information for project stakeholders. With the new feature Procore Search, Procore has improved how this project information is structured and accessed.

Procore Search offers a single and consistent search experience that uses machine learning to quickly and accurately return search results across an entire project, regardless of what Procore tool is being used when a search is conducted. This search functionality is possible because the Procore platform is built on a singular, integrated system.

"This is a game changer in how we search for and find information throughout the different elements of our project management tools," said Matt Redman, Del Amo Construction. "It brings contextual search to a full project scale, and has allowed our people to find and connect information that would have been extremely difficult to correlate before."

Enhanced search is one more way Procore is redefining the expectations of a modern construction management solution. Procore Search is available to all Procore users.

A Smarter Way to Build Submittals

Procore Submittal Builder automatically generates a submittal register within minutes by scanning every page of a specification book. This form of automation saves Procore customers valuable time as they are no longer required to read and manually create submittals. Procore customers have reported saving 5-7 days of work per project with Submittal Builder, and there is room for automation to further improve the submittal process and save even more time.

Procore is using machine learning to improve submittal recognition on specifications to make the process of creating submittals easier and more efficient for customers. Procore is also applying machine learning models that improve the accuracy and speed at which submittals can be generated. This technology works behind the scenes so customers can spend more time focusing on other critical aspects of a job.

Customized Reporting Capabilities with Procore Analytics

Procore Analytics gives customers an easy way to analyze their data from Procore, as well as data from their integrated tech stack, in one central location. Procore Analytics was created alongside construction professionals, ensuring that its reporting meets the specific needs of the construction industry. Recent additions to Procore Analytics' reporting capabilities include:

Industry Benchmarking

"For years construction companies have lacked meaningful, reliable industry benchmarks against which they can measure their own performance," said Kris Lengieza, Senior Director, Business Development at Procore. "As the industry shifts towards being more insight driven, the importance of external benchmarks is increasing."

Procore is leveraging machine learning to identify objective industry and project-level benchmarks for its customers. The insights these benchmarks provide will help customers prioritize their daily activities while optimizing for project success. Procore is continuing to expand its benchmarking capabilities, and invites customers interested in seeing how their performance stacks up against the industry to request an invitation to join the benchmarking initiative.

Find out more about how Procore is advancing the use of AI in construction, while providing greater access to insights from construction data by attending Groundbreak on Oct 27-28, 2020.

About Procore

Procore is a leading provider of construction management software. Over 1 million projects and more than $1 trillion USD in construction volume have run on Procore's platform. Our platform connects every project stakeholder to solutions we've built specifically for the construction industry: the owner, the general contractor, and the specialty contractor. Procore's App Marketplace has a multitude of partner solutions that integrate seamlessly with our platform, giving construction professionals the freedom to connect with what works best for them. Headquartered in Carpinteria, California, Procore has offices around the globe. Learn more at Procore.com.

CONTACT: [emailprotected]

SOURCE Procore Technologies, Inc.

http://www.procore.com

Re-Humanizing Fundraising With Artificial Intelligence – Stanford Social Innovation Review

Conventional wisdom about nonprofit fundraising considers these two statements equally true: 1) Acquiring new donors loses money, and 2) Future gifts from new donors make up for the money lost on acquisition.

Alas, only one of them is true. Acquiring new donors does, indeed, lose money, often estimated at 50 percent of the initial gift. However, according to Blackbaud, fewer than a quarter of those initial donors will renew their gift. The math gets even worse in out-years, as 60 percent of donors lapse year after year.

The reality is that most organizations spend an enormous amount of time frantically trying to refill their leaky bucket of donors. The result is a transactional approach to fundraising that requires constantly asking for donations rather than spending time getting to know donors, particularly donors who aren't writing huge checks. Just because it's the norm, however, doesn't make it good or effective, particularly during a pandemic when everyone is distracted, scared, and stretched.

We recently released a report funded by the Bill and Melinda Gates Foundation on using artificial intelligence (AI) for fundraising and philanthropy. The report outlines ways that nonprofits are beginning to use AI to increase giving, and while the fact that the most powerful technology in history can help nonprofits raise more money didn't surprise us, we were surprised by how much opportunity nonprofits have to use AI to re-imagine and re-humanize fundraising.

AI automates tasks that previously only humans could do. The field isn't new; it's been around for decades. But it's recently become much less expensive, making it available for everyday use and by smaller organizations.

AI tools for increasing fundraising currently include:

One example of a nonprofit putting AI tools into action is the 24-hour fundraising marathon Extra Life, a fundraising effort of Children's Miracle Network Hospitals. Staff members were getting overwhelmed answering the same question from Canadian supporters, who wanted to know what currency Extra Life would use to process their donations. To address this, Extra Life added a chatbot to its donation page specifically to answer this question, and even used an algorithm to personalize the landing page so that the chatbot appeared only for Canadian donors.

The chatbot on the Extra Life website provides instant answers to common donor questions about things like conversion rates.

Another example is the Cure Alzheimer's Fund, which raised $1.2 million in donations using Gravyty's AI-powered fundraising software. Gravyty drafts emails to existing donors based on their preferences and previous actions, and highlights donors who are on the cusp of lapsing. Staff members review the emails and cultivation plan, then send them out the door. Gravyty isn't just automating renewal letters; by helping fundraisers continuously improve the specific content and timing of messages to individual donors, it's adding more intelligence to the fundraising system.

Similarly, Rainforest Action Network piloted software from Accessible Intelligence Limited in May 2020. This software recommends the right content to include in fundraising appeals (including writing style, specific ask, and even subject lines to test), as well as the right number and interval of communication touch points. As a result, open rates and signed online petitions increased significantly. More importantly, conversion of one-time donors to monthly donors increased 866 percent. (That is not a typo!)

Since the publication of our report, we've been thinking about the time development staff could save by using AI. What could change? What could staff do differently or better with this precious gift of time? We see a great opportunity for development teams to patch the holes in the leaky bucket of fundraising and enable their organizations to move from transactional to relational fundraising, starting with these three activities:

1. Add retention rates to dashboards and budgetary calculations. We have served on many boards and can't recall one discussion focused on donor retention. Organizations need to measure and monitor donor retention rates over time. They also need to calculate the net cost of fundraising, as well as the cost of money raised through acquisition and lost through lapsed donors over time.

2. Put time for conversations with donors, clients, and volunteers on the calendar. Activities that aren't on the calendar don't get done. Staff and leading volunteers (such as board members) need to spend time listening to donors and stop treating them like ATMs. Instead, they need to find out why the cause is important to each donor, not just major donors but donors at every level, and what makes them feel good or bad when they give.

3. Establish ethical-use guidelines around the use of AI. It's critically important that organizations use the incredible power of AI with great care. We recommend establishing an outside committee of advisors to discuss issues such as the use and storage of data, the need to inform people when they are talking to a robot and not a person, and careful monitoring of AI-powered efforts for racial and other biases.

One person we interviewed for our report said, "AI can't fix bad fundraising practices." Our greatest fear is that nonprofit leaders will use the incredible speed and power of AI to supersize existing transactional fundraising practices. We implore them to take the care and time needed to create a new chapter in fundraising, where every person can be heard and where most donors stay with causes for years, not months.

USPTO Releases Benchmark Study on the Artificial Intelligence Patent Landscape – IPWatchdog.com

The diffusion trend for artificial intelligence inventor-patentees started at 1% in 1976 and increased to 25% in 2018, which means that 25% of all unique inventor-patentees in 2018 used AI technologies in their granted patents.

On October 27, the United States Patent and Trademark Office (USPTO) released a report titled "Inventing AI: Tracing the diffusion of artificial intelligence with U.S. patents." The study showed that artificial intelligence (AI) patent applications increased by more than 100% between 2002 and 2018, from 30,000 to over 60,000, and the overall share of patent applications containing AI subject matter rose from 9% to nearly 16%.

According to the U.S. National Institute of Standards and Technology (NIST), AI technologies and systems comprise software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action. However, for purposes of patent applications and grants, the USPTO defines AI as including one or more of eight component technologies: vision, planning/control, knowledge processing, speech, AI hardware, evolutionary computation, natural language processing, and machine learning. Between 1990 and 2018, the largest AI technological areas were planning/control and knowledge processing, which include inventions directed to controlling systems, developing plans, and processing information. In addition, the study showed that patent applications in the areas of machine learning and computer vision have shown a pronounced increase since 2012.

The study explained that, since 1976, AI technologies have been diffusing across a large percentage of technology subclasses, spreading from 10% in 1976 to more than 42% of all patent technology subclasses in 2018. The study identified three distinct clusters with different diffusion rates, in order from fastest to slowest growing: 1) knowledge processing and planning/control; 2) vision, machine learning, and AI hardware; and 3) evolutionary computation, speech, and natural language processing. The study noted that the clusters suggest a form of technological interdependence among the AI component technologies, but also noted that additional research is required to understand the factors behind the patterns.

The study also identified the growth in the number of AI inventors as an indicator of diffusion. In particular, the diffusion trend for inventor-patentees started at 1% in 1976 and increased to 25% in 2018, which means that 25% of all unique inventor-patentees in 2018 used AI technologies in their granted patents.

Noting that AI requires specialized knowledge, the study pointed out that diffusion is generally slower and can be restricted to a narrow set of organizations in areas where skilled labor and technical information are harder to obtain, such as in AI. The study identified the top 30 U.S. companies that held 29% of all AI patents granted from 1976 to 2018. The leading company was IBM Corp. with 46,752 patents, followed by Microsoft Corp. with 22,067 patents and Google Inc. with 10,928 patents.

With respect to geographic diffusion of AI, the study indicated that, between 1976 and 2000, AI inventor-patentees tended to be concentrated in larger cities or established technology hubs, such as Silicon Valley, California, because those regions were home to companies with employees having the specialized knowledge required to understand AI technologies. Since 2001, AI inventor-patentees have diffused widely across the U.S. For example, Maine and South Carolina are active in digital data processing and data processing adapted for businesses, Oregon is active in fitness training and equipment, and Montana is active in inventions analyzing the chemical and physical properties of materials. The study also showed that the American Midwest is adopting AI technology, but at a slower rate. For example, Wisconsin leads in medical instruments and processes for diagnosis, surgery, and identification, while Iowa, Kansas, Missouri, Nebraska, and Ohio are contributing to AI technologies relating to telephonic communications. Further, inventor-patentees in North Dakota are actively contributing to AI technologies as applied to agriculture.

The USPTO noted that the study suggests AI has the potential to be as revolutionary as electricity or the semiconductor, and that realizing this potential depends, at least in part, on the ability of innovators and firms to successfully incorporate AI inventions into existing and new products, processes, and services.

The report results were obtained from a machine learning AI algorithm that determined the volume, nature, and evolution of AI and its component technologies as contained in U.S. patents from 1976 through 2018. This methodology improved the accuracy of identifying AI patents by better capturing the diffusion of AI across technology, companies, inventor-patentees, and geography.

Artificial Intelligence Is Used To Understand The Geospatial World To Improve Business And Governmental Performance – Forbes

Recently, there was brief news about Microsoft Flight Simulator and a tower more than 200 stories tall created by a typo. As funny as that was, it missed the larger picture. Google Earth started a trend that has continued, and the virtualization of the world has proceeded at a rapid pace. It is now to the point where real business benefit is being gained by such work, supporting the application of artificial intelligence (AI) to even more problems.

There have been smaller discussions about using virtualization and augmented reality to analyze and improve performance in stores and other smaller spaces. However, similar to how a drive for better gaming led to NVIDIA's GPUs, which helped advance AI, the business of capturing a global image base to improve gaming can now help AI lend its skills to new areas.

What's interesting is that the volume of geographic imaging is beginning to provide analytics to a wide range of businesses. Both businesses and governments are beginning to use the images to estimate forest conditions, crop yields, and other large-scale issues. In another interesting application, analysis of buildings and other large structures is beginning to yield ROI on inspections.

One example is inspection of a type of structure called a floating oil tank. It is, as the name implies, an oil storage tank. What's interesting is that the roof floats on top of the oil, rising and falling with the oil level. The Blackshark.ai system, which includes 200 GPUs, works with satellite imagery, the time stamp of the image, and the shadows cast at the facility. It is then simple trigonometry to provide an estimate of oil volume. Note, this is something that is good for government oil reserve estimates and for insurance, but companies would want more detailed information.
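
To make the trigonometry concrete, here is a toy version of the calculation. The geometry and every number in it are assumptions for demonstration, not Blackshark.ai's actual method: the idea is that the shadow the tank wall casts on the ground reveals the tank's height, the shadow the rim casts onto the floating roof reveals how far the roof has dropped, and the ratio of the two gives a fill estimate.

```python
# Toy shadow-based fill estimate for a floating roof tank; all values are illustrative.
import math

sun_elevation_deg = 35.0   # derived from the image's timestamp and location
outer_shadow_m = 22.0      # shadow the tank wall casts on the ground
inner_shadow_m = 9.0       # crescent shadow the rim casts onto the floating roof

tan_elev = math.tan(math.radians(sun_elevation_deg))

tank_height_m = outer_shadow_m * tan_elev  # wall height from the external shadow
roof_drop_m = inner_shadow_m * tan_elev    # how far the roof sits below the rim

fill_fraction = 1.0 - roof_drop_m / tank_height_m
print(f"Estimated fill level: {fill_fraction:.0%}")  # roughly 59% with these numbers
```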

In that example, the AI is in the computer vision component; it isn't required for creating the estimate. However, there are examples where AI can be used for additional analysis. Imagine a government trying to estimate energy usage or tax base depending on building type. A satellite image can be analyzed by an AI system that can identify building types by what is on the roof. The size of HVAC systems, for instance, can help to identify a building's size and use type.

"The image is the starting point for the analysis," said Michael Putz, CEO and co-founder of Blackshark.ai. "Semantic reconstruction is the process of adding semantic information needed for critical decisions by companies, governments, and individuals. Past computer vision systems have only enhanced images, leaving it to people to clarify items. Artificial intelligence can do the work of identifying objects, adding the semantics necessary to speed analysis and enhance the accuracy of decision making."

In the aftermath of events such as earthquakes, floods, and other natural disasters, comparisons to previous images can quickly prepare governments, NGOs, and insurance companies to take faster and more effective action.

Rendering 2D images into 3D simulations also provides other business benefits. Consider wireless signal propagation. 3G and 5G have different broadcast features. Simulating geospatial features can aid coverage-range and engineering cost analysis for optimal ROI on tower placement.

Notice that Michael Putz mentioned individuals. Think about semantic analysis and someone's back yard. As AI becomes able to identify objects and even render 2D satellite images into 3D representations, it enhances the ability of homeowners and small businesses to combine AI and VR to plan for changes. The example Mr. Putz provided was adding a pool to a yard. Being able to visualize that in 3D could help owners check line of sight and see if other work, such as higher fences for privacy, might be needed.

At this point, I see the technology being focused on higher-end solutions, such as those for large companies and government agencies. As with all new product arenas, advances will drive prices down, and the cloud model will mean consumer applications will become profitable, just not yet.

Geospatial image capture started off small, but has now grown to a massive scale. The addition of AI improves both computer vision and downstream analysis. This is another area where the world around us is being enhanced by artificial intelligence.

Neocova Launches Groundswell AI, an Artificial Intelligence-Powered BSA/AML and Fraud Detection Solution for Community Financial Institutions – Yahoo…

Enables community banks and credit unions to improve detection, classification of BSA/AML cases and decrease case resolution time while reducing cost

ST. LOUIS, MO / ACCESSWIRE / October 28, 2020 / Neocova, the St. Louis-based technology provider offering fully secure AI and cloud-first banking products including core, analytics, fraud, and regulatory compliance for community banks and credit unions, today announced the launch of Groundswell AI. Groundswell AI is a Bank Secrecy Act/anti-money laundering (BSA/AML) and fraud detection solution that improves both detection and classification of BSA/AML cases, decreases case resolution time, and reduces compliance function-related costs.

This newly launched technology joins Neocova's comprehensive suite of products and services for community financial institutions, which includes Fineuron, a fully secure, cloud-native, and open API enterprise technology platform; Spotlight AI, a best-in-class data analytics and visualization tool; and Ambios, a complete bank core replacement. Each product service model is designed by bankers for bankers with a focus on white-glove service aligned to SLAs, a dedicated customer success manager, and complementary solutions' knowledge bases and training hubs.

Neocova's Groundswell AI launch comes amid a growing focus on BSA/AML compliance among regulators and the banking community following last month's FinCEN Files leak, which brought to light $2 trillion in suspicious transactions over the course of several years. For bank executives, Groundswell AI generates continuous visibility into bank functions that often only emerge during a regulatory or audit exam, shifting compliance from a source of concern and risk to a source of confidence.

"With both the volume and sophistication of fraudulent activity skyrocketing in the financial industry, especially in the face of the ongoing digital transformations driven in part by the COVID-19 pandemic, community financial institutions need more powerful tools in their arsenal," said Neocova's Co-founder and CEO, Sultan Meghji. "Groundswell AI embodies the work smarter, not harder' mindset that all community financial institutions need to embrace to survive. Equally important, regulators are now welcoming these AI-enabled solutions due to their incredible speed, accuracy, and reliability when compared with manual processes."

Groundswell AI reduces false-positive rates by leveraging a multi-layered, multi-technology approach that uses smart rules, as well as both shallow and deep learning. Groundswell AI's self-service capabilities allow compliance executives to modify existing rules or create new ones through a Transaction Monitoring Control Panel. It also offers end-to-end case management from alert notification to Suspicious Activity Reports (SAR) filing, which is automated with API integration to FinCEN.
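
For readers curious how a layered, multi-technology design can reduce false positives in principle, the sketch below runs cheap explainable rules first and only sends rule hits to a learned scoring model. It is a generic illustration of the technique, not Neocova's Groundswell AI; the thresholds, features, and scoring function are all invented for the example.

```python
# Generic rules-plus-model transaction triage; every threshold and feature is illustrative.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country_risk: float   # 0 (low) to 1 (high), from a hypothetical risk table
    txns_last_24h: int

def rule_layer(t: Transaction) -> bool:
    """Cheap, explainable rules run first; only their hits reach the model."""
    return t.amount >= 10_000 or t.country_risk > 0.8 or t.txns_last_24h > 20

def model_score(t: Transaction) -> float:
    """Stand-in for a trained classifier's probability of suspicious activity."""
    # A real system would call a fitted model here, e.g. model.predict_proba(...)
    return min(1.0, 0.5 * t.country_risk + 0.3 * t.amount / 50_000 + 0.2 * t.txns_last_24h / 50)

def triage(t: Transaction, threshold: float = 0.6) -> str:
    if not rule_layer(t):
        return "clear"        # never reaches an analyst, cutting false positives
    return "escalate" if model_score(t) >= threshold else "review later"

print(triage(Transaction(amount=12_500, country_risk=0.9, txns_last_24h=30)))  # "escalate"
```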

"As a former and longtime community bank CEO, I understand the tremendous pressures faced by today's banking leaders who are dealing with compressed margins, rising compliance costs, and a new breed of competitors," said Lee Keith, President of Banking Services at Neocova. "This latest tool from Neocova gives community financial institutions a clear pathway to creating efficiencies and lowering the cost of the compliance process, while at the same time shifting valuable resources to revenue-generating functions within the institution."

The launch of Groundswell AI comes shortly after Neocova announced industry veteran Matt Beecher as president, where he oversees the rapid adoption of Neocova's products as community banks and credit unions ramp up digital transformation. To learn more, visit neocova.com.

About Neocova

Neocova is a fast-growing, St. Louis-based financial technology firm with operations in New York. The company offers artificial intelligence, analytics, and other cloud-based systems that enable financial institutions to operate more efficiently, effectively, and securely by removing the stresses of managing complex systems and complicated contracts. More information is available at https://neocova.com/.

Media Contact: Michelle Mead, Caliber Corporate Advisers, 888.550.6385 ext. 7, michelle@calibercorporate.com

SOURCE: Neocova

View source version on accesswire.com: https://www.accesswire.com/612664/Neocova-Launches-Groundswell-AI-an-Artificial-Intelligence-Powered-BSAAML-and-Fraud-Detection-Solution-for-Community-Financial-Institutions

Artificial intelligence continues to mature with o9 and Project44 partnership – DC Velocity

Increasing capabilities in artificial intelligence (AI) could help supply chain control towers live up to the promise that supporters have long touted, according to o9 Solutions, a planning and operations platform vendor that today said it has partnered with supply chain visibility provider project44.

The linkage is the latest sign that AI is emerging from the computer lab to gain traction in everyday applications, ranging from rolling warehouse robots and self-driving forklifts to supply chain planning software and digital freight matching platforms to flying DC drones and piece-picking robots.

Under terms of the new deal, project44 will provide a data signal to the o9 planning platform's control tower to identify disruptions in the global transportation network. Dallas-based o9 will then generate actionable insights and pass them on to the partners' joint customers, enabling them to implement agile planning and scenario management to mitigate risks and drive delivery excellence within the supply chain, the companies said.

"Global supply chains continue to experience immense pressure from constant market disruption. By joining forces with the global leader in advanced visibility, we will help our customers gain better control of their increasingly complex and fractured supply chains," o9 CEO Chakri Gottemukkala said in a release.

The integration comes as supply chain control towers have evolved rapidly in recent years, increasing their ability to identify opportunities and respond to shifts in demand and supply, according to o9. Despite that progress, the control towers cannot create visibility without access to the highest quality data across all modes and geographies, including predictive tracking, real-time estimated times of arrival (ETAs), and comprehensive historical data, the company said.

In search of that precise logistics data, o9 made a similar match last month to access FourKites' real-time in-transit freight tracking information, likewise saying the move would allow customers to receive proactive notifications of freight disruptions and reduce friction in complex global supply chains.

Chicago-based project44 says it can also create that high-quality data by applying machine learning (ML) techniques to incorporate the context of each trip and comprehensive historical data, including historical carrier data, local weather and road regulations, driver hours of service, and dock hours.

"Project44's partnership with o9 combines game-changing digital powers that will help our mutual customers make faster, more effective decisions and, as a result, build more resilient supply chains," project44 Founder and CEO Jett McCandless said in a release. "With project44's high-quality contextualized data and o9's AI-powered planning capabilities, we will unlock the value of supply chain transparency and predictability in an entirely new way."

Leap And Learn: The Common Thread Of Artificial Intelligence Success Stories – Forbes

Enterprises seeing real success with artificial intelligence have something in common: they are capable of learning quickly from their successes or failures and re-applying those lessons into the mainstream of their businesses.

Of course, there's nothing new about the ability to rinse, learn and repeat, which has been a fundamental tenet of business success for ages. But because AI is all about real-time, nanosecond responsiveness to a range of things, from machines to markets, the ability to leap and learn at a blinding pace has taken on a new urgency.

At this moment, only 10% of companies are seeing financial benefits from their AI initiatives, a survey of 3,000 executives conducted by Boston Consulting Group and MIT Sloan Management Review finds. There is a lot of AI going around: more than half (57%) are piloting or deploying AI, up from 46% in 2017. In addition, at least 70% understand the business value proposition of AI. But financial results have been elusive.

So, what are the enlightened 10% doing to finally realize actual, tangible gains from AI? They do all the right things, of course, but there's an extra piece of the magic thrown in. For instance, scaling AI, often seen as the path to enterprise adoption, has only limited value by itself. Adding the ability to embed AI into processes and solutions improves the likelihood of significant benefits dramatically, but only to 39%, the survey shows.

Successful AI adopters have figured out how to learn from their AI experiences and apply them in forward-looking ways to their businesses, the survey report's authors, led by Sam Ransbotham, conclude. "Our survey analysis demonstrates that leaders share one outstanding feature: they intend to become more adept learners with AI." Organizations with this ability to learn and to understand the potential and pitfalls of AI, which enables them to sense and respond quickly and appropriately to changing conditions such as a new competitor or a worldwide pandemic, are more likely to take advantage of those disruptions.

In other words, they give executives and employees the space they need to better understand, adjust and adapt to AI-driven processes and figure out their roles in making it all work. Automation is not thrust upon them with no preparation or training. "Realizing significant financial benefits with AI requires far more than a foundation in data, infrastructure, and talent," the researchers state. Even embedding AI in business processes is not enough.

Those organizations that lead the way with AI success pursue the following strategies:

They facilitate systematic and continuous learning between humans and machines. "Organizational learning with AI isn't just machines learning autonomously. Or humans teaching machines. Or machines teaching humans," Ransbotham and his co-authors state. "It's all three." Organizations that enable humans and machines to continuously learn from each other with all three methods are five times more likely to realize significant financial benefits than organizations that learn with a single method.

They develop multiple ways for humans and machines to interact. "Deploying the appropriate interaction modes in the appropriate context is critical," the co-authors state. For example, some situations may require an AI system to make a recommendation and humans to decide whether to implement it. Some context-rich environments may require humans to generate solutions and AI to evaluate the quality of those solutions.

They change to learn, and learn to change. Successful initiatives don't just change processes to use AI; they change processes in response to what they learn with AI.

AI has great potential to expand our visions of where and how businesses can deliver greater service in the months and years ahead. But it requires more than simply installing new systems and processes and waiting to see the results. It's a continuous process of improvement and innovation.

Artificial Intelligence Search Technology Will be Used to Help Modernize US Federal Pathology Facility – Benzinga

Technology developed by Canadian researchers has been adopted by the Joint Pathology Center (JPC), which has the world's largest collection of preserved human tissue samples. The center will use an artificial intelligence (AI) search engine to index and search its digital archive as part of a modernization effort. The image search engine was designed by researchers at the Laboratory for Knowledge Inference in Medical Image Analysis (Kimia Lab) at the University of Waterloo.

Waterloo, Canada, October 28, 2020 --(PR.com)-- Technology developed by Canadian researchers has been adopted by a major pathology facility in the United States.

The Joint Pathology Center (JPC), which has the world's largest collection of preserved human tissue samples, will use an artificial intelligence (AI) search engine to index and search its digital archive as part of a modernization effort.

The image search engine was designed by researchers at the Laboratory for Knowledge Inference in Medical Image Analysis (Kimia Lab) at the University of Waterloo. It is commercialized under the name Lagotto (TM).

The image retrieval technology, with the scientific name Yottixel, allows pathologists, researchers and educators to search large archives of digital images to tap into rich diagnostic data.

Yottixel will be used to enhance biomedical research for infectious diseases and cancer, enabling easier data sharing to facilitate collaboration and medical advances.

The JPC is the leading pathology reference centre for the US federal government and part of the US Defense Health Agency. In the last century, it has collected more than 55 million glass slides and 35 million tissue block samples. Its data spans every major epidemic and pandemic, and was used to sequence the Spanish flu virus of 1918. It is expected that the modernization will also help researchers better understand and fight the COVID-19 pandemic.

"We are delighted to see that our algorithms are about to explore the world's largest digital archive of biopsy samples," Professor Hamid Tizhoosh, the Director of Kimia Lab, says. "We will continue to design and commercialize novel AI solutions for the medical field. The opportunity comes with unprecedented challenges that need fresh ideas and established competency to fully exploit the big data for the diagnostic imaging, precision medicine, and drug discovery of the future."

Researchers at Waterloo have obtained promising diagnostic results using their AI search technology to match digital images of tissue samples in suspected cancer cases with known cases in a database. In a paper published earlier this year, a validation project led by Kimia Lab achieved accurate diagnoses for 32 kinds of cancer in 25 organs and body parts.

"We showed Yottixel can get incredibly encouraging results if it has access to a large archive," said Hamid Tizhoosh. "Image search is undoubtedly a platform to intelligently exploit big image data by exploring medical repositories."

About Kimia Lab

The Laboratory for Knowledge Inference in Medical Image Analysis (Kimia Lab) is a research group hosted at the Faculty of Engineering, University of Waterloo, ON, Canada. Kimia Lab, established in 2013, is a member of the Waterloo Artificial Intelligence Institute and conducts research at the forefront of mass image data in medical archives using machine learning schemes. The lab trains graduate and undergraduate students and annually hosts international visiting scholars. Professor Hamid Tizhoosh, Kimia Lab's director, is an expert in medical image analysis who has been working on different aspects of artificial intelligence since 1993. He is a faculty affiliate of the Vector Institute, Toronto, Canada.

Contact Information: Kimia Lab, Hamid Tizhoosh, 519-888-4567, Contact via Email, http://kimia.uwaterloo.ca/

Read the full story here: https://www.pr.com/press-release/824263

Press Release Distributed by PR.com

Investing in Artificial Intelligence (AI) – Everything You Need to Know – Securities.io

Artificial Intelligence (AI) is a field that requires no introduction. AI has ridden the coattails of Moore's Law, which states that the speed and capability of computers can be expected to double every two years. Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially, doubling every 3 to 4 months, with the end result that the amount of computing resources allocated to AI has grown by 300,000x. No other industry can compare with these growth statistics.

We will explore what fields of AI are leading this acceleration, what companies are best positioned to take advantage of this growth, and why it matters.

Machine learning is a subfield of AI that essentially involves programming machines to learn. There are multiple types of machine learning algorithms; the most popular by far is deep learning, which involves feeding data into an Artificial Neural Network (ANN). An ANN is a very compute-intensive network of mathematical functions joined together in a format inspired by the neural networks found in the human brain.

The more data that is fed into an ANN, the more precise the ANN becomes. For example, suppose you are attempting to train an ANN to identify cat pictures. If you feed the network 1,000 cat pictures, it will reach a modest level of accuracy, perhaps 70%. Increase that to 10,000 pictures and the accuracy may rise to 80%; at 100,000 pictures it may reach 90%, and so on.
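
A rough sketch of this data-versus-accuracy relationship is shown below, with a small neural network trained on synthetic data standing in for a real image classifier. The dataset, network size, and exact accuracy numbers are illustrative assumptions; the point is only that accuracy tends to climb as the training set grows.

```python
# Sketch of the "more data, more accuracy" pattern; synthetic data stands in for cat photos.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20_000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5_000, random_state=0)

for n in (1_000, 5_000, 15_000):
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
    clf.fit(X_train[:n], y_train[:n])
    print(f"{n:>6} training examples -> test accuracy {clf.score(X_test, y_test):.2f}")
```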

Herein lies one of the opportunities: companies that dominate the field of AI chip development are naturally ripe for growth.

There are many other types of machine learning that show promise, such as reinforcement learning, which trains an agent through the repetition of actions and associated rewards. By using reinforcement learning, an AI system can compete against itself with the intention of improving how well it performs. For example, a program playing chess will play against itself repeatedly, with every instance of gameplay improving how it performs in the next game.
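
A bare-bones example of that actions-and-rewards loop is tabular Q-learning on a toy problem, sketched below. This is only meant to make the idea concrete; systems like the chess and Go programs mentioned here rely on far more sophisticated deep reinforcement learning, and everything in the snippet is an illustrative assumption.

```python
# Bare-bones tabular Q-learning: the agent improves by repeating actions and observing rewards.
import random

N_STATES, ACTIONS = 6, [-1, +1]   # states 0..5 on a line; reaching state 5 earns a reward
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for _ in range(2_000):            # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally, otherwise take the action currently believed to be best
        act = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda a: q[(s, a)])
        s_next = min(max(s + act, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Nudge the value of (state, action) toward reward plus discounted future value
        q[(s, act)] += alpha * (reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, act)])
        s = s_next

# After training, the learned policy heads toward the goal (+1) from every state
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```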

Currently the best types of AI use a combination of both deep learning and reinforcement learning in what is commonly referred to as deep reinforcement learning. All of the leading AI companies in the world such as Tesla use some type of deep reinforcement learning.

While there are other important types of machine learning systems currently being advanced, such as meta-learning, for the sake of simplicity deep learning and its more advanced cousin, deep reinforcement learning, are what investors should be most familiar with. The companies that are at the forefront of this technological advancement will be best positioned to take advantage of the huge exponential growth we are witnessing in AI.

If there is one differentiator between companies that will succeed and become market leaders and companies that will fail, it is big data. All types of machine learning are heavily reliant on data science, which is best described as the process of understanding the world from patterns in data. In this case the AI is learning from data, and the more data, the more accurate the results. There are some exceptions to this rule due to what is called overfitting, but this is a concern that AI developers are aware of and take precautions to compensate for.

The importance of big data is why companies such as Tesla have a clear market advantage when it comes to autonomous vehicle technology. Every single Tesla that is in motion and using auto-pilot is feeding data into the cloud. This enables Tesla to use deep reinforcement learning and other algorithm tweaks to improve the overall autonomous vehicle system.

This is also why companies such as Google will be so difficult for challengers to dethrone. Every day that goes by is a day that Google collects data from its myriad of products and services; this includes search results, Google AdSense, Android mobile devices, the Chrome web browser, and even the Nest thermostat. Google is drowning in more data than any other company in the world. This is not even counting all of the moonshots they are involved in.

By understanding why deep learning and data science matter, we can then infer why the companies below are so powerful.

There are three current market leaders that are going to be very difficult to challenge.

Alphabet Inc. is the umbrella company for all Google products, which include the Google search engine. A short history lesson is necessary to explain why they are such a market leader in AI. In 2010, the British company DeepMind was launched with the goal of applying various machine learning techniques toward building general-purpose learning algorithms.

In 2013, DeepMind took the world by storm with various accomplishments including becoming world champion at seven Atari games by using deep reinforcement learning.

In 2014, Google acquired DeepMind for $500 million. Shortly thereafter, in 2015, DeepMind's AlphaGo became the first AI program to defeat a professional human Go player, and the first program to defeat a Go world champion. For those who are unfamiliar, Go is considered by many to be the most challenging game in existence.

DeepMind is currently considered a market leader in deep reinforcement learning and Artificial General Intelligence (AGI), a futuristic type of AI with the goal of eventually achieving or surpassing human-level intelligence.

We still need to factor in the other types of AI that Google is currently involved in, such as Waymo, a market leader in autonomous vehicle technology, second only to Tesla, and the secretive AI systems currently used in the Google search engine.

Google is currently involved in so many levels of AI that it would take an exhaustive paper to cover them all.

As previously stated, Tesla is taking advantage of big data from its fleet of on-road vehicles, collecting data from its auto-pilot system. The more data that is collected, the more the system can improve using deep reinforcement learning. This is especially important for what are deemed edge cases: scenarios that don't happen frequently in real life.

For example, it is impossible to predict and program in every type of scenario that may happen on the road, such as a suitcase rolling into traffic or a plane falling from the sky. In these cases there is very little specific data, and the system needs to associate data from many different scenarios. This is another advantage of having a huge amount of data: while it may be the first time a Tesla in Houston encounters a scenario, it is possible that a Tesla in Dubai has already encountered something similar.

Tesla is also a market leader in battery technology and in electric technology for vehicles. Both of these rely on AI systems to optimize the range of a vehicle before a recharge is required. Tesla is known for its frequent over-the-air updates with AI optimizations that improve the performance and range of its vehicle fleet by a few percentage points.

As if this were not sufficient, Tesla is also designing its own AI chips. This means it is no longer reliant on third-party chips, and it can optimize chips to work with its full self-driving software from the ground up.

NVIDIA is the company best positioned to take advantage of the current rise in demand for GPU (graphics processing unit) chips, as it is currently responsible for 80% of all GPU sales.

While GPUs were initially used for video games, they were quickly adopted by the AI industry, specifically for deep learning. The reason GPUs are so important is that the speed of AI computations is greatly enhanced when computations are carried out in parallel. Training a deep learning ANN depends heavily on matrix multiplications, which benefit enormously from this parallelism.
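
A short snippet makes the point: the matrix multiplications at the heart of ANN training consist of many independent dot products that a GPU can execute in parallel. The example assumes PyTorch is installed and simply falls back to the CPU when no CUDA-capable GPU is present; exact timings will vary by hardware.

```python
# The core operation GPUs accelerate: a large matrix multiplication.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.time()
c = a @ b                      # thousands of independent dot products, computed in parallel
if device == "cuda":
    torch.cuda.synchronize()   # wait for the GPU to finish before stopping the clock
print(f"{device}: 4096x4096 matmul took {time.time() - start:.3f} s")
```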

NVIDIA is constantly releasing new AI chips that are optimized for different use cases and requirements of AI researchers. It is this constant pressure to innovate that is maintaining NVIDIA as a market leader.

It is impossible to list all of the companies that are involved in some form of AI; what is important is understanding the machine learning technologies that are responsible for most of the innovation and growth the industry has witnessed. We have highlighted three market leaders; many more will come along. To keep abreast of AI, you should stay current with AI news, avoid AI hype, and understand that this field is constantly evolving.
