Coronavirus will finally give artificial intelligence its moment – San Antonio Express-News

For years, artificial intelligence seemed on the cusp of becoming the next big thing in technology - but the reality never matched the hype. Now, the changes caused by the covid-19 pandemic may mean AI's moment is finally upon us.

Over the past couple of months, many technology executives have shared a refrain: Companies need to rejigger their operations for a remote-working world. That's why they have dramatically increased their spending on powerful cloud-computing technologies and migrated more of their work and communications online.

With fewer people in the office, these changes will certainly help companies run more nimbly and reliably. But the centralization of more corporate data in the cloud is also precisely what's needed for companies to develop the AI capabilities - from better predictive algorithms to increased robotic automation - we've been hearing about for so long. If business leaders invest aggressively in the right areas, it could be a pivotal moment for the future of innovation.

To understand all the fuss around artificial intelligence, some quick background might be useful: AI is based on computer science research that looks at how to imitate the workings of human intelligence. It uses powerful algorithms that digest large amounts of data to identify patterns. These can be used to anticipate, say, what consumers will buy next or offer other important insights. Machine learning - essentially, algorithms that can improve at recognizing patterns on their own, without being explicitly programmed to do so - is one subset of AI that can enable applications like providing real-time protection against fraudulent financial transactions.

Historically, AI hasn't fully lived up to its hype. We're still a ways off from being able to have natural, life-like conversations with a computer, or getting truly safe self-driving cars. Even when it comes to improving less advanced algorithms, researchers have struggled with limited datasets and a lack of scalable computing power.

Still, Silicon Valley's AI-startup ecosystem has been vibrant. Crunchbase says there are 5,751 privately held AI companies in the U.S. and that the industry received $17.4 billion in new funding last year. International Data Corporation (IDC) recently forecast that global AI spending will rise to $96.3 billion in 2023 from $38.4 billion in 2019. A Gartner survey of chief information officers and IT leaders, conducted in February, found that enterprises are projecting to double their number of AI projects, with over 40% planning to deploy at least one by the end of 2020.

As the pandemic accelerates the need for AI, these estimates will most likely prove to be understated. Big Tech has already demonstrated how useful AI can be in fighting covid-19. For instance, Amazon.com partnered with researchers to identify vulnerable populations and act as an "early warning" system for future outbreaks. BlueDot, an Amazon Web Services startup customer, used machine learning to sift through massive amounts of online data and anticipate the spread of the virus in China.

Pandemic lockdowns have also affected consumer behavior in ways that will spur AI's growth and development. Take a look at the soaring e-commerce industry: As consumers buy more online to avoid the new risks of shopping in stores, they are giving sellers more data on preferences and shopping habits. Bank of America's internal card-spending data for e-commerce points to rising year-over-year revenue growth rates of 13% for January, 17% for February, 24% for March, 73% for April and 80% for May. The data these transactions generate is a goldmine for retailers and AI companies, allowing them to improve the algorithms that provide personalized recommendations and generate more sales.

The growth in online activity also makes a compelling case for the adoption of virtual customer-service agents. International Business Machines Corporation estimates that only about 20% of companies use such AI-powered technology today, but it predicts that almost all enterprises will adopt it in the coming years. By allowing computers to handle the easier questions, human representatives can focus on the more difficult interactions, thereby improving customer service and satisfaction.

Another area of opportunity comes from the increase in remote working. As companies struggle with the challenge of bringing employees back to the office, they may be more receptive to AI-based process automation software, which can handle mundane tasks like data entry. Its ability to read invoices and update databases without human intervention can reduce the need for some types of office work while also improving its accuracy. UiPath, Automation Anywhere and Blue Prism are the three leading vendors in this space, according to Goldman Sachs, accounting for about 36% of the roughly $850 million market last year. More imaginative AI projects are on the horizon. Graphics semiconductor-maker NVIDIA Corporation and luxury automaker BMW Group recently announced a deal where AI-powered logistics robots will be used to manufacture customized vehicles. In mid-May, Facebook said it was working on an AI lifestyle assistant that can recommend clothes or pick out furniture based on your personal taste and the configuration of your room.

As with the mass adoption of any new technology, there will be winners and losers. Among the winners, cloud-computing vendors will thrive as they capture more and more data. According to IDC, Amazon Web Services was number one in infrastructure cloud-computing services, with a 47% market share last year, followed by Microsoft at 13%.

But NVIDIA may be at an even better intersection of cloud and AI tech right now: Its graphics chip technology, once used primarily for video games, has morphed into the preeminent platform for AI applications. NVIDIA also makes the most powerful graphics processing units, so it dominates the AI-chip market used by cloud-computing companies. And it recently launched new data center chips that use its next-generation "Ampere" architecture, providing developers with a step-function increase in machine-learning capabilities.

On the other hand, the legacy vendors that provide computing equipment and software for in-office environments are most at risk of losing out in this technological shift. This category includes server sellers like Hewlett Packard Enterprise Company and router-maker Cisco Systems, Inc.

We must not ignore the more insidious consequences of an AI renaissance, either. There are a lot of ethical hurdles and complications ahead involving job loss, privacy and bias. Any increased automation may lead to job reductions, as software and robots replace tasks performed by humans. As more data becomes centrally stored on the cloud, the risk of larger data breaches will increase. Top-notch security has to become another key area of focus for technology and business executives. They also need to be vigilant in preventing algorithms from discriminating against minority groups, starting with monitoring their current technology and compiling more accurate datasets.

But the upside of greater computing power, better business insights and cost efficiencies from AI is too big to ignore. So long as companies proceed responsibly, years from now, the advances in AI catalyzed by the coronavirus crisis may be one of the silver linings we remember from 2020.

- - -

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Kim is a Bloomberg Opinion columnist covering technology.

Apple using machine learning for almost everything, and privacy-first approach actually better – 9to5Mac

Apple's artificial intelligence (AI) chief says that Apple is using machine learning in almost every aspect of how we interact with our devices, but there is much more to come.

John Giannandrea says he moved from Google to Apple because the potential of machine learning (ML) to impact people's lives is so much greater at the Cupertino company.

Giannandrea spoke with Ars Technica's Samuel Axon, outlining how Apple uses ML now.

"There's a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we've released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we're not using machine learning."

"It's hard to find a part of the experience where you're not doing some predictive [work]. Like, app predictions, or keyboard predictions, or modern smartphone cameras do a ton of machine learning behind the scenes to figure out what they call 'saliency,' which is like, what's the most important part of the picture? Or, if you imagine doing blurring of the background, you're doing portrait mode […]"

Savvy iPhone owners might also notice that machine learning is behind the Photos app's ability to automatically sort pictures into pre-made galleries, or to accurately give you photos of a friend named Jane when her name is entered into the app's search field […]

Most [augmented reality] features are made possible thanks to machine learning […]

Borchers also pointed out accessibility features as important examples. "They are fundamentally made available and possible because of this," he said. "Things like the sound detection capability, which is game-changing for that particular community, is possible because of the investments over time and the capabilities that are built in" […]

"All of these things benefit from the core machine learning features that are built into the core Apple platform. So, it's almost like, 'Find me something where we're not using machine learning.'"

He was, though, surprised at areas where Apple had not been using ML before he joined the company.

"When I joined Apple, I was already an iPad user, and I loved the Pencil," Giannandrea (who goes by J.G. to colleagues) told me. "So, I would track down the software teams and I would say, 'Okay, where's the machine learning team that's working on handwriting?' And I couldn't find it." It turned out the team he was looking for didn't exist - a surprise, he said, given that machine learning is one of the best tools available for the feature today.

"I knew that there was so much machine learning that Apple should do that it was surprising that not everything was actually being done."

That has changed, and will continue to change, however.

"That has changed dramatically in the last two to three years," he said. "I really honestly think there's not a corner of iOS or Apple experiences that will not be transformed by machine learning over the coming few years."

It's long been thought that Apple's privacy focus - wanting to do everything on the device, and not analyzing huge volumes of personal data - means that it can't compete with Google, because it can't benefit from masses of data pulled from millions of users. Giannandrea says this is absolutely not the case.

"I understand this perception of bigger models in data centers somehow are more accurate, but it's actually wrong. It's actually technically wrong. It's better to run the model close to the data, rather than moving the data around."

In other words, you get better results when an ML model learns from your usage of your device than when it relies on aggregated data from millions of users. Local processing can also be used in situations where it simply wouldn't be realistic to send data to a server, like choosing the exact moment to act on you pressing the Camera app shutter release button for the best frame.

Understandably, Giannandrea wouldn't be drawn on what Apple is working on now, but did give one example of what might be possible when you combine the power of Apple Silicon Macs with machine learning.

"Imagine a video editor where you had a search box and you could say, 'Find me the pizza on the table.' And it would just scrub to that frame."

The whole piece is very much worth reading.

Get over 65 hours of Big Data and Machine Learning training for less than $40 – Boing Boing

Even in horrible economic times, a few simple rules hold unshakably true. And one of those rules is that if you possess an in-demand skill, you'll always find work - and often at a top market salary, to boot.

If you understand Big Data and how to find order from the chaos of massive stockpiles of raw information, you can land a six-figure salary. Even now. And if you know how to program machines to think and respond for themselves, you're in an even better position to make a very comfortable living.

If you're unsure about your career future or just want to change your tax bracket, the training in The Complete 2020 Big Data and Machine Learning Bundle can hand you everything you need to start down the path toward life as a Big Data analyst or machine learning engineer.

Across 10 courses hosting almost 70 hours of content, this instruction explains the ins and outs of these exploding job fields, even for those who have no experience with statistics or advanced technology.

Half of the courses here look deeply into the process of using big data, the vast amounts of structured and unstructured information that most businesses collect on a daily basis. Of course, you'll never get on top of that tidal wave with your eyes and a ream of spreadsheets, so these courses examine the key analytical tools and languages data experts use to organize findings and extract meaning from all that unprocessed data.

The training covers industry-leading processes and software like Scala, Hadoop, Elasticsearch, MapReduce and Apache Spark, all valuable means to unlock the secrets hidden inside that mountain of numbers.

The other half of the coursework focuses on machine learning as the Machine Learning for Absolute Beginners - Level 1 course offers newbies a real understanding of what machine learning, artificial intelligence, and deep learning really mean.

Helping computers determine how to assess information and adjust their behavior on their own isn't science fiction. Training in the Python coding language at the heart of these fields, as well as in tools like TensorFlow and Keras, not only makes it all relatable but can put you in a position to get hired as a machine learning expert with the paycheck to match.

This course package usually retails for almost $1,300, but your path to a new career in Big Data and machine learning can start now for a whole lot less, only $39.90.

Elon Musk-backed OpenAI to release text tool it called dangerous – The Guardian

OpenAI, the machine learning nonprofit co-founded by Elon Musk, has released its first commercial product: a rentable version of a text generation tool the organisation once deemed too dangerous to release.

Dubbed simply "the API", the new service lets businesses directly access the most powerful version of GPT-3, OpenAI's general-purpose text generation AI.

The tool is already a more than capable writer. Feeding an earlier version of the system the opening line of George Orwell's Nineteen Eighty-Four - "It was a bright cold day in April, and the clocks were striking thirteen" - the system recognises the vaguely futuristic tone and the novelistic style, and continues with: "I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science."

Now, OpenAI wants to put the same power to more commercial uses such as coding and data entry. For instance, if, rather than Orwell, the prompt is a list of the names of six companies and the stock tickers and foundation dates of two of them, the system will finish it by filling in the missing details for the other companies.
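For a sense of what such a call might look like in practice, here is a minimal sketch using OpenAI's Python client as it existed around the API's launch; the engine name, prompt contents, and parameters are illustrative assumptions rather than details from the article:

    import openai  # assumes the OpenAI Python client is installed

    openai.api_key = "YOUR_API_KEY"  # placeholder credential

    # A completion-style prompt: two complete rows are given, and the model
    # is asked to fill in the missing ticker and founding date for the next company.
    prompt = (
        "Company | Ticker | Founded\n"
        "Microsoft | MSFT | 1975\n"
        "Alphabet | GOOGL | 1998\n"
        "Amazon |"
    )

    response = openai.Completion.create(
        engine="davinci",   # assumed engine name
        prompt=prompt,
        max_tokens=32,
        temperature=0,
    )
    print(response.choices[0].text)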

It will mark the first commercial uses of a technology which stunned the industry in February 2019 when OpenAI first revealed its progress in teaching a computer to read and write. The group was so impressed by the capability of its new creation that it was initially wary of publishing the full version, warning that it could be misused for ends the nonprofit had not foreseen.

"We need to perform experimentation to find out what they can and can't do," said Jack Clark, the group's head of policy, at the time. "If you can't anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously."

Now, that fear has lessened somewhat, with almost a year of GPT-2 being available to the public. Still, the company says: "The field's pace of progress means that there are frequently surprising new applications of AI, both positive and negative."

"We will terminate API access for obviously harmful use-cases, such as harassment, spam, radicalisation, or astroturfing [masking who is behind a message]. But we also know we can't anticipate all of the possible consequences of this technology, so we are launching today in a private beta [test version] rather than general availability."

OpenAI was founded with a $1bn (£0.8bn) endowment in 2015, backed by Musk and others, to "advance digital intelligence in the way that is most likely to benefit humanity". Musk has since left the board, but remains as a donor.

Machine Learning Takes The Embarrassment Out Of Videoconference Wardrobe Malfunctions – Hackaday

Telecommuters: tired of the constant embarrassment of showing up to video conferences wearing nothing but your underwear? Save the humiliation and all those pesky trips down to HR with Safe Meeting, the new system that uses the power of artificial intelligence to turn off your camera if you forget that casual Friday isn't supposed to be that casual.

The following infomercial is brought to you by [Nick Bild], who says the whole thing is tongue-in-cheek but we sense a certain degree of 'necessity is the mother of invention' here. It's true that the sudden throng of remote-work newbies certainly increases the chance of videoconference mishaps and the resulting mortification, so whatever the impetus, Safe Meeting seems like a great idea. It uses a Pi cam connected to a Jetson Nano to capture images of you during videoconferences, which are conducted over another camera. The stream is classified by a convolutional neural net (CNN) that determines whether it can see your underwear. If it can, it makes a REST API call to the conferencing app to turn off the camera. The video below shows it in action, and that it douses the camera quickly enough to spare your modesty.
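For illustration only (this is not [Nick]'s actual code), a minimal Python sketch of that pipeline might look like the following; the model file, input size, and conferencing endpoint are placeholder assumptions:

    import cv2
    import numpy as np
    import requests
    from tensorflow.keras.models import load_model

    MODEL_PATH = "underwear_cnn.h5"                      # placeholder: a pretrained binary CNN
    CONFERENCE_API = "http://localhost:8080/camera/off"  # placeholder REST endpoint on the conferencing app

    model = load_model(MODEL_PATH)
    cap = cv2.VideoCapture(0)  # the Pi cam keeping an eye on the user

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize and normalize each frame to the CNN's assumed 224x224 RGB input.
        x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
        prob = float(model.predict(x[np.newaxis, ...])[0][0])
        if prob > 0.5:
            # Underwear detected: ask the conferencing app to cut the camera.
            requests.post(CONFERENCE_API)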

We shudder to think about how [Nick] developed an underwear-specific training set, but we applaud him for doing so and coming up with a neat application for machine learning. He's been doing some fun work in this space lately, from monitoring where surfaces have been touched to a 6502-based gesture recognition system.

This App Uses Machine Learning to Detect and Remove Edits from Images – Beebom

Since the evolution of social media platforms like Instagram, Snapchat, and Facebook, we have seen a lot of beauty apps pop up in the market. Even most of the budget phones of recent times come with beauty filters baked into the default camera app. All these apps apply a ton of filters to your pictures, and that can be very annoying. Well, now there's an app that can not only detect but also remove edits from an image.

Created by Redditor Akshat Jagga (u/chancemehmu), Mirage is an app that can detect and remove edits made by image-editing software on an existing image. According to the Haryana-based developer, the app uses machine learning to perform the tasks.

As shown in a video (below) posted by the developer on Reddit, the app works even on screenshots of pictures.

"I recently made an app that uses Machine Learning to detect & undo photoshopped/edited images! Looking for feedback on Mirage."

You can feed the app an image or a screenshot. It will then analyze it for a few seconds, and show two versions of the image. These will show the original input image, and the image with highlights around areas that have been edited. This can be seen in the video as well.

The next screen shows a similar view. Only in this one, the picture on the right shows the original image, before all the effects and filters. This is the one the app makes after removing the edits from the highlighted areas.

Now, we do not have any idea what kind of machine learning algorithm the app is using to detect and remove the edits of the images. However, it definitely looks interesting.

The app is available on both the App Store and the Play Store. While the App Store version costs $1.99, on the Play Store you can get it for free. However, bear in mind that you will need to subscribe to the app before you are able to use it, which is really annoying.

Download Mirage (Android, iOS)

Ninety One, Inc. Partners with the Multi-Scale Robotics Lab at ETH Zurich to Advance Robotic Surgery Through Machine Learning and Artificial…

ZURICH--(BUSINESS WIRE)--The Multi-Scale Robotics Lab (MSRL) at ETH Zurich and Ninety One, Inc. have partnered to advance Precision Medicine and Surgical Robotics through advanced Artificial Intelligence and Machine Learning. "Ninety One has five priority areas that will be core to our near and long-term growth and that will define the future of Digital Health: Personalized Patient Care, Precision Diagnostics, Robotic Surgery, Image Guided Therapy, and Connected Care Delivery. We are proactively teaming up with centers of innovation globally to identify ways to improve patient outcomes, quality of care delivery, and cost productivity, all centered around the Quadruple Aim in medicine. We are delighted to have the opportunity to work with Prof. Bradley Nelson, Christophe Chautems and their medical robotics team at ETH Zurich," said Bleron Baraliu, CEO of Ninety One, Inc.

"The combination of the remote magnetic navigation systems designed at MSRL with machine learning algorithms will open new opportunities to improve the outcome of multiple medical procedures," said Christophe Chautems, Group Leader Medical Robotics at ETH Zurich.

About ETH Zurich

The Multi-Scale Robotics Lab (MSRL) at ETH Zurich pursues a dynamic research program that maintains a strong robotics research focus on several emerging areas of science and technology. A major component of the MSRL research leverages advanced robotics for creating minimally invasive devices for medical application. These devices are controlled with a Magnetic Navigation System that generates a magnetic field in 3D space. Such systems are used to generate magnetic torques and forces on permanent magnets, or soft magnetic materials embedded in tethered robots such as catheters, or untethered microrobots.

For more information visit https://msrl.ethz.ch/

About Ninety One

Ninety One, a privately held data science and software technology company, has redefined the model for CIED Remote Monitoring with its newly released software platform. Ninety One couples the latest mathematical advances in data science with state-of-the-art technologies to digitize, analyze, and prioritize data from implantable cardiac devices, wearables, and beyond. Ninety One is focusing on clinical advancement in predictive analytics and Precision Medicine, and has established key, exclusive partnerships with leading research and healthcare institutions in the United States, Europe, and Asia.

For more information visit https://www.91.life

Growing Adoption of AI and Machine Learning and Increased Use of Drones is Driving Growth in the Global Mining Ventilation Systems Market -…

DUBLIN--(BUSINESS WIRE)--The "Global Mining Ventilation Systems Market 2020-2024" report has been added to ResearchAndMarkets.com's offering.

The mining ventilation systems market is poised to grow by $81.73 million during 2020-2024, progressing at a CAGR of 4% during the forecast period. This report on the mining ventilation systems market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as key vendor analysis.

The market is driven by the growing demand for safety in underground mining and demand for minerals. In addition, increasing demand for precious metals is anticipated to boost the growth of the market as well. This study identifies technological advances as one of the prime reasons driving the mining ventilation systems market growth during the next few years. Also, the growing adoption of AI and machine learning and increasing use of drones will lead to sizable demand in the market.

The mining ventilation systems market analysis includes product segment and geographic landscapes

The mining ventilation systems market covers the following areas:

This robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading mining ventilation systems market vendors that include ABB Ltd., ABC Canada Technology Group Ltd., ABC Industries Inc., Epiroc AB, Howden Group Ltd., New York Blower Co., Sibenergomash-BKZ LLC, Stantec Inc., TLT-Turbo GmbH, and Zitron SA. Also, the mining ventilation systems market analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage on all forthcoming growth opportunities.

The study was conducted using an objective combination of primary and secondary information including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

This study presents a detailed picture of the market by the way of study, synthesis, and summation of data from multiple sources by an analysis of key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research - both primary and secondary.

The market research report provides a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast accurate market growth.

Key Topics Covered:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by Product

Customer landscape

Geographic Landscape

Vendor Landscape

Vendor Analysis

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/yl3xpg

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Four projects receive funding from University of Alabama CyberSeed program – Alabama NewsCenter

Four promising research projects received funding from the University of Alabama CyberSeed program, part of the UA Office for Research and Economic Development.

The pilot seed-funding program promotes research across disciplines on campus while ensuring a stimulating and well-managed environment for high-quality research.

The funded projects come from four major thrusts of the UA Cyber Initiative that include cybersecurity, critical infrastructure protection, applied machine learning and artificial intelligence, and cyberinfrastructure.

"These projects are innovative in their approach to using cutting-edge solutions to tackle critical challenges," said Dr. Jeffrey Carver, professor of computer science and chair of the UA Cyber Initiative.

One project will study cybersecurity of drones and develop strategies to mitigate potential attacks. Led by Dr. Mithat Kisacikoglu, assistant professor of electrical and computer engineering, and Dr. Travis Atkison, assistant professor of computer science, the research will produce a plan for the secure design of the power electronics in drones with potential for other applications.

Another project will use machine learning to probe the nature of dark matter using existing data from NASA. The work should position the research team, led by Dr. Sergei Gleyzer, assistant professor of physics and astronomy, and Dr. Brendan Ames, assistant professor of mathematics, to analyze images expected later this year from the Vera Rubin Observatory, the world's largest digital camera.

The CyberSeed program is also funding work planning to use machine learning to accelerate discovery of candidates within a new class of alloys that can be used in real-world experiments. These new alloys, called high-entropy alloys or multi-principal component alloys, are thought to enhance mechanical performance. This project involves Drs. Lin Li and Feng Yan, assistant professors of metallurgical and materials engineering, and Dr. Jiaqi Gong, who begins as associate professor of computer science this month.

A team of researchers is involved in a project to use state-of-the-art cyberinfrastructure technology and hardware to collect, visualize, analyze and disseminate hydrological information. The research aims to produce a proof-of-concept system. The team includes Dr. Sagy Cohen, associate professor of geography; Dr. Brad Peter, a postdoctoral researcher of geography; Dr. Hamid Moradkhani, director of the UA Center for Complex Hydrosystems; Dr. Zhe Jiang, assistant professor of computer science; Dr. D. Jay Cervino, executive director of the UA Office of Information Technology; and Dr. Andrew Molthan with NASA.

The CyberSeed program came from a process that began in April 2019 with the first internal UA cybersummit to meet and define future opportunities. In July, ORED led an internal search for the chair of the Cyber Initiative, announcing Carver in August. In October, Carver led the second internal cybersummit, at which it was agreed the Cyber Initiative would define major thrusts and develop the CyberSeed program.

"While concentrating in these areas specifically, the Cyber Initiative will continue to interact with other researchers across campus to identify other promising cyber-related research areas to grow the portfolio," Carver said.

This story originally appeared on the University of Alabama's website.

Bitglass Integrates CrowdStrike’s Machine-Learning Technology to Provide Zero-Day Advanced Threat Protection in the Cloud – Business Wire

CAMPBELL, Calif.--(BUSINESS WIRE)--Bitglass, the Next-Gen Cloud Security Company, announced today that it has partnered with CrowdStrike, a leader in cloud-delivered endpoint protection, to provide an agentless advanced threat protection (ATP) solution that identifies and remediates both known and zero-day threats on any cloud application or service, as well as any device that accesses corporate IT resources (including personal devices).

Cloud applications and bring your own device (BYOD) policies offer organizations enhanced flexibility and efficiency, but they can also serve as proliferation points for malware if not properly secured. This Original Equipment Manufacturer (OEM) offering from CrowdStrike uses machine learning (ML) and deep file inspection to identify malware and other threats. Together with Bitglass' Next-Gen Cloud Access Security Broker (CASB), threats are automatically remediated based on preset policies.

Bitglass' CASB leverages agentless inline proxies to monitor and mediate traffic between cloud applications and devices in order to enforce granular security policies on data in transit. By incorporating CrowdStrike's detection capabilities directly into Bitglass' agentless proxy, the integration can identify and block malware in real time as infected files are uploaded to cloud applications or downloaded onto devices (even personal devices) - without the need for software installations. Additionally, integration with application programming interfaces (APIs) allows for the detection and quarantining of malware already at rest in the cloud.

"Once malware makes its way into a cloud app, it can quickly spread into connected apps as well as into users' devices," said Anurag Kahol, chief technology officer and co-founder of Bitglass. "Consequently, organizations need a multi-faceted solution that can automatically block malware both at rest and in transit. If they wait for IT teams to review and respond to threat notifications, it's often too late. We're proud to leverage CrowdStrike's industry-leading technology to deliver a robust cloud ATP solution that stops threats and empowers enterprises to embrace the cloud applications and BYOD policies that spur innovation and productivity."

"As a cloud-delivered endpoint protection leader at the forefront of securing organizations from sophisticated tactics, CrowdStrike understands that a successful security strategy lies in the ability to quickly detect, respond and remediate threat activity," said Dr. Sven Krasser, CrowdStrike's chief scientist. "By incorporating our machine learning file-scan engine, which is trained leveraging the 3 trillion endpoint-related events processed weekly by the Falcon Platform, with Bitglass' unique, agentless architecture, customers gain comprehensive, real-time protection and control over corporate data across all endpoints with reduced risk of exposure."

The solution is fully deployed in the cloud and is completely agentless - requiring no hardware appliances or software installations and ensuring rapid deployment. Additionally, Bitglass' Polyscale Architecture scales and adapts to an enterprise's exact needs on the fly. There is no need for backhauling or bottleneck architectures.

For more information, download the joint solution brief here: https://pages.bitglass.com/CD-FY20Q2-CrowdstrikeBitglassSolutionsBrief_LP.html?&utm_source=pr

About Bitglass

Bitglass, the Next-Gen Cloud Security company, is based in Silicon Valley with offices worldwide. The company's cloud security solutions deliver zero-day, agentless, data and threat protection for any app, any device, anywhere. Bitglass is backed by Tier 1 investors and was founded in 2013 by a team of industry veterans with a proven track record of innovation and execution.

Millions of historic newspaper images get the machine learning treatment at the Library of Congress – TechCrunch

Historians interested in the way events and people were chronicled in the old days once had to sort through card catalogs for old papers, then microfiche scans, then digital listings - but modern advances can index them down to each individual word and photo. A new effort from the Library of Congress has digitized and organized photos and illustrations from centuries of news using state-of-the-art machine learning.

Led by Ben Lee, a researcher from the University of Washington occupying the Library's Innovator in Residence position, the Newspaper Navigator collects and surfaces data from images from some 16 million pages of newspapers throughout American history.

Lee and his colleagues were inspired by work already being done in Chronicling America, an ongoing digitization effort for old newspapers and other such print materials. While that work used optical character recognition to scan the contents of all the papers, there was also a crowdsourced project in which people identified and outlined images for further analysis. Volunteers drew boxes around images relating to World War I, then transcribed the captions and categorized the picture.

This limited effort set the team thinking.

"I loved it because it emphasized the visual nature of the pages - seeing the visual diversity of the content coming out of the project, I just thought it was so cool, and I wondered what it would be like to chronicle content like this from all over America," Lee told TechCrunch.

He also realized that what the volunteers had created was in fact an ideal set of training data for a machine learning system. "The question was, could we use this stuff to create an object detection model to go through every newspaper, to throw open the treasure chest?"

The answer, happily, was yes. Using the initial human-powered work of outlining images and captions as training data, they built an AI agent that could do so on its own. After the usual tweaking and optimizing, they set it loose on the full Chronicling America database of newspaper scans.

"It ran for 19 days nonstop - definitely the largest computing job I've ever run," said Lee. But the results are remarkable: millions of images spanning three centuries (from 1789 to 1963) and organized with metadata pulled from their own captions. The team describes their work in a paper you can read here.

Assuming the captions are at all accurate, these images - until recently only accessible by trudging through the archives date by date and document by document - can be searched for by their contents, like any other corpus.

Looking for pictures of the president in 1870? No need to browse dozens of papers looking for potential hits and double-checking the contents in the caption - just search Newspaper Navigator for "president 1870". Or if you want editorial cartoons from the World War II era, you can just get all illustrations from a date range. (The team has already zipped up the photos into yearly packages and plans other collections.)

Here are a few examples of newspaper pages with the machine learning system's determinations overlaid on them (warning: plenty of hat ads and racism):

That's fun for a few minutes for casual browsers, but the key thing is what it opens up for researchers and other sets of documents. The team is throwing a data jam today to celebrate the release of the data set and tools, during which they hope to both discover and enable new applications.

"Hopefully it will be a great way to get people together to think of creative ways the data set can be used," said Lee. "The idea I'm really excited by from a machine learning perspective is trying to build out a user interface where people can build their own data set. Political cartoons or fashion ads, just let users define what they're interested in and train a classifier based on that."

A sample of what you might get if you asked for maps from the Civil War era.

In other words, Newspaper Navigator's AI agent could be the parent for a whole brood of more specific ones that could be used to scan and digitize other collections. That's actually the plan within the Library of Congress, where the digital collections team has been delighted by the possibilities brought up by Newspaper Navigator, and machine learning in general.

"One of the things we're interested in is how computation can expand the way we're enabling search and discovery," said Kate Zwaard. "Because we have OCR, you can find things it would have taken months or weeks to find. The Library's book collection has all these beautiful plates and illustrations. But if you want to know, like, what pictures are there of the Madonna and child, some are categorized, but others are inside books that aren't catalogued."

That could change in a hurry with an image-and-caption AI systematically poring over them.

Newspaper Navigator, the code behind it and all the images and results from it are completely public domain, free to use or modify for any purpose. You can dive into the code at the project's GitHub.

IonQ CEO Peter Chapman on how quantum computing will change the future of AI – VentureBeat

Businesses eager to embrace cutting-edge technology are exploring quantum computing, which depends on qubits to perform computations that would be much more difficult, or simply not feasible, on classical computers. The ultimate goals are quantum advantage, the inflection point when quantum computers begin to solve useful problems, and quantum supremacy, when a quantum computer can solve a problem that classical computers practically cannot. While those are a long way off (if they can even be achieved), the potential is massive. Applications include everything from cryptography and optimization to machine learning and materials science.

As quantum computing startup IonQ has described it, quantum computing is "a marathon, not a sprint." We had the pleasure of interviewing IonQ CEO Peter Chapman last month to discuss a variety of topics. Among other questions, we asked Chapman about quantum computing's future impact on AI and ML.

The conversation quickly turned to Strong AI, or Artificial General Intelligence (AGI), which does not yet exist. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human being can.

"AI in the Strong AI sense, that I have more of an opinion just because I have more experience in that personally," Chapman told VentureBeat. "And there was a really interesting paper that just recently came out talking about how to use a quantum computer to infer the meaning of words in NLP. And I do think that those kinds of things for Strong AI look quite promising. It's actually one of the reasons I joined IonQ. It's because I think that does have some sort of application."

In a follow-up email, Chapman expanded on his thoughts. "For decades it was believed that the brain's computational capacity lay in the neuron as a minimal unit," he wrote. "Early efforts by many tried to find a solution using artificial neurons linked together in artificial neural networks with very limited success. This approach was fueled by the thought that the brain is an electrical computer, similar to a classical computer."

"However, since then, I believe we now know, the brain is not an electrical computer, but an electrochemical one," he added. "Sadly, today's computers do not have the processing power to be able to simulate the chemical interactions across discrete parts of the neuron, such as the dendrites, the axon, and the synapse. And even with Moore's law, they won't next year or even after a million years."

Chapman then quoted Richard Feynman, who famously said: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."

"Similarly, it's likely Strong AI isn't classical; it's quantum mechanical as well," Chapman said.

One of IonQs competitors, D-Wave, argues that quantum computing and machine learning are extremely well matched. Chapman is still on the fence.

"I haven't spent enough time to really understand it," he admitted. "There clearly is a lot of people who think that ML and quantum have an overlap. Certainly, if you think of 85% of all ML produces a decision tree. And the depth of that decision tree could easily be optimized with a quantum computer. Clearly there's lots of people that think that generation of the decision tree could be optimized with a quantum computer. Honestly, I don't know if that's the case or not. I think it's still a little early for machine learning, but there clearly is so many people that are working on it. It's hard to imagine it doesn't have application."

Again, in an email later, Chapman followed up. "ML has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Generally, Universal Quantum Computers excel at these kinds of problems."

Chapman listed three improvements in ML that quantum computing will likely allow:

Strong AI or ML, IonQ isn't particularly interested in either. The company leaves that part to its customers and future partners.

"There's so much to be done in quantum," Chapman said. "From education at one end all the way to the quantum computer itself. I think some of our competitors have taken on lots of the entire problem set. We at IonQ are just focused on producing the world's best quantum computer for them. We think that's a large enough task for a little company like us to handle."

"So, for the moment we're kind of happy to let everyone else work on different problems," he added. "We just think, producing the world's best quantum computer is a large enough task. We just don't have extra bandwidth or resources to put into working on machine learning algorithms. And luckily, there's lots of other companies that think that there's applications there. We'll partner with them in the sense that we'll provide the hardware that their algorithms will run on. But we're not in the ML business per se."

Announcing availability of Inf1 instances in Amazon SageMaker for high performance and cost-effective machine learning inference – idk.dev

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. Tens of thousands of customers, including Intuit, Voodoo, ADP, Cerner, Dow Jones, and Thomson Reuters, use Amazon SageMaker to remove the heavy lifting from each step of the ML process.

When it comes to deploying ML models for real-time prediction, Amazon SageMaker provides you with a large selection of AWS instance types, from small CPU instances to multi-GPU instances. This lets you find the right cost/performance ratio for your prediction infrastructure. Today we announce the availability of Inf1 instances in Amazon SageMaker to deliver high performance, low latency, and cost-effective inference.

The Amazon EC2 Inf1 instances were launched at AWS re:Invent 2019. Inf1 instances are powered by AWS Inferentia, a custom chip built from the ground up by AWS to accelerate machine learning inference workloads. When compared to G4 instances, Inf1 instances offer up to three times the inferencing throughput and up to 45% lower cost per inference.

Inf1 instances are available in multiple sizes, with 1, 4, or 16 AWS Inferentia chips. An AWS Inferentia chip contains four NeuronCores. Each implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses and saves I/O time in the process.

When several AWS Inferentia chips are available on an Inf1 instance, you can partition a model across them and store it entirely in cache memory. Alternatively, to serve multi-model predictions from a single Inf1 instance, you can partition the NeuronCores of an AWS Inferentia chip across several models.

To run machine learning models on Inf1 instances, you need to compile models to a hardware-optimized representation using the AWS Neuron SDK. Since the launch of Inf1 instances, AWS has released five versions of the AWS Neuron SDK that focused on performance improvements and new features, with plans to add more on a regular cadence. For example, image classification (ResNet-50) performance has improved by more than 2X, from 1100 to 2300 images/sec on a single AWS Inferentia chip. This performance improvement translates to 45% lower cost per inference as compared to G4 instances. Support for object detection models starting with Single Shot Detection (SSD) was also added, with Mask R-CNN coming soon.

Now let us show you how you can easily compile, load and run models on ml.Inf1 instances in Amazon SageMaker.

Compiling and deploying models for Inf1 instances in Amazon SageMaker is straightforward thanks to Amazon SageMaker Neo. The AWS Neuron SDK is integrated with Amazon SageMaker Neo to run your model optimally on Inf1 instances in Amazon SageMaker. You only need to complete the following steps:

In the following example use case, you train a simple TensorFlow image classifier on the MNIST dataset, like in this sample notebook on GitHub. The training code would look something like the following:
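A minimal sketch of that training step with the SageMaker Python SDK's TensorFlow estimator might look like the following; the entry-point script, training instance type, and S3 locations are assumptions, and parameter names vary slightly between SDK versions:

    import sagemaker
    from sagemaker.tensorflow import TensorFlow

    role = sagemaker.get_execution_role()

    # 'mnist.py' stands in for the training script from the sample notebook.
    estimator = TensorFlow(
        entry_point="mnist.py",
        role=role,
        instance_count=1,
        instance_type="ml.c5.xlarge",
        framework_version="1.15.2",
        py_version="py3",
    )
    estimator.fit("s3://my-bucket/mnist-data/")  # placeholder S3 training input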

To compile the model for an Inf1 instance, you make a single API call and select ml_inf1 as the deployment target. See the following code:
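A sketch of that compilation call, with an assumed input tensor name and shape for MNIST and a placeholder S3 output path:

    # Compile the trained model for Inf1 with Amazon SageMaker Neo.
    compiled_model = estimator.compile_model(
        target_instance_family="ml_inf1",
        input_shape={"data": [1, 28, 28, 1]},      # assumed input name and shape
        output_path="s3://my-bucket/neo-output/",  # placeholder output location
        framework="tensorflow",
        framework_version="1.15.2",
    )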

Once the machine learning model has been compiled, you deploy the model on an Inf1 instance in Amazon SageMaker using the optimized estimator from Amazon SageMaker Neo. Under the hood, when creating the inference endpoint, Amazon SageMaker automatically selects a container with the Neo Deep Learning Runtime, a lightweight runtime that will load and invoke the optimized model for inference.
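Deployment and invocation might then look like this sketch, assuming the smallest Inf1 size:

    import numpy as np

    # Deploy the compiled model to a real-time endpoint backed by an Inf1 instance.
    predictor = compiled_model.deploy(
        initial_instance_count=1,
        instance_type="ml.inf1.xlarge",
    )

    # Invoke the endpoint with a sample input matching the compiled shape.
    sample_image = np.zeros((1, 28, 28, 1), dtype="float32")
    result = predictor.predict(sample_image)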

That's it! After you deploy the model, you can invoke the endpoint and receive predictions in real time with low latency. You can find a full example on GitHub.

Inf1 instances in Amazon SageMaker are available in four sizes: ml.inf1.xlarge, ml.inf1.2xlarge, ml.inf1.6xlarge, and ml.inf1.24xlarge. Machine learning models developed using TensorFlow and MxNet frameworks can be compiled with Amazon SageMaker Neo to run optimally on Inf1 instances and deployed on Inf1 instances in Amazon SageMaker for real-time inference. You can start using Inf1 instances in Amazon SageMaker today in the US East (N. Virginia) and US West (Oregon) Regions.

Julien Simon is an Artificial Intelligence & Machine Learning Evangelist for EMEA. He focuses on helping developers and enterprises bring their ideas to life.

Machine learning insight will lead to greener and cheaper mobile phone towers – University of Southampton

Published: 27 April 2020

Off-grid renewable energy solutions will be introduced to mobile telecom towers in developing countries through a new collaboration involving researchers at the University of Southampton.

London-based Global Tower Solutions will work with machine learning experts in Electronics and Computer Science on the new project funded by the national SPRINT business support programme.

The partnership will develop a solution that is estimated to cost around half that of existing diesel generators, while also improving access to mobile communication services in targeted countries in Asia and sub-Saharan Africa.

Professor Gopal Ramchurn, Director of the University's Centre for Machine Intelligence, said: "Mobile phone towers make a significant contribution to CO2 emissions and Global Tower Solutions is looking to decrease carbon emissions through a reduction in diesel-powered mobile phone towers."

Through the SPRINT project, the University will apply machine learning techniques to high- and low-resolution datasets, drone imagery, census data, data from satellite images and other data available around settlements. This will help to define the business case for renewable energy for phone towers which can then be delivered to mobile phone operators to identify the most appropriate renewable energy sources and which regions need mobile communications the most.

Mobile communication has been shown to be a key factor in relieving poverty by providing access to information and financial services that drive trade, education, reduction in poverty and better health. The project will also lead to the reduced use of diesel and improved sustainability of small businesses that underpin developing economies.

Mark Eastwood, Chief Executive Officer of Global Tower Solutions, said: "The renewable energy market has evolved over the last 10-12 years and we set the company up 3-4 years ago with the aim of moving from diesel generation towards solar power and storage. We wanted to remove the diesel generation price point using sustainable, non-polluting storage solutions, particularly in emerging markets."

"The SPRINT project will help us to explore the impact of renewable generating assets on both telco tower businesses and local communities, using business insights from datasets. Working with the University of Southampton, we can access expertise that can support us in high-precision localised intelligence, including valuable business insights, topological mapping, individual patterns of usage and movement of the local population."

SPRINT (SPace Research and Innovation Network for Technology) helps businesses through the commercial exploitation of space data and technologies. The £4.8m programme provides unprecedented access to university space expertise and facilities. Southampton researchers are contributing to several SPRINT projects, including a recently announced collaboration with Smallspark Space Systems that is using AI to optimise aerostructure designs.

Tesla releases impressive videos of cars avoiding running over pedestrians – Electrek

Tesla has released a few impressive videos of its Autopilot-powered emergency braking feature helping to avoid running over inattentive pedestrians.

What might be even more impressive is that the automaker says that it sees those events happen every day.

There's a lot of talk about Tesla Autopilot, but one of the least reported aspects of Tesla's semi-autonomous driver-assist system is that it powers a series of safety features that Tesla includes for free in all cars.

One of those features is Emergency Automatic Braking.

We saw the Autopilot-powered safety feature stop for pedestrians in impressive tests by Euro NCAP last year, but now we see it perform in real-world scenarios and avoiding potentially really dangerous situations.

Tesla has now released some examples of its system braking just in time to save pedestrians.

The new videos were released by Andrej Karpathy, Tesla's head of AI and computer vision, in a new presentation at the Scaled Machine Learning Conference.

The conference was held at the end of February, but a video of the presentation was only recently released; the clips appear partway through the talk.

In the three video examples, you can see pedestrians emerging from the sides, out of the field of view, and Tesla's vehicles braking just in time.

Tesla is able to capture and save those videos, thanks to its integrated TeslaCam dashcam feature.

Karpathy says:

"This car might not even have been on Autopilot, but we continuously monitor the environment around us. We saw that there was a person in front and we slammed on the brake."

The engineer added that Tesla is seeing a lot of those events being prevented by its system:

"We see tens to hundreds of these per day where we are actually avoiding a collision, and not all of them are true positives, but a good fraction of them are."

In the rest of the presentation, Karpathy explains how Tesla is applying machine learning to its system in order to improve it enough to lead to a fully self-driving system.

I think it's important to bring attention to these examples, considering that when an accident happens on Autopilot, it gathers so much attention from the media.

Let's see how many of them run with this story.

But I get it. People love crashes a lot more than a near-miss.

On another note, I really like how Karpathy communicates Tesla's self-driving effort. His presentations are always super clear and informative, even for people who are not super experienced in machine learning.

The rest is here:
Tesla releases impressive videos of cars avoiding running over pedestrians - Electrek

The new decade and the rise of AutoML – ITProPortal

In 2019, the World Economic Forum forecasted that data analysts would be in high demand by 2020, and so far this year we're seeing the prediction become a reality. The fact is, as much as companies would love to hire dozens or even hundreds of highly trained data scientists - even in today's challenging economic climate - the skill set is so highly sought after that it can be both difficult and costly to find and integrate the right people.

This is where the role of the data analyst comes in. Many companies have invested in automated machine learning (AutoML), which has enabled them to automate the process of applying machine learning to solve business challenges. What this means is that a wider variety of data analysts, who are not necessarily highly trained data scientists and who may have broader business skill sets, can access and use data more freely.

The move to AutoML is also being driven by the growing recognition that organisations using AI cannot improve the business-led insight generated from that AI without improving access to it. More people need access to data sources, to the models being fed by data, and to data-driven analytics.

Data needs to be democratised. We're past a point where it's acceptable for data access to be restricted only to highly trained data scientists well-versed in manipulating it. If we want to see the mass business benefits of data-driven analytics, data in all its various guises needs to make it outside of the confines of the data science lab and into the hands of a new generation of data analysts and business users.

In this article, we discuss how AutoML and new business operational models are influencing and accelerating the rise of the data analyst in this new decade.

The shift has meant that AutoML now has a broader scope to help democratise data science in general, meaning that it's becoming easier for data analysts to get involved in the data-to-insights pipeline. While AutoML is not going to replace data scientists, it does mean that data analysts can be self-guided through feature creation, feature selection, model creation and comparison, and even operationalisation. What this means is that AutoML drives self-serve, augmented analytics, which can add efficiency to large swaths of the data pipeline.

At a very high level, AutoML is about automating the process of applying machine learning. Early on, AutoML was almost exclusively used for the automatic selection of the best-performing algorithms for a given task and for tuning the hyperparameters of said algorithms.
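
As a rough illustration of that earlier form of AutoML - automatic algorithm selection plus hyperparameter tuning - the sketch below grid-searches the hyperparameters of two candidate scikit-learn models and keeps the best performer. The dataset and search space are assumptions chosen for illustration and do not represent any particular AutoML product.

    # Minimal sketch of automated algorithm selection + hyperparameter tuning.
    # Dataset and search grids are illustrative, not from the article.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    candidates = [
        (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1.0, 10.0]}),
        (RandomForestClassifier(random_state=0),
         {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
    ]

    best_score, best_model = -1.0, None
    for estimator, grid in candidates:
        search = GridSearchCV(estimator, grid, cv=5)   # tune hyperparameters
        search.fit(X_train, y_train)
        if search.best_score_ > best_score:            # keep the best algorithm
            best_score, best_model = search.best_score_, search.best_estimator_

    print(best_model)
    print("held-out accuracy:", best_model.score(X_test, y_test))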

While this has been very helpful for data scientists, until recently it hadn't improved data access or insights for data analysts or business users, who may still be reliant on data scientists to build machine learning-based models in code. However, the emphasis on AutoML has shifted to making machine learning more accessible by automatically building models without the help of data scientists.

In the last two years of the previous decade, one of the biggest operational shifts that became apparent in technology-driven businesses was the continued convergence of data science and business intelligence. It was certainly a far cry from more traditional operational models, where organisations employed separate teams for standard business intelligence (dashboards, reports, data visualisation, SQL) and data science (statistical models, R/Python).

The reasoning is logical: by bringing data science and business intelligence practices together, companies effectively create real-time, centralised access to what may previously have been disparate sources of data. This growing convergence and/or closer collaboration between data science and analytics teams has empowered more people to become data analysts, often referred to as citizen data scientists.

But don't let the term fool you: citizen data scientists come in many forms, and their data analysis skills are empowering business insight in very important ways. Their roles can include the Data Translator, who is bridging the technical expertise of data engineers and data scientists with the operational expertise of marketing, supply chain, manufacturing, risk, and other industry domains.

We are also seeing Data Explorers, who focus on identifying and connecting to new data sources, merging and preparing data, and building production-ready data pipelines. Data Modellers are responsible for building predictive models and generating either a product or a service from those models, and then implementing them.

Regardless of the nature of these new roles, there is a common theme: unlike the data scientists of the previous decade, analysts don't need to master all the intricacies of advanced machine learning and feature engineering. What they bring to the table is an intimate knowledge of the problems at hand and the business questions that need to be answered.

Heads of business units have traditionally had a more difficult time accessing data analytics, having to request reports and analysis from data scientists on a case-by-case basis. The next evolution will be for machine learning itself to become more self-service. Deployment and maintenance of models will become easier and more automated, as will many analytic tasks.

By integrating self-service machine learning into their core business strategies, innovative companies are enabling data analysts to use real-time data at scale to make better and faster decisions throughout their organisations.

It's clear that AI maturity and its resulting data-driven insight cannot improve without expanding the breadth of people who have access to and work with data on a day-to-day basis. It's exciting to see companies prioritise the shift toward a data-driven culture and recognise the economic imperative of data insights. As the new decade progresses, we're set to see this continue as one of the more powerful analytics trends already transforming business in 2020.

Alexis Fournier, Director of AI Strategy, Dataiku

Follow this link:
The new decade and the rise of AutoML - ITProPortal

DPC – Google virtual workshop discusses the use of Machine Learning and AI technologies in the news industry – mediaoffice.ae

Local and regional journalists participated in the two-day session as the Dubai Press Club (DPC), in collaboration with the Google News Initiative, held a virtual workshop on the use of Machine Learning and other Artificial Intelligence-powered technologies for journalists and the media.

The two-day session served as an introduction on how AI and Machine Learning (ML) can be leveraged to enhance the newsgathering process. More than 200 journalists based in the UAE and abroad tuned into the session, which was delivered under the Google News Initiative, a programme that strives to support quality journalism globally and train journalists on the latest Google tools.

Machine Learning and AI technologies have been deployed in almost every industry in the world. A wide range of organisations have adopted these technologies to handle redundant tasks and processes, which helps minimise costs and increase productivity. In the media industry, AI and Machine Learning have raised the efficiency of various tasks ranging from fact-checking to the analysis of vast amounts of data.

Maitha Buhumaid, Director of the Dubai Press Club, said: "We are pleased to work with a reputed global technology company like Google to train journalists on how they can benefit from Machine Learning and AI technologies in their everyday work. The session outlined the fundamentals of machine learning and how newsrooms around the world are using these technologies to enhance their operations."

Buhumaid added that the virtual workshop is part of a series of virtual events being organised by DPC to continue supporting regional media development even in the current environment.

Samya Ayish, Teaching Fellow, MENA at Google News Lab, who led the virtual workshop, said: "New technologies such as artificial intelligence are increasingly playing a role in facilitating news gathering, production and distribution. The workshop, held in partnership with Dubai Press Club, aims to help journalists enhance their understanding of these technologies so that they can use them more effectively and to help facilitate their work."

The session focused on four modules: how journalists can use ML, how a machine learns, bias in machine learning, and the future of ML-powered journalism. Case studies helped journalists understand how ML has been used in the media industry, how ML bias occurs and how it can be avoided. Trainees were also given a step-by-step overview of the ML training process.

Continue reading here:
DPC - Google virtual workshop discusses the use of Machine Learning and AI technologies in the news industry - mediaoffice.ae

Informatica Acquires GreenBay Technologies to Advance AI and Machine Learning Capabilities – thepress.net

REDWOOD CITY, Calif., Aug. 18, 2020 /PRNewswire/ -- Informatica, the enterprise cloud data management leader, today announced it has acquired GreenBay Technologies Inc. to accelerate its innovation in AI and machine learning data management technology. The acquisition will strengthen the core capabilities of Informatica's AI-powered CLAIRE engine across its Intelligent Data Platform, empowering businesses to more easily identify, access, and derive insights from organizational data to make informed business decisions.

"We continue to invest and innovate in order to empower enterprises in the shift to the next phase of their digital transformations," said Amit Walia, CEO of Informatica. "GreenBay Technologies is instrumental in delivering on our vision of Data 4.0, by strengthening our ability to deliver AI and machine learning in a cloud-first, cloud-native environment. This acquisition gives us a competitive advantage that will further enable our customers to unleash the power of data to increase productivity with enhanced intelligence and automation."

Core to the GreenBay acquisition are three distinct and advanced capabilities in entity matching, schema matching, and metadata knowledge graphs that will be integrated across Informatica's product portfolio. These technologies will accelerate Informatica's roadmap across Master Data Management, Data Integration, Data Catalog, Data Quality, Data Governance, and Data Privacy.

GreenBay Technologies' AI and machine learning capabilities will be embedded in the CLAIRE engine for a more complete and accurate, 360-degree view and understanding of business, with innovative matching techniques of master data of customers, products, suppliers, and other domains. With the acquisition, GreenBay Technologies will accelerate Informatica's vision for self-integrating systems that automatically infer and link target schemas to source data, enhance capabilities to infer data lineage and relationships, auto-generate and apply data quality rules based on concept schema matching, and increase accuracy of identifying sensitive data across the enterprise data landscape.
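
To give a feel for what entity matching involves, the toy sketch below decides whether two customer records refer to the same real-world entity by comparing a few shared fields. The records, fields and threshold are hypothetical assumptions for illustration; this is not Informatica's or CLAIRE's actual matching logic, which the company describes as machine-learning-based.

    # Toy entity-matching sketch: do two records describe the same customer?
    # Records, fields, and the 0.8 threshold are illustrative assumptions.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Normalized string similarity between two field values."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match_score(rec_a: dict, rec_b: dict,
                    fields=("name", "address", "email")) -> float:
        """Average per-field similarity across the fields both records share."""
        scores = [similarity(rec_a[f], rec_b[f])
                  for f in fields if f in rec_a and f in rec_b]
        return sum(scores) / len(scores) if scores else 0.0

    a = {"name": "Jon A. Smith", "address": "12 High St, Leeds",
         "email": "j.smith@example.com"}
    b = {"name": "John Smith", "address": "12 High Street, Leeds",
         "email": "j.smith@example.com"}

    score = match_score(a, b)
    print(f"score={score:.2f}, same entity: {score > 0.8}")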

GreenBay Technologies was co-founded by Dr. AnHai Doan, University of Wisconsin Madison's Vilas Distinguished Achievement Professor, together with his Ph.D. students, Yash Govind and Derek Paulsen. Dr. Doan oversees multiple data management research projects at the University of Wisconsin's Department of Computer Science and is the co-author of "Principles of Data Integration," a leading textbook in the field, and was among the first to apply machine learning to data integration in 2001. Doan's pioneering work in the area of data integration has received multiple awards, including the prestigious ACM Doctoral Dissertation Award and the Alfred P. Sloan Research Fellowship. Dr. Doan and Informatica have a long history collaborating in the use of AI and machine learning in data management. In 2019, Informatica became the sole investor in GreenBay Technologies, which also has ties to the University of Wisconsin (UW) at Madison and the Wisconsin Alumni Research Foundation (WARF), one of the first and most successful technology transfer offices in the nation focused on advancing transformative discoveries to the marketplace.

"What started as a collaborative project with Informatica's R&D will now help thousands of Informatica customers better manage and utilize their data and solve complex problems at the pace of digital transformation," said Dr. Doan. "GreenBay Technologies will provide Informatica customers with AI and ML innovations for more complete 360 views of the business, self-integrating systems, and more automated data quality and governance tasks."

The GreenBay acquisition is an important part of Informatica's collaboration with academic and research institutions globally to further its vision of AI-powered data management, including, most recently, a partnership in Europe with the ADAPT Research Centre in Dublin, a world leader in Natural Language Processing (NLP).

About Informatica
Informatica is the only proven Enterprise Cloud Data Management leader that accelerates data-driven digital transformation. Informatica enables companies to fuel innovation, become more agile, and realize new growth opportunities, resulting in intelligent market disruptions. Over the last 25 years, Informatica has helped more than 9,000 customers unleash the power of data. For more information, call +1 650-385-5000 (1-800-653-3871 in the U.S.), or visit http://www.informatica.com. Connect with Informatica on LinkedIn, Twitter, and Facebook.

Informatica and CLAIRE are trademarks or registered trademarks of Informatica in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.

The information provided herein is subject to change without notice. In addition, the development, release, and timing of any product or functionality described today remain at the sole discretion of Informatica and should not be relied upon in making a purchasing decision, nor as a representation, warranty, or commitment to deliver specific products or functionality in the future.

See the original post here:
Informatica Acquires GreenBay Technologies to Advance AI and Machine Learning Capabilities - thepress.net

Machine Learning Is Cheaper But Worse Than Humans at Fund Analysis – Institutional Investor

Morningstar had a problem.

Or rather, its millions of users did: The star-rating system, which drives huge volumes of assets, is inherently backwards-looking. These make-or-break badges label how well (or poorly) a fund has performed, not how it will perform.

Morningstar's solution was analysts: humans who dig deep into the big and popular fund products, then assign them forward-looking ratings. For analyzing the lesser or niche products, Morningstar unleashed the algorithms.

But the humans still have an edge, academic researchers found - except in productivity.

"We find that the analyst report, which is usually 4 or 5 pages, provides very detailed information, and is better than a star rating, as it claims to be," said Si Cheng, an assistant finance professor at the Chinese University of Hong Kong, in an interview. She and her co-authors of a just-published study also found that the forward-looking algorithmic analysis doesn't do as much as an analyst rating. "If we look at very similar funds rated by human and machine, they're quite different even though you have two forward-looking ratings."

[II Deep Dive: AQR's Problem With Machine Learning: Cats Morph Into Dogs]

The most potent value in all of these Morningstar modes came from the tone of human-generated reports assessed using machine-driven textual analysis.

Tone is likely to come from soft information, such as what the analyst picks up from speaking to fund management and investors. That deeply human sense of enthusiasm or pessimism matters when it comes through in conflict with the actual rating, which the analysts and algorithms base on quantitative factors.
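
As a rough illustration of how a machine can extract tone from report text, the toy sketch below scores a passage against a small dictionary of positive and negative words. The word lists and the scoring rule are assumptions made for illustration; the study's actual textual-analysis method is not described here.

    # Toy tone-scoring sketch using a small word dictionary.
    # Word lists and the scoring rule are illustrative assumptions only.
    POSITIVE = {"strong", "disciplined", "consistent", "experienced", "outperformed"}
    NEGATIVE = {"concern", "turnover", "risk", "uncertain", "underperformed"}

    def tone_score(text: str) -> float:
        """(positive - negative) matches, normalized by total matches."""
        words = [w.strip(".,;:").lower() for w in text.split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        return (pos - neg) / total if total else 0.0

    report = ("Manager turnover is a concern and the outlook remains "
              "uncertain, despite a disciplined process.")
    print(tone_score(report))   # negative values suggest a pessimistic tone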

Most of Morningstar's users are retail investors, but only professionals are tapping into this human-quant arbitrage, discovered Cheng and her Peking University co-authors Ruichang Lu and Xiajun Zhang.

"We do find that only institutional investors are taking advantage of analysts' reports," she told Institutional Investor on Tuesday. "They do withdraw from a fund if the fund gets a gold rating but a pessimistic tone."

Cheng, her co-authors, and other academic researchers working in the same vein highlight cost as one major advantage of algorithmic analysis over the old-fashioned kind. "After initial set-up, they automatically generate all of the analysis at a frequency that a human cannot replicate," Cheng said.

As Anne Tucker, director of the legal analytics and innovation initiative at Georgia State University, cogently put it, machine learning is "leveraging components of human judgement at scale. It's not a replacement; it's a tool for increasing the scale and the speed. On the legal side, almost all of our data is locked in text: memos, regulatory filings, orders, court decisions, and the like."

Tucker has teamed up with GSU analytics professor Yusen Xia and associate law professor Susan Navarro Smelcer to gather the text of fund filings and apply machine-learning programs to them, searching for patterns and indicators of future risk and performance. The project is underway and is detailed in a recent working paper.

"We have compiled all of the investment strategy and risk sections from 2010 onwards, and are using text mining, machine learning, and a suite of other computational tools to understand the content, study compliance, and then to aggregate texts in order to model emerging risks," Tucker told II. "If we listen to the most sophisticated investors collectively, what can we learn? If we would have had these tools before 2008, would we have been able to pick up tremors?"

Maybe - but they wouldn't have picked up the Covid-19 crisis, early findings suggest.

"There were essentially no pandemic-related risk disclosures before this happened," Tucker said.

See the rest here:
Machine Learning Is Cheaper But Worse Than Humans at Fund Analysis - Institutional Investor

Effects of the Alice Preemption Test on Machine Learning Algorithms – IPWatchdog.com

According to the approach embraced by McRO and BASCOM, while machine learning algorithms that bring a slight improvement can pass the eligibility test, algorithms paving the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives.

In the past decade or so, humanity has gone through drastic changes as Artificial Intelligence (AI) technologies such as recommendation systems and voice assistants have seeped into every facet of our lives. While the number of patent applications for AI inventions has skyrocketed, almost a third of these applications are rejected by the U.S. Patent and Trademark Office (USPTO), and the majority of these rejections are due to the claimed invention being ineligible subject matter.

The inventive concept may be attributed to different components of machine learning technologies, such as using a new algorithm, feeding more data, or using a new hardware component. However, this article will exclusively focus on the inventions achieved by Machine Learning (M.L.) algorithms and the effect of the preemption test adopted by U.S. courts on the patent-eligibility of such algorithms.

Since the Alice decision, the U.S. courts have adopted different views related to the role of the preemption test in eligibility analysis. While some courts have ruled that lack of preemption of abstract ideas does not make an invention patent-eligible [Ariosa Diagnostics Inc. v. Sequenom Inc.], others have not referred to it at all in their patent eligibility analysis. [Enfish LLC v. Microsoft Corp., 822 F.3d 1327]

Contrary to those examples, recent cases from Federal Courts have used the preemption test as the primary guidance to decide patent eligibility.

In McRO, the Federal Circuit ruled that the algorithms in the patent application prevent pre-emption of all processes for achieving automated lip-synchronization of 3-D characters. The court based this conclusion on the evidence of availability of an alternative set of rules to achieve the automation process other than the patented method. It held that the patent was directed to a specific structure to automate the synchronization and did not preempt the use of all of the rules for this method given that different sets of rules to achieve the same automated synchronization could be implemented by others.

Similarly, the court in BASCOM ruled that the claims were patent eligible because they recited "a specific, discrete implementation of the abstract idea of filtering content" and did not preempt all possible ways to implement the image-filtering technology.

The analysis of the McRO and BASCOM cases reveals two important principles for the preemption analysis:

Machine learning can be defined as a mechanism which searches for patterns and which feeds intelligence into a machine so that it can learn from its own experience without explicit programming. Although the common belief is that data is the most important component in machine learning technologies, machine learning algorithms are equally important to the proper functioning of these technologies, and their importance cannot be overstated.

Therefore, inventive concepts enabled by new algorithms can be vital to the effective functioning of machine learning systems - enabling new capabilities and making systems faster or more energy efficient are examples of this. These inventions are likely to be the subject of patent applications. However, the preemption test adopted by courts in the above-mentioned cases may lead to certain types of machine learning algorithms being held ineligible subject matter. Below are some possible scenarios.

The first situation relates to new capabilities enabled by M.L. algorithms. When a new machine learning algorithm adds a new capability or enables the implementation of a process, such as image recognition, for the first time, preemption concerns will likely arise. If the patented algorithm is indispensable for the implementation of that technology, it may be held ineligible based on the McRO case. This is because there are no other alternative means to use this technology and others would be prevented from using this basic tool for further development.

For example, an M.L. algorithm which enabled the lane detection capability in driverless cars may be a standard, must-use algorithm in the implementation of driverless cars that the court may deem patent ineligible for having preemptive effects. This algorithm clearly equips the computer vision technology with a new capability, namely, the capability to detect boundaries of road lanes. Implementation of this new feature on driverless cars would not pass the Alice test because a car is a generic tool, like a computer, and even limiting the claim to a specific application may not be sufficient because it will preempt all uses in this field.

Should the guidance of McRO and BASCOM be followed, algorithms that add new capabilities and features may be excluded from patent protection simply because there are no other available alternatives to these algorithms to implement the new capabilities. The use of these algorithms may be so indispensable for the implementation of that technology that they are deemed to create preemptive effects.

Secondly, M.L. algorithms which are revolutionary may also face eligibility challenges.

The history of how deep neural networks developed demonstrates how highly innovative algorithms may be stripped of patent protection because of the preemption test embraced by McRO and subsequent case law.

Deep Belief Networks (DBNs) are a type of Artificial Neural Network (ANN). ANNs were traditionally trained with a back-propagation algorithm, which adjusts weights by propagating the output error backwards through the network. However, the problem was that as depth was increased by adding more layers, the propagated error vanished to zero, which severely affected overall performance and resulted in lower accuracy.
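
The vanishing-error problem can be illustrated numerically: with sigmoid units, each layer multiplies the backpropagated gradient by a weight times a sigmoid derivative (at most 0.25), so the gradient shrinks roughly geometrically with depth. The toy sketch below, with illustrative weights and depth, is an assumption-laden demonstration rather than a reproduction of the historical experiments.

    # Toy illustration of the vanishing gradient in a deep sigmoid network.
    # Weights, depth, and activations are illustrative assumptions.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    grad = 1.0            # gradient of the loss at the output layer
    pre_activation = 0.5  # a typical pre-activation value

    for depth in range(1, 31):
        w = rng.normal(scale=0.5)                 # small random weight
        s = sigmoid(pre_activation)
        grad *= w * s * (1.0 - s)                 # chain rule through one layer
        if depth % 10 == 0:
            print(f"gradient magnitude {depth} layers back: {abs(grad):.2e}")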

From the early 2000s, there has been a resurgence in the field of ANNs owing to two major developments: increased processing power and more efficient training algorithms, which made training deep architectures feasible. The ground-breaking algorithm which enabled the further development of ANNs in general and DBNs in particular was Hinton's greedy training algorithm.

Thanks to this new algorithm, DBNs have been applied to solve a variety of problems that had been roadblocks to the use of new technologies, such as image processing, natural language processing, automatic speech recognition, and feature extraction and reduction.

As can be seen, Hinton's fast learning algorithm revolutionized the field of machine learning because it made learning easier and, as a result, technologies such as image processing and speech recognition have gone mainstream.

If patented and challenged in court, Hinton's algorithm would likely be invalidated in light of previous case law. In McRO, the court reasoned that the algorithm at issue should not be invalidated because the use of a set of rules within the algorithm is not a must and other methods can be developed and used. Hinton's algorithm, by contrast, would inevitably preempt some AI developers from engaging in further development of DBN technologies, because it is a base algorithm that made DBNs feasible to implement, so it may be considered a must. Hinton's algorithm enabled the implementation of image recognition technologies, and some may argue, based on McRO and Enfish, that a patent on Hinton's algorithm would be preemptive because it is impossible to implement image recognition technologies without it.

Even if an algorithm is a must-use for a technology, there is no reason to exclude it from patent protection. Patent law inevitably forecloses certain areas from further development by granting exclusive rights through patents. All patents foreclose competitors to some extent as a natural consequence of exclusive rights.

As stated in the Mayo judgment, exclusive rights provided by patents "can impede the flow of information that might permit, indeed spur, invention, by, for example, raising the price of using the patented ideas once created, requiring potential users to conduct costly and time-consuming searches of existing patents and pending patent applications, and requiring the negotiation of complex licensing arrangements."

The exclusive right granted by a patent is only one side of the implicit agreement between society and the inventor. In exchange for the benefit of exclusivity, inventors are required to disclose their invention to the public so that this knowledge becomes public, available for use in further research and for making new inventions building upon the previous one.

If inventors turn to trade secrets to protect their inventions due to the hostile approach of patent law to algorithmic inventions, the knowledge base in this field will narrow, making it harder to build upon previous technology. This may lead to a slowdown, and even the possible death, of innovation in this industry.

The fact that an algorithm is a must-use should not lead to the conclusion that it cannot be patented. Patent rights may even be granted for processes which have primary and even sole utility in research. A microscope, for instance, is a basic tool for scientific work, but surely no one would assert that a new type of microscope lies beyond the scope of the patent system. Even if such a microscope is widely used and indispensable, it can still be given patent protection.

According to the approach embraced by McRO and BASCOM, while M.L. algorithms that bring a slight improvement, such as higher accuracy or higher speed, can pass the eligibility test, algorithms paving the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives to implement that revolutionary technology.

Considering that the goal of most AI inventions is to equip computers with new capabilities or bring qualitative improvements to abilities such as seeing, hearing, or even making informed judgments without being fed complete information, most AI inventions would have a higher likelihood of being held patent ineligible. Applying this preemption test to M.L. algorithms would put many such algorithms outside of patent protection.

Thus, an M.L. algorithm which increases accuracy by 1% may be eligible, while a ground-breaking M.L. algorithm which is a must-use because it covers all uses in that field may be excluded from patent protection. This would result in rewarding slight improvements with a patent but disregarding highly innovative and ground-breaking M.L. algorithms. Such a consequence is undesirable for the patent system.

This also may result in deterring the AI industry from bringing innovation in fundamental areas. As an undesired consequence, innovation efforts may shift to small improvements instead of innovations solving more complex problems.


More:
Effects of the Alice Preemption Test on Machine Learning Algorithms - IPWatchdog.com