Artificial Intelligence Is Poised to Take More Than Unskilled Jobs – CMSWire


Recently, Microsoft announced that it was terminating dozens of journalists and editorial workers at its Microsoft News and MSN organizations. Instead, the company said, it will rely on artificial intelligence to curate and edit news and content that is presented on MSN.com, inside Microsoft's Edge browser, and in the company's Microsoft News apps.

Explaining the decision, Microsoft issued a statement to The Verge: "Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic." The decision will result in the loss of 50 jobs in the US and a further 27 in the UK. Not a huge number, you might think, but Microsoft has been moving steadily in the direction of artificial intelligence and it would not be surprising to see jobs in other areas disappear too.

However, the initial results have not been encouraging. Many of the affected workers are part of Microsoft's SANE (Search, Ads, News, Edge) division, and are contracted as human editors to help pick stories.

The Guardian newspaper in the UK reported soon after that the newly instated robot editors of MSN.com selected a story about Little Mix singer Jade Thirlwall's experience with racism to appear on the homepage, but used a picture of Thirlwall's bandmate Leigh-Anne Pinnock to illustrate it. Thirlwall herself pointed out the mistake in an Instagram post that read: "@MSN If you're going to copy and paste articles from other accurate media outlets, you might want to make sure you're using an image of the correct mixed race member of the group." She added, "It offends me that you couldn't differentiate the two women of colour out of four members of a group DO BETTER!"

Not an auspicious start, but something that is likely to happen again with other AI users and other AI-managed content. It is also likely to impact other industries as AI slowly makes its way across the enterprise. More to the point, it is also likely to happen in industries where mistakes will have far-reaching and potentially disastrous impacts. Until now it has been argued that the jobs most at risk from AI are in unskilled, manual labour. However, the MSN decision indicates that this is not the case. So, what other skilled jobs are likely to be impacted?

Irwin Kirsch is director of the Princeton-based Center for Research on Human Capital and Education. He says that there are two issues to be considered when looking at the future of work and the use of AI.

Although many jobs will be replaced by AI, many others will need to evolve to become complementary to new technologies. Production workers, an occupation hit hard by the impacts of technological change, provide a powerful case in point. Firms regularly must hire new workers to replace incumbents who retire, leave the labor force for personal or health reasons, or change careers.

The Bureau of Labor Statistics projects that employment in production will fall by 406,000 workers between 2016 and 2026 due to pressure from automation and trade. Yet, over this same period, 1.97 million currently employed U.S. production workers will attain or exceed retirement age. In other words, U.S. firms will need to hire roughly 1.52 million new production workers, net of the projected decline of 406,000 production jobs, simply to compensate for these retirements. And, over those years, there is little doubt that advancements will continue to shape and morph the roles and levels of skill needed for success in the emerging jobs.

Another way of considering the issue of AI is not just to explore the jobs that will be lost, but also to understand the nature of the jobs that will remain. Can we successfully fill these emerging jobs?

Looking at degree attainment data, it might seem that we are set. But as with concerns about the impact of AI on skilled jobs, there is more to this story. The percentage of the U.S. population 25 and over who completed high school and college has risen dramatically over the past several decades. Yet data from assessments of adult literacy and numeracy skills indicate that skill deficiencies are large, with half of Americans aged 16 to 34 lacking the levels of literacy many experts deem critical for success in the labor market. "While many acknowledge that AI is increasingly important and its impacts increasingly stark, we must also reckon with the fact that too few of America's young adults are well positioned for the emerging jobs of the 21st-century economy," he said. "To address this, we must pave the way for them to develop the skills they need today and into the future."

So, what skilled jobs are likely to disappear, or be subject to substantial change? There are many, but when we asked a few people across different industries, six were identified:

Roy Cohen is a career coach and author of The Wall Street Professional's Survival Guide. Over the next couple of years, he said, technology will likely eliminate thousands of Wall Street jobs that are currently performed by skilled humans. FinTech, or the financial technology industry, is racing to create new automated trading systems that outperform and outsmart traditional traders. These folks are likely to become obsolete unless they retool with strong programming skills. One example he cites is IBM's Watson, a supercomputer that consistently beats the market.

Pete Sosnowski is VP People and co-founder at Poland-based Zety, which publishes high-quality guides and articles for job seekers. He sees two skilled jobs changing drastically or disappearing altogether. They include:

Lawyers very often must dig through and analyze thousands of pages of case files. That is why AI and big data technology is being developed to replace lower-level law firm employees in performing legal research. There are also algorithms that can already better predict court verdicts, something we often hire the best lawyers and pay them top dollar for. On the bright side, artificial intelligence in law firms can mean a reduction in the costs of handling cases.

With a variety of AI tools available, it seems the need for architects may decrease. We are now able, using AI technology and virtual-reality tools, to design our own houses, apartments, and offices. Every major furniture store will have a program available for its customers so that they can design their own interiors. The process is becoming more and more automated, and the human factor is needed less and less.

For Grant Aldrich, CEO of OnlineDegree, there are also two professions that will be hard hit by AI. Bookkeeping clerks will likely become redundant because most bookkeeping is automated. Tools like FreshBooks, QuickBooks, and Microsoft Office already offer bookkeeping functions at a more affordable price than a person's salary and benefits. Bookkeeping jobs are already expected to decline by 8% by 2024.

Proofreaders are likely to be replaced by AI, despite the skill needed to be a great editor. An editor needs to have a good command of the English language, but many websites and companies are already using grammar check software like Hemingway App and Grammarly. There are plenty of technologies that make it easy to self-check your writing.

Finally, Ian Kelly is VP of operations at Denver-based NuLeaf Naturals. He says it is likely that general practitioners will also be replaced in the future. Although this seems impossible, he says you need to consider what a doctor does: a doctor sees a patient, the patient tells them their symptoms, and the GP gives a diagnosis. The issue lies in human error, where misdiagnosis isn't just a frivolous mistake, it can be a fatal one. It's estimated that 40,000 to 80,000 people die annually from complications from misdiagnosis, while women and minorities are 20 to 30 percent more likely to be misdiagnosed.

See the rest here:
Artificial Intelligence Is Poised to Take More Than Unskilled Jobs - CMSWire

Artificial Intelligence: how man and machine are progressively working as one – Euronews

Futuris looks at how the relationship between man and machine in modern manufacturing is evolving through the adoption of Artificial Intelligence and automation.

No one knows the job better than the person who is doing it - that is the idea behind a package of novel ideas designed to make the most of factory workers' knowledge and experience.

In Seinäjoki, Finland, metal company Prima Power is trialing two of the EU's Factory2Fit project solutions.

This €4m study explores new ways for people and machines to work together.

Dr Eija Kaasinen from technical research centre VTT says the aim is to put people at the centre and to enable them to participate in designing their own work environment.

Globally, automation and robotics are transforming manufacturing as part of the fourth industrial revolution. But this doesn't mean the human element is removed from work.

"Of course there are manual elements - but the work is changing towards knowledge work," explains Kaasinen. "It's more like working with the virtual counterparts of the physical things in the physical world."

A Pre-training Solution, for example, uses 3D models and cloud-based tutorials, while a so-called Knowledge Sharing solution makes the most of all the experience a worker gathers while running complex machinery, especially when something goes wrong, as Prima Power's Mariia Kreposna explains.

"So here the operator can open the additional dialogue box to get extra information about the situation. This is done by sharing the additional text, description, pictures or videos so the idea is in the future whenever the alarm with the same code happens, the operator will be able to learn not only the standard remedies but also other possible reasons and how to prevent this alarm happening in the future."

At the Elekmerk factory in Keuruu, Finland, workers have tested the Worker Feedback Dashboard, a biometric monitoring tool (like a Fitbit) paired with an app.

It charts someone's work achievements and their well-being, such as sleep and steps taken per day, and shows how the two can be linked.

"When we interviewed factory workers during the project, we heard that they often had negative feedback when something is not going well," says VTT's Päivi Heikkilä. "So we wanted to develop an application that would also give you positive feedback on the fluency of your work and your accomplishments, so feedback on the things that are going well."

Ville Vuarola was one of five workers who wore the wristband for the three-month pilot scheme. He was happy to take part and says he was surprised at how a good night's sleep had a positive impact on his job.

"I was surprised to see how sleeping well influenced my work performance. Together with leisure activities, sleep was really important for my general performance at work," he says.

Of all the Factory2Fit solutions, this was the one that proved the most controversial with fears expressed over the possible misuse of workers' data.

But Päivi Heikkilä says these concerns are unfounded.

"When we are developing these kind of solutions we always consider the ethics," she says, "and I want to stress and highlight that this should always be voluntary."

The data gathered is kept on a separate server, not in the factory system.

Researchers expect at least some of their Factory2Fit solutions to be commercially available by the end of next year.


Artificial intelligence enhances blurry faces into ‘super-resolution images’ – The Independent

Researchers have figured out a way to transform a few dozen pixels into a high resolution image of a face using artificial intelligence.

A team from Duke University in the US created an algorithm capable of "imagining" realistic-looking faces from blurry, unrecognisable pictures of people, eight times more effectively than previous methods.

"Never have super-resolution images been created at this resolution before with this much detail," said Duke computer scientist Cynthia Rudin, who led the research.


The images generated by the AI do not resemble real people; instead, they are faces that look plausibly real. The system therefore cannot be used to identify people from low resolution images captured by security cameras.

The PULSE (Photo Upsampling via Latent Space Exploration) system developed by Dr Rudin and her team creates images with 64 times the resolution of the original blurred picture.

The PULSE algorithm is able to achieve such high levels of resolution by working in reverse: it searches for high resolution images that look similar to the low resolution input when downscaled.

The images generated by enhancing the pixels do not represent real people (Duke University)

Through this process, facial features like eyelashes, teeth and wrinkles that were impossible to see in the low resolution image become recognisable and detailed.

"Instead of starting with the low resolution image and slowly adding detail, PULSE traverses the high resolution natural image manifold, searching for images that downscale to the original low resolution image," states a paper detailing the research.

The AI algorithm is able to enhance a few dozen pixels into a high-resolution picture of a face (Duke University)

"Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible."
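The downscale-and-compare search described above is easy to illustrate in miniature. The sketch below is a loose, hypothetical analogy, not the Duke team's code: it uses random pixel candidates and brute-force search, whereas PULSE walks the latent space of a face generator. Only the objective is the same: find a high-resolution image whose downscaled version reproduces the low-resolution input.

```python
# Toy "downscaling-consistent" search (illustrative only).
import random

def downscale(hr, factor=2):
    """Average non-overlapping factor x factor blocks of a 2D image."""
    m = len(hr) // factor
    return [[sum(hr[i * factor + di][j * factor + dj]
                 for di in range(factor) for dj in range(factor)) / factor ** 2
             for j in range(m)] for i in range(m)]

def loss(a, b):
    """Sum of squared differences between two 2D images."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def pulse_like_search(lr_target, size=4, iters=2000, seed=0):
    """Search high-res candidates whose downscaled form matches lr_target.
    (PULSE does this with gradient steps in a GAN's latent space, not
    random sampling.)"""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(iters):
        cand = [[rng.random() for _ in range(size)] for _ in range(size)]
        err = loss(downscale(cand), lr_target)
        if err < best_loss:
            best, best_loss = cand, err
    return best, best_loss

lr = [[0.2, 0.8], [0.8, 0.2]]      # 2x2 low-res "image"
hr, err = pulse_like_search(lr)
print(len(hr), err)                # 4x4 candidate with a small residual
```

The real system swaps the random sampling for exploration of a generative face model, which is what makes the recovered details look like plausible eyelashes, teeth and wrinkles rather than noise.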

The system could theoretically be used on low resolution images of almost anything, ranging from medicine and microscopy, to astronomy and satellite imagery.

This means noisy, poor-quality images of distant planets and solar systems could be imagined in high resolution.

The research will be presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR) this week.


Scality Invests to Advance AI and Machine Learning with Inria Research Institute – HPCwire

SAN FRANCISCO, Calif., June 16, 2020 - Scality, provider of software solutions for global data orchestration and distributed file and object storage, announced an investment in Fondation Inria, the foundation of the well-known French national research institute for digital sciences, Inria. Bringing both financial and collaborative backing to the institute, Scality will help support multi-disciplinary research and innovation initiatives, including mind-body health, precision agriculture, neurodegenerative diagnostics, privacy protection and more.

"To be at the forefront of technological advancements and research has been a priority for Scality since our inception, and we currently hold 10 patents. It only made sense for us to deepen our relationship with one of the most advanced research institutes on AI and algorithms in the world," said Jérôme Lecat, Scality CEO and co-founder. "We believe that technology and digital sciences can provide answers to the issues facing our fractured global society. Inria research teams work on incredible projects that actually change lives with personalized medicine, precision agriculture, sustainable development, smart cities and mobility, and security and privacy protection."

Scality has been close to Inria for many years and is involved with several collaborative research projects that are developing new concepts for distributed and scalable storage with Inria Distinguished Research Scholar Marc Shapiro. One such project is RainbowFS, which investigates an approach to distributed storage that ensures distributed consistency semantics tailored to applications, in order to develop smarter and massively scalable systems.

"We are delighted to be working with Scality. This collaboration is bringing two major players in French technology closer in order to further research and innovation on a global scale," said Jean-Baptiste Hennequin, Fondation Inria managing director. "Our values align very closely with Scality's: innovative research, social responsibility and open source. For example, our sheltered foundations are promoting the distribution of open source software for durable development by bringing together their user communities within consortia, in recognition of how software embodies humanity's technical and scientific knowledge."

Read more about some of the exciting projects carried out by Inria research teams:


After Effects and Premiere Pro gain more ‘magic’ machine-learning-based features – Digital Arts

By Neil Bennett | June 16, 2020

Roto Brush 2 (above) makes masking easier in After Effects, while Premiere Rush and Pro will automatically reframe and detect scenes in videos.

Adobe has announced new features coming to its video post-production apps, on the date when it was supposed to be holding its Adobe Max Europe event in Lisbon, which was cancelled due to COVID-19.

These aren't available yet, unlike the new updates to Photoshop, Illustrator and InDesign, but are destined for future releases. We would usually expect these to coincide with the IBC conference in Amsterdam in September or Adobe Max in October, though both of these are virtual events this year.

The new tools are based on Adobe's Sensei machine-learning technology. Premiere Pro will gain the ability to identify cuts in a video and create timelines with cuts or markers from them, ideal if you've deleted a project and only have the final output, or are working with archive material.

A second-generation version of After Effects' Roto Brush enables you to automatically extract subjects from their background. You paint over the subject in a reference frame and the tech tracks the person or object through a scene to extract them.

Premiere Rush will be gaining Premiere Pro's Auto Reframe feature, which identifies key areas of video and frames around them when changing aspect ratio, for example when creating a square version of a video for Instagram or Facebook.

Also migrating to Rush from Pro will be an Effects panel, transitions and Pan and Zoom.



AI: The complex solution to simplify health care – Brookings Institution

Health care languishes in data dissonance. A fundamental imbalance between collection and use persists across systems and geopolitical boundaries. Data collection has been an all-consuming effort with good intent but insufficient results in turning data into action. After a strong decade, the sentiment is that the data is inconsistent, messy, and untrustworthy. The most advanced health systems in the world remain confused by what they've amassed: reams of data without a clear path toward impact. Artificial intelligence (AI) can see through the murk, clear away the noise, and find meaning in existing data beyond the capacity of any human(s) or other technology.

AI is a term for technologies or machines that have the capability to adapt and learn. This is the fundamental meaning of being data-driven: to be able to take measure of available data and perform an action or change one's mind. Machine learning is at the heart of AI: teaching machines to learn from data, rather than requiring hard-coded rules (as machines of the past did).
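That distinction between hard-coded rules and learning from data can be shown in a minimal sketch. The numbers below are invented for illustration, not drawn from any health system: a fixed rule never changes, while the learned threshold is whatever the data says it should be.

```python
# Hard-coded rule vs. a threshold "learned" from data (illustrative only).
def rule_based(x):
    return x > 100.0               # fixed rule, never updated

def fit_threshold(normal_readings):
    # Learn a cutoff from data: mean plus one standard deviation.
    n = len(normal_readings)
    mean = sum(normal_readings) / n
    var = sum((r - mean) ** 2 for r in normal_readings) / n
    return mean + var ** 0.5

data = [90.0, 95.0, 100.0, 105.0, 110.0]   # hypothetical "normal" readings
cutoff = fit_threshold(data)               # about 107.07 for this sample
print(rule_based(104.0), 104.0 > cutoff)   # prints: True False
```

Give `fit_threshold` different data and the cutoff moves with it; the hard-coded rule is stuck wherever its author left it.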

No domain is more deserving of meaningful AI than health care. Health care is arguably the most complex industry on earth, operating at the nexus of evolving science, business, politics, and mercurial human behavior. These influences push and pull in perpetual contradiction.

Health care, specifically psychology, is the mother of machine learning. In 1949, Dr. Donald Hebb created a model of brain cell interactions, or synaptic plasticity, that forms the ancestral architecture of the artificial neural networks that pervade AI today. Math to explain human behavior became math to mimic and transcend human intellect. AI is now at the precipice of a return to the health care domain.
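Hebb's insight, often summarized as "cells that fire together wire together," can be stated in a few lines. The sketch below is an illustrative toy with an invented learning rate, not Hebb's original formulation: a connection weight grows in proportion to correlated activity of the two units it joins.

```python
# Toy Hebbian update: weight change proportional to the product of
# pre-synaptic (x) and post-synaptic (y) activity.
def hebbian_update(w, x, y, eta=0.1):
    """Return the strengthened weight after one co-activation."""
    return w + eta * x * y

w = 0.0
for _ in range(5):                  # repeated co-activation of both units
    w = hebbian_update(w, x=1.0, y=1.0)
print(w)                            # approximately 0.5: the synapse strengthened
```

If either unit is silent (x or y is 0), the weight does not change, which is exactly the correlational character that later artificial neural networks generalized.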

To achieve impact at scale, machine learning must be deployed in both the most and least advanced health systems in the world. Any decent technology should remain resilient outside the walls of academia and the pristine data environments of tech giants. AI can learn from many dimensions of data (photographs, natural language, tabular data, satellite imagery) and can adapt, learning from the data that's available. The ability to adapt is what defines AI. AI at its best is designed to solve complex problems, not wardrobe preferences. Now is the time to bring AI to health care.

COVID-19 is the greatest global crisis of our time: an immediate health challenge and a challenge of yet unknown duration on the economic and psychological well-being of our society. The lack of data-driven decision-making and the absence of adaptive and predictive technology have prolonged and exacerbated the toll of COVID-19. It will be the adoption of these technologies that helps us to rebuild health and society. AI has already forged new solutions for the COVID-19 response and the accelerated evolution of health care. Machine learning models from MIT for transmission rates have generated impressive precision, in some cases reducing error rates by 70 percent. Researchers at Mount Sinai in New York City have demonstrated the ability to reduce testing time from two days to near instant by combining AI models with chest computed tomography (CT), clinical symptoms, exposure history, and laboratory testing, reducing false negatives. AI models, unlike test kits, can travel instantly to new users, are not limited in production, and do not require additional training and complementary equipment.

Adoption of AI must be done in concert with existing systems and solutions. Epidemiological models in concert with AI technology adapt and learn in real time, integrating new data to help explain ancillary elements of health outcomes. However, collaboration between epidemiology and machine learning has been limited. The prominent epidemiological models are not integrating dynamic machine learning. Without machine learning, epidemiological models are updated weekly, losing precious time and rendering wildly inaccurate predictions that have been widely criticized. Human bias is writ large in these models: variable importance is determined by experts rather than learned and derived from the data.

AI models can derive implicit and explicit features from available data to increase the precision and adaptability of transmission predictions. Organizations like Metabiota have mapped thousands of pandemics to generate a model for risk. Existing electronic information systems (EIS) hold valuable historical health data when they are available; both pandemic models and EIS are excellent sources for AI engines targeted at optimizing pandemic response at scale.

Optimization, in the sense of tuning a health system to produce a maximum value (life expectancy, for example) or a minimum value (cost of care), is the end goal of AI for health. By looking into the future and predicting demand, constraints, and behavior, AI can buy time: time to prepare and ensure that resources are deployed to maximize the impact of every unit, whether financial, human, or commodity. Most models look backwards, like driving a car by only looking at the rearview mirror, yet they are asked to make decisions for the future. It's Sisyphean to ask legacy analytics to prepare for tomorrow based on what is often a distant (months, weeks, or days at best) past of linear data inputs. Optimization through machine learning and AI technologies brings prescience to the data-driven decisions and actions required for impact. Machine-learning-optimized laboratory testing at MIT has accelerated the discovery of new antibiotics previously considered unachievable due to the significant time and financial investment required.
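A toy sketch can make this kind of forward-looking optimization concrete. The cost model, staffing units, and all numbers below are invented assumptions, not drawn from any real health system: given a demand forecast, pick the resource level that minimizes total cost.

```python
# Illustrative-only optimization: staff to a predicted demand at minimum cost.
def cost(staff, predicted_demand):
    """Hypothetical cost model: wages plus a penalty for unmet care-hours."""
    unmet = max(0, predicted_demand - staff * 8)   # each worker covers 8 hours
    return staff * 200 + unmet * 60                # wages + overtime penalty

predicted_demand = 100                             # forecast care-hours
best = min(range(1, 30), key=lambda s: cost(s, predicted_demand))
print(best, cost(best, predicted_demand))          # prints: 13 2600
```

Even in this toy, the value comes from the forecast: the allocation is chosen against predicted demand rather than last week's tally, which is the "buying time" the paragraph above describes.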

At the health system level, action is being accelerated through direct engagement with those on the front lines. Human-in-the-loop (HIL) machine learning (ML) is the process of receiving data-rich insights from people, analyzing them in real time, and sharing recommendations back. HIL ML is the science of teaching machines to learn directly from human input. In Mozambique, and slated to expand to Sierra Leone, macro-eyes technology is learning directly from front-line health workers, the foremost experts on the conditions for care in the communities they serve. This becomes a virtuous cycle of high-value data, timely insights, and accelerated engagement at the point of care. Facility-level precision from HIL ML in Sierra Leone will complement AI optimization engines being deployed to probabilistically estimate the availability of essential resources at facilities across the country, account for new resource constraints, and recommend the distribution of resources.

COVID-19 has highlighted the need for a rapid connection between data analytics and the front lines of care. That connection still does not exist at scale. The result: authorities must decipher a myriad of models estimating COVID-19-related transmissions and deaths in the near past, and estimations for the future that don't build knowledge or data from the ground up. This fundamental disconnect has hindered health care for decades: those who deliver the care have the least voice in how care is delivered. It can be resolved with minimal disruption by using HIL ML to engage an educated and impassioned community of health workers.

AI in health has been successful but far too limited. The inability to trust what we don't fully understand, misrepresentation of AI expertise by early participants, and the financial fortitude of the global funding mechanisms remain barriers to adoption. AI can, and will, exponentially improve the delivery of care around the world. The data and the data infrastructure are ready, and the time for bold investment is now. Investment must move away from pilots with insufficient horizon and commitment. AI at scale, like bold innovations of the past, will only be possible with a committed corpus of financiers, policymakers, and implementing partners dedicating resources to AI experts solving problems at the foundations of health.

But we must proceed with caution. The world is replete with AI solutions and experts purporting to save the planet. Be critical: there is very little real AI talent, and even fewer teams have the chops to deploy AI in the real world. The AI scientists of the future will not look like those of the recent past. The software engineers turned AI experts who brought AI to the digital world in Silicon Valley, and the academics building models in protected vaults, will be usurped by adaptive, scrappy, problem-solving engineers using AI to make change in the communities they care about: deploying meaningful solutions to complex problems in the physical world. What is more meaningful than health?


AI Machine Learning Market: Competitive and Regional Market Analysis till 2030 – Cole of Duty

Prophecy Market Insights' AI Machine Learning market research report focuses on the market structure and the various factors affecting the growth of the market. The research study encompasses an evaluation of the market, including growth rate, current scenario, and volume inflation prospects, based on DROT and Porter's Five Forces analyses. The market study sheds light on the various factors that are projected to impact the overall market dynamics of the AI Machine Learning market over the forecast period (2019-2029).

The data and information in the market report are taken from various sources such as websites, annual reports of the companies, and journals, and were validated by industry experts. The facts and data are represented in the AI Machine Learning report using diagrams, graphs, pie charts, and other clear representations to enhance the visual presentation and ease understanding of the facts mentioned in the report.

Get Sample Copy of This Report @ https://www.prophecymarketinsights.com/market_insight/Insight/request-sample/3249

The AI Machine Learning research study contains 100+ market data tables, pie charts, graphs and figures spread through its pages, along with easy-to-understand detailed analysis. The predictions in the market report have been derived using proven research techniques, assumptions and methodologies. The AI Machine Learning market report states the overview and historical data along with the size, share, growth, demand, and revenue of the global industry.

All the key players mentioned in the AI Machine Learning market report are analyzed thoroughly based on R&D developments, distribution channels, industrial penetration, manufacturing processes, and revenue. The report also examines legal policies, competitive dynamics between leading and emerging players, and upcoming market trends.

AI Machine Learning Market Key Companies:

Segmentation Overview:

Global AI machine learning market by type:

Global AI machine learning market by application:

Global AI machine learning market by region:

Apart from key player analysis informing business-related decisions that are usually backed by prevalent market conditions, we also perform substantial analysis on market segmentation. The report provides an in-depth analysis of the AI Machine Learning market segments. It highlights the latest trending segment and major innovations in the market. In addition, it states the impact of these segments on the growth of the market.

Request Discount @ https://www.prophecymarketinsights.com/market_insight/Insight/request-discount/3249

Regional Overview:

The survey report includes a vast investigation of the geographical landscape of the AI Machine Learning market, which is clearly arranged into localities. The report provides an analysis of regional market players operating in the specific market and outcomes related to the target market for more than 20 countries.

Australia, New Zealand, Rest of Asia-Pacific

Key Questions Answered in Report:

Stakeholders Benefit:

About us:

Prophecy Market Insights specializes in market research, analytics, marketing/business strategy, and solutions, offering strategic and tactical support to clients for making well-informed business decisions and for identifying and achieving high-value opportunities in the target business area. We also help our clients address business challenges and provide the best possible solutions to overcome them and transform their business.

Contact Us:

Mr. Alex (Sales Manager)

Prophecy Market Insights

Phone: +1 860 531 2701

Email: [emailprotected]


CVPR 2020 Convenes Thousands from the Global AI, Machine Learning and Computer Vision Community in Virtual Event Beginning Sunday – thepress.net

LOS ALAMITOS, Calif., June 12, 2020 /PRNewswire/ --The Computer Vision and Pattern Recognition (CVPR) Conference, one of the largest events exploring artificial intelligence, machine learning, computer vision, deep learning, and more, will take place 14-19 June as a fully virtual event. Over the course of six days, the event will feature 45 sessions delivered by 1467 leading authors, academics, and experts to more than 6500 attendees, who have already registered for the event.

"The excitement, enthusiasm, and support for CVPR from the global community has never been more apparent," said Professor of Computer Science at Cornell University and Co-Chair of the CVPR 2020 Committee Ramin Zabih. "With large attendance, state of the art research, and insights delivered by some of the leading authorities in computer vision, AI, and machine learning, our first-ever fully virtual event is shaping up to be an exciting experience for everyone involved."

As a fully virtual event, attendees will have access to all CVPR program components, including fireside chats, workshops, tutorials, and oral and poster presentations, via a robust, fully searchable, password-protected portal. Credentials to access the portal are provided to attendees shortly after registration.

CVPR fireside chats, workshops, and tutorials will be conducted via live video with live Q&A between presenters and participants. Oral and poster presentations, which will be repeated, will include a pre-recorded video from the presenter(s), followed by a live Q&A session. Attendees will also be able to access presentations/papers and the pre-recorded videos at their convenience to help ensure maximum access given the diverse time zones in which conference participants live. Additionally, CVPR participants can leverage complementary video chat features and threaded question and answer commenting associated with each session and each sponsor to support further knowledge sharing and understanding. Multiple online networking events with video and text chat elements are also included.

"The CVPR Committee has gone to great lengths to deliver a first-in-class virtual conference experience that all attendees can enjoy," said Melissa Russell, Executive Director of the IEEE Computer Society, a co-sponsor of the event. "We are thrilled to be part of this endeavor and are excited to deliver and witness in the coming days the 'what's next' in AI, computer vision and machine learning."

Details on the full virtual CVPR 2020 schedule can be found on the conference website at http://cvpr2020.thecvf.com/program. All times are Pacific Daylight Time (Seattle Time).

Interested individuals can still register for CVPR at http://cvpr2020.thecvf.com/attend/registration. Accredited members of the media can register for the CVPR virtual conference by emailing media@computer.org.

About CVPR 2020
CVPR is the premier annual computer vision and pattern recognition conference. With first-in-class technical content, a main program, tutorials, workshops, a leading-edge expo, and more than 9,000 attendees annually, CVPR creates a one-of-a-kind opportunity for networking, recruiting, inspiration, and motivation. CVPR 2020, originally scheduled to take place 14-19 June 2020 at the Washington State Convention Center in Seattle, Washington, will now be a fully virtual event. Authors and presenters will virtually deliver presentations and engage in live Q&A with attendees. For more information about CVPR 2020, the program, and how to participate virtually, visit http://cvpr2020.thecvf.com/.

About the Computer Vision Foundation
The Computer Vision Foundation is a non-profit organization whose purpose is to foster and support research on all aspects of computer vision. Together with the IEEE Computer Society, it co-sponsors the two largest computer vision conferences, CVPR and the International Conference on Computer Vision (ICCV).

About the IEEE Computer Society
The IEEE Computer Society is the world's home for computer science, engineering, and technology. A global leader in providing access to computer science research, analysis, and information, the IEEE Computer Society offers a comprehensive array of unmatched products, services, and opportunities for individuals at all stages of their professional careers. Known as the premier organization that empowers the people who drive technology, the IEEE Computer Society offers international conferences, peer-reviewed publications, a unique digital library, and training programs. Visit http://www.computer.org for more information.

View original post here:
CVPR 2020 Convenes Thousands from the Global AI, Machine Learning and Computer Vision Community in Virtual Event Beginning Sunday - thepress.net

Unpack the use of AI in cybersecurity, plus pros and cons – TechTarget

AI is under the spotlight as industries worldwide begin to investigate how the technology will help them improve their operations.

AI is far from being new. As a field of scientific research, AI has been around since the 1950s. The financial industry has been using a form of AI -- dubbed expert systems -- for more than 30 years to trade stocks, make risk decisions and manage portfolios.

Each of these use cases exploits expert systems to process large amounts of data quickly at levels that far exceed the ability of humans to perform the same tasks. For instance, algorithmic stock trading systems make millions of trades per day with no human interaction.

Cybersecurity seeks to use AI and its close cousin, machine learning -- where algorithms that analyze data become better through experience -- in much the same way that the financial services industry has.

For cybersecurity professionals, that means using AI to take data feeds from potentially dozens of sources, analyze each of these inputs simultaneously in real time and then detect those behaviors that may indicate a security risk.

Beyond risk identification, AI and machine learning can also improve access control, moving past the weak username-and-password systems in widespread use today to support multifactor, behavior-based, real-time access decisions. Other applications for AI include spam detection, phishing detection and malware detection.

Today's networked environments are extremely complex. Monitoring network performance is challenging enough; detecting unwanted behavior that may indicate a security threat is even more difficult.

Traditional incident response models are based on a three-pronged concept: protect, detect and respond. Cybersecurity experts have long known that of the three, detect is the weak link. Detection is hard to do and is often not done well.

In 2016, Gartner unveiled its own predict, prevent, detect and respond framework that CISOs could use to communicate a security strategy. Machine learning is particularly useful in predicting, preventing and detecting.

There are enormous amounts of data that must be analyzed to understand network behavior. The integration of machine learning and the use of AI in cybersecurity tools will not just illuminate security threats that previously may have gone undetected, but will help enterprises diagnose and respond to incursions more effectively.

AI-based security algorithms can identify malicious behavior patterns in the huge volumes of network traffic far better than people can. However, this technology can only identify the behavioral patterns the algorithms have been trained to identify. With machine learning, AI can go beyond the limits of algorithms and automatically improve its performance through learning or experience. The ability for AI -- and machine learning in particular -- to make decisions based upon data rather than rules promises to yield significant improvements in detection.

Let's examine how the integration of AI and machine learning might help improve the performance of intrusion detection and prevention systems (IDSes/IPSes). A typical IDS/IPS relies upon detection rules, known as signatures, to identify potential intrusions, policy violations and other issues.

The IDS/IPS looks for traffic that matches the installed signatures. But the IDS/IPS can identify malicious traffic only if a signature matching that malicious traffic is installed: no signature, no detection. This means the IDS/IPS cannot detect attacks whose signatures have yet to be developed. In addition, a signature-based IDS/IPS may also be easy to circumvent by making small changes to attacks so that they avoid matching a signature.
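The signature-matching idea, and its blind spot, can be sketched in a few lines. This is a toy illustration, not a real IDS/IPS: the signature names, patterns and payloads are invented for the example.

```python
# Toy signature-based detection: a payload is flagged only if it matches
# an installed pattern -- no signature, no detection.
import re

# Hypothetical signature database (real systems ship thousands of rules).
SIGNATURES = {
    "sql-injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def match_signatures(payload: str) -> list[str]:
    """Return the names of all installed signatures the payload matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

# A known attack pattern is caught...
print(match_signatures("id=1 UNION SELECT password FROM users"))  # ['sql-injection']
# ...but a trivially obfuscated variant slips past the exact pattern.
print(match_signatures("id=1 UNION/**/SELECT password"))          # []
```

The second call shows the circumvention problem the article describes: a small change to the attack (an inline comment instead of whitespace) is enough to avoid matching the signature.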

To close this gap, IDSes/IPSes have for years employed something called heuristic anomaly detection. This lets systems look for behavior that is out of the ordinary and attempt to classify anomalous traffic as benign, suspicious or unknown. When suspicious or unknown traffic is flagged, these systems generate an alert, which requires a human operator to determine whether the threat is malicious. But IDSes/IPSes are hobbled by the sheer volume of data to be analyzed, the number of alerts generated and, especially, the large percentage of false positives. As a result, signature-based IDSes/IPSes dominate.
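A minimal sketch of the heuristic approach: classify a new traffic measurement against a historical baseline using a z-score. The thresholds, labels and baseline numbers here are invented for the example and are not drawn from any real product.

```python
# Toy heuristic anomaly detection: flag observations that deviate
# sharply from a historical baseline.
import statistics

def classify(history, value, suspicious_z=2.0, unknown_z=3.5):
    """Classify a new observation by how many standard deviations
    it sits from the historical mean (hypothetical thresholds)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    z = abs(value - mean) / stdev
    if z < suspicious_z:
        return "benign"
    if z < unknown_z:
        return "suspicious"  # generates an alert for a human operator
    return "unknown"

baseline = [100, 102, 98, 101, 99]  # e.g. requests per second, stdev ~1.58
print(classify(baseline, 101))  # benign
print(classify(baseline, 110))  # unknown
```

Even this toy version hints at the false-positive problem: any legitimate burst of traffic far from the baseline mean will generate an alert that a human has to triage.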


One way to help the heuristic IDS/IPS become more efficient would be the introduction of machine learning-generated probability scores that determine which activity is benign and which is harmful.

The challenge, however, is that, of the billions of actions that occur on networks, relatively few of them are malicious. It is kind of a double-edged sword: There is too much data for humans to process manually and too little malicious activity for machine learning tools to learn effectively on their own.

To address this issue, security analysts train machine learning systems by manually labeling and classifying potential anomalies in a process called supervised learning. Once a machine learning cybersecurity system learns about an attack, it can search for other instances that reflect the same or similar behavior. This method may feel like it's nothing more than automating the discovery and creation of attack signatures, but the knowledge a machine learning system learns about attacks can be applied far more comprehensively than traditional signature detection systems can muster.
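The label-then-learn loop, and the probability scores mentioned earlier, can be illustrated with a toy Naive Bayes scorer. The event features, labels and numbers are invented for the example; a real system would use far richer features and a proper machine learning library.

```python
# Toy supervised learning from analyst-labeled events: learn per-feature
# likelihoods, then score new events with a probability of being malicious.
from collections import Counter

def train(labeled_events):
    """labeled_events: list of (feature_set, label), label in {'benign','malicious'}."""
    counts = {"benign": Counter(), "malicious": Counter()}
    totals = {"benign": 0, "malicious": 0}
    for features, label in labeled_events:
        totals[label] += 1
        counts[label].update(features)
    return counts, totals

def malicious_score(model, features):
    """Return P(malicious | features) under a Laplace-smoothed Naive Bayes."""
    counts, totals = model
    def class_weight(label):
        prior = totals[label] / sum(totals.values())
        likelihood = 1.0
        for f in features:
            likelihood *= (counts[label][f] + 1) / (totals[label] + 2)
        return prior * likelihood
    b, m = class_weight("benign"), class_weight("malicious")
    return m / (b + m)

# Hypothetical analyst-labeled training events.
model = train([
    ({"port_scan", "odd_hours"}, "malicious"),
    ({"port_scan"}, "malicious"),
    ({"normal_login"}, "benign"),
    ({"normal_login", "odd_hours"}, "benign"),
])
print(malicious_score(model, {"port_scan", "odd_hours"}))  # 0.75
```

Unlike a rigid signature, the score degrades gracefully: an event sharing only some features with known attacks still gets an elevated probability rather than a binary miss.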

That's because machine learning systems can look for and identify behavior that is similar or related to what they have learned, rather than rigidly focusing on behavior that exactly matches a traditional signature.

The use of AI in cybersecurity offers the possibility of using technology to cut through the complexity of monitoring current networks, thus improving risk and threat detection. However, the use of AI in cybersecurity is a two-way street. The use of malicious AI, also known as adversarial AI, is growing. A malicious actor could potentially use AI to make a series of small changes to a network environment that, while individually insignificant, could change the overall behavior of the machine learning cybersecurity system once they are integrated over time.

This adversarial AI security threat is not limited to AI used in cybersecurity. It is a potential threat wherever AI is used, including in common industrial control systems and computer vision systems used in banking applications.

This means the AI models themselves are becoming a new attack surface component that must be secured. New security practices will have to be adopted. Some protection strategies will look like things we already know how to do, such as rate-limiting inputs and input validation.

Over time, AI adversarial training could be included as part of the supervised learning process. The uses and benefits of AI and machine learning in cybersecurity are real and necessary. There is too much data to process. It can take months to detect intrusions in today's large network data sets. AI can help detect malicious traffic, but it will take significant effort to develop and train an effective AI cybersecurity system. And, as is the case with all technology, AI can also be deployed maliciously. Mitigating the impact of malicious AI is also a reality in today's security environment.

More:
Unpack the use of AI in cybersecurity, plus pros and cons - TechTarget

Research Associate / Postdoc – Machine Learning for Computer Vision job with TECHNISCHE UNIVERSITAT DRESDEN (TU DRESDEN) | 210323 – Times Higher…

At TU Dresden, Faculty of Computer Science, Institute of Artificial Intelligence, the Chair of Machine Learning for Computer Vision offers a position as

Research Associate / Postdoc

Machine Learning for Computer Vision

(subject to personal qualification employees are remunerated according to salary group E 14 TV-L)

starting at the next possible date. The position is limited to three years, with the option of an extension. The period of employment is governed by the Fixed Term Research Contracts Act (Wissenschaftszeitvertragsgesetz - WissZeitVG). The position offers the opportunity to obtain further academic qualification. Balancing family and career is an important issue: the position is generally suitable for candidates seeking part-time employment. Please note this in your application.


Applications from women are particularly welcome. The same applies to people with disabilities.

Please submit your comprehensive application including the usual documents (CV, degree certificates, transcript of records, etc.) by 31.07.2020 (stamped arrival date of the university central mail service applies), preferably via the TU Dresden SecureMail Portal https://securemail.tu-dresden.de/ by sending it as a single PDF document to mlcv@tu-dresden.de or to: TU Dresden, Fakultät Informatik, Institut für Künstliche Intelligenz, Professur für Maschinelles Lernen für Computer Vision, Herrn Prof. Dr. rer. nat. Björn Andres, Helmholtzstr. 10, 01069 Dresden. Please submit copies only, as your application will not be returned to you. Expenses incurred in attending interviews cannot be reimbursed.

Note on data protection: Your data protection rights, the purpose for which your data will be processed, and further information about data protection are available on the website: https://tu-dresden.de/karriere/datenschutzhinweis

Please find the German version at: https://tu-dresden.de/stellenausschreibung/7713.

See the original post here:
Research Associate / Postdoc - Machine Learning for Computer Vision job with TECHNISCHE UNIVERSITAT DRESDEN (TU DRESDEN) | 210323 - Times Higher...