Pegasus The Humanitarian Costs of Insecure Code – Security Boulevard

Pegasus: The Humanitarian Costs of Insecure Code

A look at the nature and effects of legal, advanced spyware on application security

Typically, stories about cyber attacks grab the reader's attention by describing the damage inflicted on a company in large dollar amounts. While multimillion-dollar ransomware demands are shocking, they can be quickly forgotten. After all, these situations are eventually worked out, and it's not as if anyone's life is in danger.

Pegasus attacks are different.

Pegasus attacks on iPhone and Android devices do not cost businesses millions in revenue. They do not trigger multiple expensive lawsuits for privacy violations or result in sensitive data being used for blackmail. Pegasus measures its damage by its chilling effect on privacy, the incalculable costs of information suppression, and in some cases, human lives.

Pegasus is an advanced spyware that exploits vulnerable mobile apps to gain a foothold on iPhone and Android devices. Once installed, Pegasus gives attackers a considerable amount of control over the device, including the ability to read messages, track the user's location, and activate the camera and microphone.

Pegasus is the creation of the NSO Group, an Israeli firm that licenses it to governments to perform surveillance. NSO states its technology is intended to prevent and investigate terrorism and crime to save thousands of lives around the globe. However, Pegasus is a highly sophisticated tool, and like any tool, its use is only as benevolent as the hand that wields it. The spyware allows governments to crack citizens' mobile devices, track them, and observe their communications. Whether it is solely used to target criminals is left to their discretion.

On the iPhone, Pegasus uses a zero-click attack against the iOS iMessage app to infect the device. A zero-click attack is one that requires no cooperation or interaction from the victim to succeed. Typically, these attacks directly exploit known app vulnerabilities and use data verification loopholes to avoid automated detection and other security features. Zero-click attacks also take lengthy steps to remove or obfuscate all traces of their existence, making them extremely difficult for threat researchers to detect.

Pegasus is easier to deploy on Android and can move laterally to exploit secondary attack vectors if the primary method of infection fails. The Android version of Pegasus does not rely on a zero-click attack but uses Framaroot to discover code exploits and root the device. Android, by design, does not keep the logs researchers use to identify a Pegasus infection. In fact, researchers must often use special tools to detect the presence of Pegasus on Android.

Both the Android and iPhone versions of Pegasus ultimately rely on exploiting vulnerable code. Yet, the spyware is so sophisticated that detecting its presence does little to reveal how it infiltrates a device. This is evident from the sheer length of time that iPhone users have struggled with Pegasus. Media outlets first reported the existence of the spyware in 2016. Apple released a quick fix for iMessage shortly afterward. Yet, the most recent iOS fix for Pegasus arrived on September 13, 2021, five years later.

On July 18th, Amnesty International and Forbidden Stories (a Paris-based non-profit) named 50,000 individuals as potential targets of Pegasus attacks. Among the names were journalists, activists, politicians, and other people of interest. The list was initially leaked to Forbidden Stories, who shared it with the media. The Amnesty International Security Lab collected a small sample of phones from members of the list and tested them for Pegasus infections. The lab discovered Pegasus indicators on 37 of 67 phones.

In response, NSO Group released a statement denying any wrongdoing and criticizing the methodology used by the lab. They reiterated their commitment to only serving law enforcement and intelligence agencies of vetted governments. NSO stated they do not operate Pegasus for clients or have access to internal client data. Therefore, they could not possibly possess or leak a list of targets.

Governments named by Amnesty International for violating their citizens' privacy likewise denied any wrongdoing. In India, several journalists, opposition leaders, and three state officials were identified as appearing on the list. Forensic tests on 22 of the smartphones belonging to suspected Indian targets revealed that 10 were attacked by Pegasus. The Indian government responded by denying that it uses Pegasus to target non-criminals.

One aspect that sets Pegasus apart from other malware is its focus on individual targets. While ransomware and APT groups may conduct surveillance on their targets before launching an attack, they are seldom concerned with individuals. Malware campaigns may involve spear-phishing or whaling attacks against high-ranking individuals, but the goal is usually obtaining their account credentials or access. Pegasus is deployed to directly monitor the individual, not steal their account privileges.

Likewise, traditional malware attacks usually focus on stealing money, hijacking data, or disrupting the operations of an organization. They almost always inflict financial damage through blackmail, extortion, regulatory fines, information theft, or harming the brand name. The damage Pegasus inflicts is personal and applies directly to the individual. This means developers accustomed to weighing the financial risks of vulnerable code should weigh the humanitarian risks as well.

Pegasus also highlights the wide spectrum of adversaries devs are facing. The tactics, techniques, and procedures (TTPs) of APTs and black-hat hackers are well known and generally understood. Their attacks are unlawful, meaning compromised organizations can generally rely on the support of law enforcement. NSO is a well-funded private company, and its customers are governments and law enforcement agencies. This makes it unlikely that anyone officially deploying Pegasus will be considered a criminal. When cracking security on an individual's mobile device is not a crime, the app developer becomes the sole line of defense against Pegasus-like attacks.

Pegasus, like 84% of all cyber attacks, relies on exploiting vulnerabilities in the application layer to succeed. This makes application security testing through methods like SAST, DAST, IAST, and SCA key to preventing these attacks. Simply put, depriving organizations like NSO of vulnerabilities to exploit is the best way to stop them. Once vulnerable code is released, it can be extremely difficult to discover how it is being exploited. If Apple, the world's largest company, is still patching iMessage five years after the first Pegasus infection, what chance do smaller businesses have?

Open-source code presents another problem. Many open-source libraries contain known vulnerabilities, yet 96% of proprietary applications contain open-source code. Simple steps like checking open-source code dependencies with tools like Intelligent SCA (I-SCA) can greatly improve application security by alerting development teams to these vulnerabilities. Likewise, static analysis tools like next-generation SAST (NG-SAST) can give developers daily or weekly insight into vulnerabilities in custom and open-source code. With these kinds of tools, it is possible to integrate security processes throughout the software development lifecycle and better protect user data in an application.
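As a rough illustration of what an SCA-style dependency check does under the hood, here is a minimal sketch in Python. The package names, versions, and advisory entries are invented for the example; real tools query curated vulnerability databases rather than a hard-coded table:

```python
# Toy sketch of a software composition analysis (SCA) check:
# compare declared dependencies against a known-vulnerability list.
# The advisory data and package names here are hypothetical.

KNOWN_VULNS = {
    ("libimage", "2.4.1"): "CVE-XXXX-0001: heap overflow in image parser",
    ("netutils", "1.0.3"): "CVE-XXXX-0002: improper input validation",
}

def parse_requirements(text):
    """Parse 'name==version' lines into (name, version) pairs."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps.append((name.strip(), version.strip()))
    return deps

def scan(deps):
    """Return advisories for any dependency pinned to a vulnerable version."""
    return [(name, ver, KNOWN_VULNS[(name, ver)])
            for name, ver in deps if (name, ver) in KNOWN_VULNS]

requirements = """
libimage==2.4.1
netutils==1.1.0
"""
findings = scan(parse_requirements(requirements))
for name, ver, advisory in findings:
    print(f"{name} {ver}: {advisory}")
```

Run against the sample manifest, the scan flags the pinned libimage 2.4.1 and passes the patched netutils version, which is the core alerting behavior the paragraph above describes.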

For more information on efficient ways to add security testing to the SDLC, visit Shiftleft.io.

Pegasus: The Humanitarian Costs of Insecure Code was originally published in ShiftLeft Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

*** This is a Security Bloggers Network syndicated blog from ShiftLeft Blog - Medium authored by The ShiftLeft Team. Read the original post at: https://blog.shiftleft.io/pegasus-the-humanitarian-costs-of-insecure-code-6f5afe6f36a1?source=rss----86a4f941c7da---4

Read more here:

Pegasus The Humanitarian Costs of Insecure Code - Security Boulevard

Appsmith raises $8M to take on the internal corporate app market with open source code – TechCrunch

Appsmith, which provides open source software that helps companies quickly build internal applications, announced an $8 million Series A round of funding this morning.

Unlike some upstart tech companies that we have seen in the internal application market, Appsmith doesn't sport a no- or low-code approach. Instead, Appsmith targets traditional developers with its service, which provides user-interface components that can be connected to business data sources. Those data-infused interface modules can be combined to build calendars, dashboards and other apps.

Before its Series A round of funding, Appsmith had raised $2.5 million. Canaan led the funding event. Accel and Bessemer participated, among other investors.

Asked why the startup selected that particular lead investor, co-founder and CEO Abhishek Nayak told TechCrunch that his company had been in touch with Canaan's Joydeep Bhattacharyya since its early days and that the investor had experience with internal apps at Microsoft.

Why did Appsmith decide to raise capital now? Per Nayak, rising usage of its service and a desire to build out its platform to support more use cases (things like mobile) were the impetus.

Appsmith doesn't offer a paid product today. But as it currently offers a hosted version of its open source code, it isn't hard to see where it could turn on monetization. Enterprise-specific features would be another obvious method of generating revenue in time.

There has been a trend of startups that build open source technologies raising capital in recent quarters. Appsmith fits neatly into the group.

But the "why" is more interesting. The company told TechCrunch that its co-founders (Nayak and Nikhil Nandagopal) want Appsmith tech to become part of their customers' technology stack, and that open source code is the way to achieve that goal. The logic there is simple: open source code is at once less exposed to the vicissitudes of startup viability and also easy to dig into. Good luck getting similar visibility into proprietary code.

Today, Appsmith's open source project has over 100 external contributors.

The Appsmith team stressed to TechCrunch that feedback from the open source community is useful in making development decisions. And the startup said that by offering a version of its service for free through open source channels, it can provide a service to public-good organizations like nonprofits, which might never become paying customers.

Speaking of which, who are the company's future customers?

In the startup's view, its open source software is a good fit for smaller companies and developers. Its paid products will fit more neatly into midsize and larger companies once they are rolled out.

We'll have eyes on how Appsmith tackles monetization and customer segmentation, two areas of open source business model formation that we find fascinating. Not only because it's an interesting academic question in the case of the startup itself, but also because we want to better understand how the next generation of open source upstarts decides how to make money. Their choices will set the standard for the next cohort of companies building code in an open manner.

Appsmith has lots of competition in the market, with each rival company taking a different tack to the internal application issue. In brief, companies of every shape and size need internal software, and building it is at once tedious, often thankless and unexciting. So, methods that can short-circuit the process of building internal tooling are in demand.

Stacker, for example, wants to help non-developers build apps from spreadsheets. Unqork wants to help enterprise customers build no-code internal apps. UiFlow as well. The list goes on.

We'll check back in with Appsmith when it turns on paid products, an event that it anticipates will occur before the end of Q1 2022. For now, the startup is flush and working in a growing market. Let's see what it can get done with its new capital.

Original post:
Appsmith raises $8M to take on the internal corporate app market with open source code - TechCrunch

Assessing the intersection of open source and AI – VentureBeat


Open source technology has been a driving factor in many of the most innovative developments of the digital age, so it should come as no surprise that it has made its way into artificial intelligence as well.

But with trust in AI's impact on the world still uncertain, the idea that open source tools, libraries, and communities are creating AI projects in the usual wild-west fashion is creating yet more unease among some observers.

Open source supporters, of course, reject these fears, arguing that there is just as little oversight into the corporate-dominated activities of closed platforms. In fact, open source can be more readily tracked and monitored because it is, well, open for all to see. And this leaves us with the same question that has bedeviled technology advances through the ages: Is it better to let these powerful tools grow and evolve as they will, or should we try to control them? And if so, how and to what extent?

If anything, says Analytics Insight's Adilin Beatrice, open source has fueled the advance of AI by streamlining the development process. There is no shortage of free, open source platforms capable of implementing even complex types of AI like machine learning, and this serves to expand the scope of AI development in general and allow developers to make maximum use of available data. Tools like Weka, for instance, allow coders to quickly integrate data mining and other functions into their projects without having to write it all from scratch. Google's TensorFlow, meanwhile, is one of the most popular end-to-end machine learning platforms on the market.

And just as we've seen in other digital initiatives, like virtualization and the cloud, companies are starting to mix and match various open source solutions to create a broad range of intelligent applications. Neuron7.ai recently unveiled a new field service system capable of providing everything from self-help portals to traffic optimization tools. The system leverages multiple open AI engines, including TensorFlow, that allow it to not only ingest vast amounts of unstructured data from multiple sources, such as CRM and messaging systems, but also encapsulate the experiences of field techs and customers to improve accuracy and identify additional means of automation.

One would think that, with open source technology playing such a significant role in the development of AI, it would be at the top of the agenda for policy-makers. But according to Alex Engler of the Brookings Institution, it is virtually off the radar. While the U.S. government has addressed open source with measures like the Federal Source Code Policy, more recent discussions on possible AI regulations mention it only in passing. In Europe, Engler says, open source regulations are devoid of any clear link to AI policies and strategies, and the most recently proposed updates to these measures do not mention open source at all.

Engler adds that this lack of attention could produce two negative outcomes. First, it could result in AI initiatives failing to capitalize on the strengths that open source software brings to development. These include key capabilities like increasing the speed of development itself and reducing bias and other unwanted outcomes. Secondly, there is the potential that dominance in open source solutions could lead to dominance in AI. Open source tends to create default standards in the tech industry, and while top open source releases from Google, Facebook, and others are freely available, the vast majority of projects they support are created from within the company that developed the framework, giving them an advantage in the resulting program.

This, of course, leads us back to the same dilemma that has plagued emerging technologies from the beginning, says the IEEE's Ned Potter. Who should draw the roadmap for AI to ensure it has a positive impact on society? Tech companies? The government? Academia? Or should it simply be democratized, letting the market sort it out? Open source supporters tend to favor a free hand, of course, with the idea that continual scrutiny by the community will organically push bad ideas to the bottom and elevate good ideas to the top. But this still does not guarantee a positive outcome, particularly as AI becomes accessible to the broader public.

In the end, of course, there are no guarantees. If we've learned anything from the past, mistakes are just as likely to come from private industry as from government regulators or individual operators. But there is a big difference between watching and regulating. At the very least, there should be mechanisms in place to track how open source technologies are influencing AI development, so that someone has the ability to give a heads-up if things are heading in the wrong direction.

Here is the original post:
Assessing the intersection of open source and AI - VentureBeat

What Is Machine Learning, and How Does It Work? Here’s a Short Video Primer – Scientific American

Machine learning is the process by which computer programs grow from experience.

This isn't science fiction, where robots advance until they take over the world.

When we talk about machine learning, we're mostly referring to extremely clever algorithms.

In 1950, mathematician Alan Turing argued that it's a waste of time to ask whether machines can think. Instead, he proposed a game: a player has two written conversations, one with another human and one with a machine. Based on the exchanges, the player has to decide which is which.

This imitation game would serve as a test for artificial intelligence. But how would we program machines to play it?

Turing suggested that we teach them, just like children. We could instruct them to follow a series of rules, while enabling them to make minor tweaks based on experience.

For computers, the learning process just looks a little different.

First, we need to feed them lots of data: anything from pictures of everyday objects to details of banking transactions.

Then we have to tell the computers what to do with all that information.

Programmers do this by writing lists of step-by-step instructions, or algorithms. Those algorithms help computers identify patterns in vast troves of data.

Based on the patterns they find, computers develop a kind of model of how that system works.

For instance, some programmers are using machine learning to develop medical software. First, they might feed a program hundreds of MRI scans that have already been categorized. Then, they'll have the computer build a model to categorize MRIs it hasn't seen before. In that way, the medical software could spot problems in patient scans or flag certain records for review.
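The learn-from-labeled-examples pattern described above can be sketched in miniature. The toy classifier below stands in for the far more complex systems used on real MRI data: it "learns" from a handful of labeled feature vectors and categorizes unseen points by nearest neighbor. The features and labels are invented for illustration:

```python
# Toy illustration of the pattern: learn from labeled examples,
# then categorize unseen ones. A 1-nearest-neighbor "model" stands in
# for real medical-imaging systems; features and labels are made up.
import math

# Labeled training data: (feature vector, category)
training = [
    ((1.0, 1.2), "normal"),
    ((0.9, 1.0), "normal"),
    ((3.1, 2.8), "flag-for-review"),
    ((2.9, 3.2), "flag-for-review"),
]

def classify(point):
    """Assign the label of the closest training example."""
    _, label = min(training, key=lambda ex: math.dist(ex[0], point))
    return label

print(classify((1.1, 1.1)))   # near the "normal" cluster
print(classify((3.0, 3.0)))   # near the "flag-for-review" cluster
```

The "model" here is just the stored examples plus a distance rule, but the workflow is the same one the passage describes: categorized data in, a decision procedure out.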

Complex models like this often require many hidden computational steps. For structure, programmers organize all the processing decisions into layers. That's where the term "deep learning" comes from.

These layers mimic the structure of the human brain, where neurons fire signals to other neurons. That's why we also call them neural networks.
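A minimal sketch of what a "layer" means in code: each layer takes its input, applies a weighted transformation per neuron, and hands the result to the next layer. The weights here are hand-picked toy values; real networks learn theirs from data:

```python
# Two stacked "dense" layers in plain Python. Each neuron computes a
# weighted sum of its inputs plus a bias, then a ReLU activation.
# Weights and biases are fixed toy values, not learned.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One dense layer: weighted sum per neuron, then ReLU."""
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> 3 hidden neurons -> 1 output
hidden = layer([1.0, 0.5],
               weights=[[0.2, 0.8], [-0.5, 0.1], [0.9, 0.4]],
               biases=[0.0, 0.1, -0.2])
output = layer(hidden,
               weights=[[1.0, -1.0, 0.5]],
               biases=[0.0])
print(output)
```

Stacking more such layers, and learning the weights instead of fixing them, is all "deep" means in deep learning.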

Neural networks are the foundation for services we use every day, like digital voice assistants and online translation tools. Over time, neural networks improve in their ability to listen and respond to the information we give them, which makes those services more and more accurate.

Machine learning isn't just something locked up in an academic lab, though. Lots of machine learning algorithms are open-source and widely available. And they're already being used for many things that influence our lives, in large and small ways.

People have used these open-source tools to do everything from training their pets to creating experimental art to monitoring wildfires.

They've also done some morally questionable things, like creating deep fakes, videos manipulated with deep learning. And because the data and algorithms that machines use are produced by fallible human beings, they can contain biases. Algorithms can carry the biases of their makers into their models, exacerbating problems like racism and sexism.

But there is no stopping this technology. And people are finding more and more complicated applications for it, some of which will automate things we are accustomed to doing for ourselves, like using neural networks to help power driverless cars. Some of these applications will require sophisticated algorithmic tools, given the complexity of the task.

And while that may be down the road, the systems still have a lot of learning to do.

View post:
What Is Machine Learning, and How Does It Work? Here's a Short Video Primer - Scientific American

Getting machine learning into production is hard: the MCubed webcast is here for support - DevClass

The MCubed webcast returns this week to tackle a whole other beast: Continuous Delivery for Machine Learning. Join us on October 7th at 11am BST (that's 12 o'clock for you CEST peeps) to get into the nitty-gritty of the operational side of ML.

If you've ever worked with an application that uses some form of machine learning, you'll know that some component or other is always evolving: if it isn't the training data that's changing, you'll surely come across a model that needs updating, and if all is well in those areas, there's a good chance a feature request is waiting for implementation, so code modifications are due.

In regular software projects, we already know how to automatically take care of changes and make sure we have a way of keeping our systems up to date without (too many) manual steps. The number of variables at play in ML, however, makes it really tricky to come up with similar processes in that discipline, which is why this is often cited as one of the major roadblocks to getting machine learning-based applications into production.

For the second episode of our free MCubed webcast on October 7th, we therefore decided to sit down with you and take an in-depth look at how to tackle the operational side of ML. Joining in will be DevOps and data expert Danilo Sato, who has helped quite a few organisations set up a comprehensible continuous delivery (CD) workflow for their machine learning projects.

You might know Mr Sato from a popular article series on CD4ML, but his work reaches far beyond that. In his 2014 book DevOps in Practice: Reliable and Automated Software Delivery, he shared insights from working on all sorts of platform modernisation and data engineering projects, insights that also informed some of the good practices he recently investigated.

On the webcast, Sato will discuss how the principles of Continuous Delivery apply to machine learning applications and walk you through the technical components necessary to implement a system that takes care of CD for your ML project. He'll explain the differences between MLOps and CD4ML, take a closer look at the peculiarities of version control and artifact repositories in ML projects, give you some tips on what to observe, and introduce you to the many different ways a model can be deployed.

And in case you have all of this figured out already, Danilo Sato will provide a look into the future of machine learning infrastructure as well as give you some food for thought on open challenges such as explainability and auditability.

The MCubed webcast on October 7th will start at 11am BST (12pm CEST) with a roundup of the latest in machine learning-related software development news, but then it's straight on to the talk.

Don't forget to let us know if you have any topics you'd like to learn more about, or if you are interested in practical experience reports from specific industries. We really want to make these webcasts worth your time, so every hint helps. Also, reach out if you want to share some tricks yourself; we always love to hear from you!

Register here to receive a quick reminder on the day. We're really looking forward to seeing you on Thursday!

Go here to read the rest:
Getting machine learning into production is hard the MCubed webcast is here for support DEVCLASS - DevClass

Immunis.AI Chosen by Amazon Web Services to Showcase its Cloud-Based Genomic Pipeline for Machine Learning – Business Wire

ROYAL OAK, Mich.--(BUSINESS WIRE)--Immunis.AI, Inc., an immunogenomics platform company developing noninvasive blood-based tests to optimize patient care, today announced that Amazon Web Services (AWS) will showcase the company's cloud-based genomic pipeline for machine learning. In collaboration with Mission Cloud Services, the platform will be highlighted in a virtual event, Behind the Innovation, hosted by AWS, today.

Immunis.AI engaged Mission, a partner with deep life science expertise, to design an AWS architecture to leverage Amazon S3 alongside a backend data pipeline using Amazon EC2 and Amazon EBS infrastructure. The challenge Immunis.AI faced was data ingestion and real-time analytics of its large immunotranscriptomic data sets and parallel processing of thousands of samples through its machine learning pipelines. Through the collaboration with Mission and AWS, the ingestion of data by Immunis.AI, which took two weeks to finish manually, can be completed within hours.
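The specifics of Immunis.AI's pipeline are not public, but the fan-out pattern described, pushing thousands of samples through a processing step in parallel rather than one at a time, can be sketched generically with Python's standard library. The processing function below is a placeholder, not the company's actual workload:

```python
# Generic sketch of parallelizing a per-sample pipeline step,
# in the spirit of fanning thousands of samples across workers.
# process_sample is a stand-in; the real analysis step is unknown.
from concurrent.futures import ThreadPoolExecutor

def process_sample(sample_id):
    """Placeholder for one sample's ingestion/analysis step."""
    return sample_id, sample_id * 2  # pretend "result"

sample_ids = range(1000)
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(process_sample, sample_ids))

print(len(results))
```

The same shape scales from a thread pool on one machine to a fleet of cloud instances: the per-sample step stays independent, so throughput grows with the number of workers, which is what turns a weeks-long manual job into hours.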

The virtual event will be held Tuesday, October 5th, from 9 to 10:30 a.m. Pacific Time (12 to 1:30 p.m. Eastern Time). For more information, or to register for the virtual event, click here.

"Machine learning presents its own set of unique challenges, and managing large data sets is a major problem in the field of genomics. Working with Mission and AWS to design an architecture to streamline our data ingestion and analytics has enabled us to drastically accelerate development of our immunogenomic tests to improve diagnosis and treatment of cancer patients," said Geoffrey Erickson, a founder and Senior VP of Corporate Development at Immunis.AI, who will be presenting at the event. "We are pleased to have been chosen by AWS to highlight our architecture, our powerful partnership, and the life-changing outcomes it enables."

"While it is still evolving, Mission provided Immunis.AI with a tested and proven blueprint for a viable research-oriented genomic platform, all backed by AWS and ready to scale quickly and economically," said Jonathan LaCour, Chief Technology Officer & Sr. Vice President, Service Delivery. "We are pleased to support Immunis.AI's important mission to develop tests that can improve the lives of cancer patients, and proud of the Mission-built AWS infrastructure that is helping them."

Immunis.AI continues to leverage its successful blueprint across several clinical studies, as it develops and plans to commercialize its products. Mission will also continue to help Immunis.AI with data modernization, including data lake and analytics initiatives on AWS.

About Immunis.AI, Inc.

IMMUNIS.AI is a privately held immunogenomics company with a patented liquid biopsy platform that offers unique insights into disease biology and individualized assessment. The Intelligentia platform combines the power of the immune system, RNAseq technology, and machine learning (ML) for the development of disease-specific signatures. This proprietary method leverages the immune system's surveillance apparatus to overcome the limitations of circulating tumor cells (CTCs) and cell-free DNA (cfDNA). The platform improves detection of early-stage disease, at the point of immune escape, when there is the greatest opportunity for cure. For more information, please visit our website: https://immunis.ai/

Read more:
Immunis.AI Chosen by Amazon Web Services to Showcase its Cloud-Based Genomic Pipeline for Machine Learning - Business Wire

Come And Do Research In Particle Physics With Machine Learning In Padova! – Science 2.0

I used some spare research funds to open a six-month internship to help my research group in Padova, and the call is open for applications at this site (the second in the list right now, number #23584). So here I wish to answer a few questions from potential applicants, namely:

1) Can I apply?
2) When is the call deadline?
3) What is the salary?
4) What is the purpose of the position? What can I expect to gain from it?

5) What will I be doing if I get selected?

Answers:

1 - You can apply if you have completed a master's degree in a scientific discipline (physics, astronomy, mathematics, statistics, computer science) no more than one year ago. You are expected to possess some programming skills, although your willingness to learn is more important than your knowledge base.

2 - The deadline is October 16. The application process is simple, but you want to look into the electronic procedure early on to verify that you have the required documents.

3 - The salary is in line with the wage of Ph.D. students enrolled in the course in Padova. I do not know the net amount after taxation, but it is of the order of 1100 euros per month. This is not a lot of money, but it is enough to live on in Padova as a student. You won't get rich, but your focus should be on gaining experience and titles for your future career!

4 - The purpose of the internship is to endow the recipient with skills in machine learning applied to fundamental physics research. Ideally, the recipient would be interested in applying for a Ph.D. at the University of Padova after finishing the internship, and the research work would be a very useful asset for his or her CV, along with the probable authorship of a publication on machine learning applications to particle physics; but the six months of work may also be good training for graduates who wish to move out of academia and pursue a career in industry. The point is that what we will be working on together is a topic at the real bleeding edge of innovative applications of deep learning, something that will be invaluable in the future both in research and in industry. I will explain more about what this is below.

5 - If you get selected, you will join my research team, which is embedded in the MODE collaboration, of which I am the leader. We want to use differentiable programming techniques (available through Python libraries such as PyTorch or TensorFlow) to create software that performs end-to-end optimization of complex instruments used for particle physics research or for industrial applications such as muon tomography or proton therapy.

More in detail, we are currently tackling an "easy" application of deep-learning-powered end-to-end optimization, which consists in finding the most advantageous layout of detection elements in a muography apparatus. Muon tomography consists in detecting the flow of cosmic-ray muons in and out of an unknown volume, of which we wish to determine the inner material distribution. This has applications to volcanology (where is the magma?), archaeology (study of hidden chambers in ancient buildings or pyramids), foundries (where is the melted material?), nuclear waste disposal (is there uranium in this box of scrap metal?), and detecting defects in pipelines or other industrial equipment.

To find the optimal layout we consider the geometry, technology, and cost of the detector as parameters, and we find the optimal solution by maximizing a utility function connected to how well the imaging is performed in a given time, the cost of the apparatus, and other constraints. So this is a relatively simple application of differentiable programming: you can pull it off if you model the various elements of the problem with continuous functions. If we manage to create a good software product we will share it freely, and then move on to some harder detector optimization problem (there is a long list of candidates). Actually, we are starting in parallel to study the optimization of calorimeters for particle detectors, which is a much, much more ambitious project that a fresh new Ph.D. student working with me, Federico Nardi, will investigate, again within the MODE collaboration.
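The end-to-end idea above can be reduced to a toy: express the utility as a differentiable function of a layout parameter and follow its gradient. The one-parameter utility below is invented purely for illustration; real work uses automatic differentiation (e.g. PyTorch) over far richer models of the detector, but the optimization loop has the same shape:

```python
# Toy sketch of gradient-based design optimization: choose a detector
# spacing x that maximizes a made-up utility balancing imaging
# performance against cost. Real versions use autodiff over complex
# simulations; here the derivative is written out by hand.

def utility(x):
    # Invented model: performance grows with spacing, cost grows quadratically.
    return 4.0 * x - x * x

def d_utility(x):
    return 4.0 - 2.0 * x

x = 0.5            # initial spacing (arbitrary starting guess)
lr = 0.1           # learning rate
for _ in range(200):
    x += lr * d_utility(x)   # gradient *ascent* on the utility

print(round(x, 3))  # converges toward the optimum at x = 2
```

Swapping the hand-written derivative for autodiff is what makes the approach scale: once every element (geometry, detector response, reconstruction) is modeled with continuous functions, the same ascent loop can tune hundreds of layout parameters at once.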

So, if you are a bright master's graduate and you want to deepen your skills in machine learning, please consider applying! If we select you, we will have loads of fun attacking these hard problems together!

If you need more information, please feel free to email me at dorigo (at) pd (dot) infn (dot) it. Thank you for your interest, and share this information with other potential applicants!

Read more:
Come And Do Research In Particle Physics With Machine Learning In Padova! - Science 2.0

Why automation, artificial intelligence and machine learning are becoming increasingly critical for SOC operations – Security Magazine


Read the rest here:
Why automation, artificial intelligence and machine learning are becoming increasingly critical for SOC operations - Security Magazine

The AI & Machine Learning community stands ready to help in the climate crisis battle – ITProPortal

The world's on fire.

For three days in August, 7 billion tons of rain fell on the peak of Greenland. This is not just the largest amount since records began 71 years ago, but the first time we know of that rain, rather than snow, fell on the country's highest peak. Wildfires in Siberia broke another terrifying record for annual fire-related emissions of carbon dioxide, with almost 193,000 square miles (500,000 square kilometers) of vegetation lost to the fires. And in the same month, the latest (sixth) scientific report from the Intergovernmental Panel on Climate Change sounded the emergency alarm yet again on the need for strong and sustained reductions in emissions of carbon dioxide and other greenhouse gases to try to save a common future for us all.

We may have known this for a while, but the rate of extreme weather events and the accumulation of more and more data about the global emergency is now inescapable. It's genuinely no exaggeration to say we're in a fight for survival. There are also mounting financial and business knock-on effects that have already cost global economies uncounted billions: Capgemini research shows that in the past twenty years, there were 7,348 major recorded disaster events claiming 1.23 million lives, affecting 4.2 billion people, and resulting in approximately $3 trillion in global economic losses.

For sure, we're going to need to do a lot more than just recycle our soda cans or eat a little less meat per week (though we need to keep doing all that, too); scientists are now talking of the need for serious, large-scale geoengineering to try to save us. Climate change should be on every organization's agenda, but the IT world, which (rightly) gets criticized for its exorbitant electricity consumption as part of general economic activity (which is rising again), has a particular responsibility to help.

Why? Because we burn a lot of kilowatts, but also because a lot of smart people work in our world, many of whom are deeply concerned about the threat of anthropogenic climate change. As a global citizen and IT professional, I feel this concern too, and I also work in the AI (artificial intelligence) world. So I asked myself what AI and AI professionals can do to help, and this is what I found.

At the top level, AI provides powerful tools to researchers, engineers, chemists, biologists, town planners, and policymakers; in short, to everyone trying to make a positive difference. All these people need the best, most recent, most granular data to design their interventions and remediation techniques, which will certainly include carbon capture, greener transport, and new post-carbon industries and ways of living. But AI, or more specifically machine learning, is also already lending a hand in a variety of practical ways and climate crisis use cases:

The potential of machine learning in this space has already been called out by the EU, which, in a report on the potential of AI to achieve its ambitious Green Deal targets, noted that the transformative potential of Artificial Intelligence to contribute to the goals of a green transition has been increasingly and prominently highlighted due to its ability to accelerate the analysis of large amounts of data and increase our knowledge base, allowing us to better understand and tackle environmental challenges, especially around relevant information for environmental planning, decision-making, management, and monitoring of the progress of environmental policies. And as Brussels also points out, AI-generated information could help consumers and businesses adapt toward more sustainable behavior, among other potential benefits.

That same study also points out the potential downside: AI could contribute negatively via unforeseen consequences. For example, automatically more efficient products might cause users to give up control over their energy consumption and over-consume.

Yes, but we in the machine learning sector are very conscious of these issues, as are national governments and other legislators. I am convinced that AI can, and should, play a central and positive role in helping put out the global fire, and that companies could also use it to start incorporating the impact of climate change into their future planning processes.

In our own modest way, we're eating our own dog food at the company I work for, H2O.ai. Our technology has been used for a number of positive climate projects and initiatives, including our work with Wildbook, a non-profit focused on wildlife conservation and research that is blending structured wildlife research with AI, citizen science, and computer vision to speed population analysis and develop new insights to help fight the extinction of threatened species like the elephant.

Could we be doing more? Yes, and we need to. Could we all be doing more? Yes, and we need to. I believe that the climate emergency can be controlled, and that a climate AI culture emerging among technologists, policymakers, domain experts, philosophers, and the open-source community to optimize the design and deployment of helpful AI tools could really help.

Mark Bakker, Regional Lead Benelux, H2O.ai

Originally posted here:
The AI & Machine Learning community stands ready to help in the climate crisis battle - ITProPortal

What is data poisoning and how do we stop it? – TechRadar

The latest trend in business is the adoption of machine learning models to bolster AI systems. However, as this process becomes more and more automated, it naturally exposes those systems to new emerging threats to the function and integrity of AI, including data poisoning.

About the author

Spiros Potamitis is a Senior Data Scientist in the Global Technology Practice at SAS.

Below, discover what data poisoning is, how it threatens business systems, and finally how to defeat it and win the fight against those who wish to manipulate data for their own gain.

Before we discuss data poisoning, it's worth revisiting how machine learning models work. We train these models to make predictions by feeding them historical data. From these data, we already know the outcome that we would like to predict in the future and the characteristics that drive this outcome. These data teach the model to learn from the past, and the model can then use what it has learned to predict the future. As a rule of thumb, when more data are available to train the model, its predictions will be more accurate and stable.
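The train-on-history, predict-the-future loop described above can be sketched in a few lines. The customer-churn scenario, features, and numbers below are invented purely for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Invented historical data: each row is [orders, complaints] for a past
# customer; the label records the outcome we already know (1 = churned).
X_history = [[1, 5], [2, 4], [8, 0], [9, 1], [2, 3], [7, 1]]
y_history = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X_history, y_history)           # learn from the past

new_customer = [[8, 1]]                   # outcome not yet known
prediction = model.predict(new_customer)  # predict the future
```

Everything the model "knows" comes from `X_history` and `y_history`, which is exactly why corrupting that training data, as discussed next, corrupts the predictions.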

AI systems that include machine learning models are normally developed by experienced data scientists. They thoroughly examine and explore the data, remove outliers and run several sanity and validation checks before, during and after the model development process. This means that, as far as possible, the data used for training genuinely reflect the outcomes that the developers want to achieve.

However, what happens when this training process is automated? This rarely occurs during development, but there are many occasions when we want models to continuously learn from new operational data: on-the-job learning. At that stage, it would not be difficult for someone to develop misleading data that feed directly into AI systems and make them produce faulty predictions.

Consider, for example, Amazon's or Netflix's recommendation engines. Think how easy it is to change the recommendations you receive by buying something for someone else. Now consider that it is possible to set up bot-based accounts to rate programs or products millions of times. This will clearly change ratings and recommendations, and poison the recommendation engine.

This is known as data poisoning. It is particularly easy if those involved suspect that they are dealing with a self-learning system, like a recommendation engine. All they need to do is make their attack clever enough to pass the automated data checks, which is not usually very hard.
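A minimal sketch of this bot-rating attack on a toy average-rating recommender (the titles and ratings are invented) shows how few poisoned inputs it takes to flip a ranking:

```python
from statistics import mean

# Toy "recommender": recommend the title with the highest mean rating.
ratings = {
    "Documentary": [5, 4, 5, 4],
    "Infomercial": [2, 1, 2],
}

def top_pick(ratings):
    return max(ratings, key=lambda title: mean(ratings[title]))

before = top_pick(ratings)  # honest ratings favor "Documentary"

# Poisoning: bot accounts flood the system with five-star ratings
# for the title the attacker wants promoted.
ratings["Infomercial"].extend([5] * 100)

after = top_pick(ratings)   # the poisoned engine now promotes "Infomercial"
```

Each individual bot rating is a perfectly valid value, so a per-record validity check never fires; only monitoring the distribution of incoming ratings would reveal the attack.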

The other issue with data poisoning is that it can be a long, slow process. Hackers can afford to take their time to change the data by feeding in a few results at a time. Indeed, this is often more effective, because it is harder to detect than a massive influx of data at a single point in time, and significantly harder to undo.

Fortunately, there are steps that organizations can take to prevent data poisoning. These include:

1. Establish an end-to-end ModelOps process to monitor all aspects of model performance and data drift, so that system function can be closely inspected.

2. Establish a business flow for the automatic re-training of models. This means that your model will have to go through a series of checks and validations by different people in the business before the updated version goes live.

3. Hire experienced data scientists and analysts. There is a growing tendency to assume that everything technical can be handled by software engineers, especially with the shortage of qualified and experienced data scientists. However, this is not the case. We need experts who really understand AI systems and machine learning algorithms, and who know what to look for when we are dealing with threats like data poisoning.

4. Use open-source data with caution. Open-source data are very appealing because they provide access to more data to enrich existing sources. In principle, this should make it easier to develop more accurate models. However, these data are just that: open. This makes them an easy target for fraudsters and hackers. The recent attack on PyPI, which flooded it with spam packages, shows just how simple this can be.
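The drift monitoring in step 1 can be sketched as a simple statistical check on incoming batches; the three-sigma threshold and all the numbers below are illustrative choices, not a standard:

```python
from statistics import mean, stdev

# Flag an incoming batch whose feature mean strays too far from the
# training baseline. The three-sigma rule here is a toy choice; real
# ModelOps monitoring would track many features and trends over time.
def drifted(training_values, incoming_values, sigmas=3.0):
    mu, sd = mean(training_values), stdev(training_values)
    threshold = sigmas * sd / (len(incoming_values) ** 0.5)
    return abs(mean(incoming_values) - mu) > threshold

baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 9.7, 10.1]
normal_batch = [10.0, 9.9, 10.2, 10.1]    # passes the check
poisoned_batch = [14.8, 15.1, 15.3, 14.9]  # triggers the alarm
```

A slow-drip attack that shifts the data a little at a time would slip under a per-batch check like this, which is why the slow-poisoning risk described earlier also calls for monitoring long-term trends.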

It is vital that businesses follow the recommendations above to defend against the threat of data poisoning. However, there remains a crucial means of protection that often gets overlooked: human intervention. While businesses can automate their systems as much as they would like, it is paramount that they rely on the trained human eye to ensure effective oversight of the entire process. This prevents data poisoning from the outset, allowing organizations to innovate through insights, with their AI assistants beside them.

Continue reading here:
What is data poisoning and how do we stop it? - TechRadar