The FBI Forced A Suspect To Unlock Amazon’s Encrypted App Wickr With Their Face – Forbes

A warrant allowed FBI agents in Tennessee to force a suspect to unlock his encrypted Amazon messaging app, Wickr, with his face. It's an unprecedented move by the feds.

In November last year, an undercover agent with the FBI was inside a group on Amazon-owned messaging app Wickr, with a name referencing young girls. The group was devoted to sharing child sexual abuse material (CSAM) within the protection of the encrypted app, which is also used by the U.S. government, journalists and activists for private communications. Encryption makes it almost impossible for law enforcement to intercept messages sent over Wickr, but this agent had found a way to infiltrate the chat, where they could start piecing together who was sharing the material.

As part of the investigation into the members of this Wickr group, the FBI used a previously unreported search warrant method to force one member to unlock the encrypted messaging app using his face. The FBI has previously forced users to unlock an iPhone with Face ID, but this search warrant, obtained by Forbes, represents the first known public record of a U.S. law enforcement agency getting a judge's permission to unlock an encrypted messaging app with someone's biometrics.

According to the warrant, the FBI first tracked down the suspect by sending a request for information, via an unnamed foreign law enforcement partner, to the cloud storage provider hosting the illegal images. That gave them the Gmail address the FBI said belonged to Christopher Terry, a 53-year-old Knoxville, Tennessee resident, who had prior convictions for possession of child exploitation material. It also provided IP addresses used to create the links to the CSAM. From there, investigators asked Google and Comcast via administrative subpoenas (data requests that don't have the same level of legal requirements as search warrants) for more identifying information that helped them track down Terry and raid his home.

When they apprehended Terry, the FBI obtained his unlocked phone as well. But there was a problem: His Wickr account was locked with Apple's Face ID facial recognition security. "By the time it was made known to the FBI that facial recognition was needed to access the locked application Wickr, Terry had asked for an attorney," the FBI noted in its warrant. "Therefore, the United States seeks this additional search warrant seeking Terry's biometric facial recognition to complete the search of Terry's Apple iPhone 11."

After the FBI successfully forced Terry to use his face to unlock his Wickr account, Terry was charged in a criminal complaint with distribution and possession of CSAM, but has not yet offered a plea. His lawyer did not respond to a request for comment at the time of publication.

Amazon's Wickr had not provided comment at the time of publication. The FBI, Google and Comcast did not immediately respond to requests for comment.

Forcing people to unlock encrypted messaging with their biometrics is unprecedented and controversial. That's because of an illogical quirk in U.S. law: Courts across the U.S. have not allowed investigators to compel people to hand over a passcode for phones or apps, but they have allowed them to repeatedly unlock phones using biometrics. That's despite the obvious fact that the result is the same.

Jerome Greco, a public defender in the Digital Forensics Unit of the Legal Aid Society in New York City, says this is because American law hasn't caught up with the technology. Passcodes, unlike biometric information, are legally considered testimonial, and citizens are not obliged to provide such testimony because the Fifth Amendment protects you from self-incrimination. But body parts are, by their nature, not as private as a person's thoughts, Greco notes.

"Most courts are going to find they can force you to use your face to unlock your phone because it's not compelling you to speak or incriminate yourself... similar to fingerprints or DNA," Greco says.

But he believes there will soon be enough diverging case law for the Supreme Court to have to decide whether or not compelled facial recognition unlocks are lawful. "We're trying to apply centuries-old constitutional law that no one could have envisioned would have been an issue when the laws were written," he says. "I think the fight is coming."

There has been some pushback over such biometric unlocks from judges in some states. That includes two 2019 cases in California and Idaho, where the police wanted to force open phones inside properties relevant to the investigations. The judges in those cases declared biometric data was, in fact, testimonial, and law enforcement couldn't force the owners of those phones to use their faces to unlock them.

But last year, Forbes revealed the Justice Department was continuing to carry out such searches. It had also adopted new language in its warrants that said suspects have a legal right to decline to tell law enforcement whether it's your face, your finger, or your eye that unlocks your phone. But even if you don't say what will unlock your phone, the DOJ said investigators could unlock your device by simply holding it up to your face or pressing your finger to it.

The search also comes after years of campaigning by the FBI to have tech giants provide more assistance in providing access to encrypted data. Since the 2015 San Bernardino terrorist attack, where the Justice Department demanded Apple open the shooter's iPhone, that debate has intensified. The warrant, however, shows the government does have some techniques it can use to find criminals using the likes of Wickr and its encrypted data.

For now, Greco says the best way a person can protect themselves from such searches is to lock a device with a complex passcode rather than a face. It's possible to do the same with Wickr by disabling Touch ID or Face ID.


Why financial institutions can't bank on encryption – Global Banking And Finance Review

Simon Mullis, Chief Technology Officer at Venari Security

The past few years have seen a marked increase in geo-political tensions and emerging cyberattacks, keeping security teams on their toes. One of the most significant security threats, however, is already hiding in plain sight, remaining undetected within encrypted traffic. A major target for these attacks is the UK's Critical National Infrastructure (CNI), and defending against them should be an urgent priority for the finance industry.

The National Cyber Security Centre's definition of UK CNI comprises 13 sectors: the essential systems, processes, people and information needed for the country's infrastructure. Importantly, the loss or compromise of any of these organisations could have damaging and extensive impacts on the economy or on society. Although the first essential systems that come to mind may be power grids or water supplies, the finance sector also includes many organisations that provide essential services. Whether it be cash withdrawals and deposits, digital wire transfers, loan applications or investments, these services are relied on daily and must be treated in the same way. This places a real responsibility on banks and financial institutions to ensure their systems are secure, with equally real consequences for failing to do so.

If attacks on CNI are only increasing, what does this mean for financial institutions, and more importantly, how can they ensure they are guarding against them? Let's consider the risks cyberattacks pose to CNI, as well as the actions the finance sector can take to protect its customers, their data, and financial assets.

The cybersecurity risks to CNI

One of the most recent high-profile CNI attacks that the finance industry must analyse and guard against is the Colonial Pipeline ransomware incident of May 2021. The pipeline operator reported that a cyberattack had forced the company to temporarily shut down all business functions.

What is particularly significant about this attack is that a single exposed username and password allowed the attackers to gain access. Once in, their activity was end-to-end encrypted, just like all the other traffic. Vast swathes of the US were affected, with 45% of the East Coast's fuel operations halted as a result.

In this case, despite the organisation protecting its data with strong encryption standards, attackers were able to enter the network through a legitimate, encrypted path, rendering many of the countermeasures ineffective. With the operators unaware of any anomalous activity on their networks, the intruders had all the time they needed to assess the system and get organised.

This presents a dilemma for CNI sectors, especially finance, where interactions and operations have to be encrypted.

Encryption is no longer enough

As happened in the Colonial Pipeline incident, the use of end-to-end encryption enabled attackers to conceal themselves in legitimate traffic. While critical to support data privacy and security in the event of breaches, end-to-end encryption renders many established means of detection ineffective.

Most defence methods still rely heavily on decryption and relatively rudimentary analysis to detect when traffic might be known-bad or deviating from expected patterns. The volume and speed of encrypted data now passing across networks mean that it is impossible to detect everything with processes and techniques requiring this type of inspection.

And indeed, this is not a cutting-edge approach by cybercriminals. In the first three quarters of 2021 alone, threats over encrypted channels increased by 314% on the previous year. If organisations continue to use the same inadequate detection techniques to uncover malicious activity on their network, the rate of attacks using encrypted traffic will continue to grow at this rate or higher.

The security industry has long understood that breaches are not 'if' but 'when' scenarios. And the current global climate, sparking a rise in nation-state attacks, undoubtedly increases the threat level further for CNI, and especially for sensitive sectors such as finance.

Going beyond decryption to gain visibility

Financial institutions must strike a careful balance when it comes to security. On the one hand, it is vital they regain the network visibility that end-to-end encryption risks concealing; on the other, it is a necessity that they maintain encryption in the first place.

Decryption is too cumbersome and time-consuming an approach now that entire networks are encrypted, both data at rest and data in motion. Organisations can only hope to keep up if they monitor for aberrant behaviour and malicious activity in their traffic without having to rely on decryption.

The solution? Security teams need to look towards behavioural analytics to detect what is happening within encrypted traffic flows. Built on a combination of machine learning and artificial intelligence, behavioural analytics can analyse encrypted traffic in near real-time without decryption. By accurately distinguishing between normal and anomalous behaviour, it significantly increases the rate and speed at which malicious activity concealed in encrypted traffic can be detected, whilst ensuring data remains private.
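As a rough illustration of the idea (a sketch, not any vendor's implementation), the example below assumes flow metadata such as packet sizes, durations and byte counts is already being exported from the network, and uses scikit-learn's IsolationForest to flag flows that deviate from a learned baseline, with no decryption involved.

```python
# Sketch: learn a baseline of "normal" encrypted-flow metadata, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: mean packet size (bytes), flow duration (s), bytes sent, bytes received
baseline_flows = rng.normal(loc=[900, 2.0, 5e4, 2e5],
                            scale=[120, 0.5, 1e4, 4e4], size=(5000, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline_flows)

new_flows = np.array([
    [880, 1.8, 4.8e4, 1.9e5],   # looks like routine business traffic
    [1400, 600.0, 9e6, 3e3],    # long-lived, upload-heavy flow: possible exfiltration
])
print(model.predict(new_flows))  # 1 = consistent with baseline, -1 = anomalous
```

Nothing in the traffic is decrypted; the model only reasons about how the encrypted flows behave.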

Security teams can then react immediately to contain the threats such analytics identify, rather than responding after the fact, when banks might only realise an attack has taken place once a customer has experienced a breach.

Not a threat, but a reality

As the geo-political landscape becomes more treacherous, and society even more interconnected, critical infrastructure attacks will only increase, with financial services a major target.

Security teams can no longer bury their heads in the sand, as these attacks may not be a looming threat but an existing issue, hidden by the very encryption they've relied on. Acting now is key; otherwise the risks posed by an attacker will only increase.


Types Of Data Security Compliance And Why They’re Important – Information Security Buzz

Every business has data it needs to protect against breaches and hacking attempts. The types of data today's businesses store include sensitive customer information, financial data, and confidential agreements or trade secrets. To protect this data, businesses are making sure their internet-facing assets are secured and follow certain data security regulations. These regulations are known as data security compliance standards.

There are many different types of data security compliance standards, but each one is important in protecting your business's sensitive information. This blog post will discuss why data security compliance is important and what the different types of compliance standards are for global organizations that rely on the internet.

What is Data Security?

Data security is a subset of cybersecurity. It is implemented to protect electronic information (at rest or in transit) against unauthorized access.

This can include physical security, which protects against theft of equipment and data, as well as logical security, which protects against hacking and other cyber attacks. Data security is important for businesses of all sizes, as it can help to prevent data breaches that could lead to identity theft, financial loss, and reputational damage.

What is Security Compliance?

Security compliance is a set of guidelines that businesses must follow in order to ensure the safety of their data. These guidelines are put in place by governments and standards organizations. They can include standards for how data is collected, stored, and transmitted, as well as requirements for employee training and security measures.

Who Needs Security Compliance?

Why is Compliance Important for businesses?

Compliance with data security regulations is important for businesses because it helps to protect their sensitive information from unauthorized access. By following the guidelines set forth in these compliance standards, businesses can help to prevent data breaches that could lead to identity theft, financial loss, and reputational damage. Additionally, compliance can also help businesses to avoid fines and other penalties that may be imposed if they fail to meet these standards. Read our guide on top 5 business benefits of cybersecurity compliance to know more.

What Are the Risks of Non-Compliance?

There are many risks associated with non-compliance, including:

Fines and penalties: Perhaps the most well-known risk of non-compliance is the possibility of being fined or penalized by the government or other regulatory bodies.

Loss of customers: Another risk of non-compliance is the loss of customers. This can happen if customers lose trust in your business due to a data breach or other security incident.

Damage to reputation: Non-compliance can also damage your business's reputation, which can make it difficult to attract new customers and partners.

These are just some of the risks associated with non-compliance.

Different Types of Data Security Compliance

1. PCI-DSS

The Payment Card Industry Data Security Standard (PCI-DSS) is a set of guidelines for businesses that do financial transactions on their platform or accept credit card payments. These guidelines cover topics such as data storage, encryption, and access control. PCI-DSS compliance is important for businesses because it helps to protect customer credit card information from being stolen.

2. HIPAA

The Health Insurance Portability and Accountability Act (HIPAA) is a set of guidelines for businesses that handle protected health information. These guidelines cover topics such as data storage, encryption, and access control. HIPAA compliance is important for businesses because it helps to protect patient privacy and prevent medical identity theft.

3. SOC

The AICPA's System and Organization Controls (SOC) is a set of guidelines for businesses that want to demonstrate their commitment to data security. These guidelines cover topics such as data storage, encryption, and access control. SOC compliance is important for businesses because it helps to build trust with customers and partners.

4. ISO 27001

The International Organization for Standardization's ISO 27001 is a set of guidelines for businesses that want to implement information security controls. These guidelines cover topics such as data storage, encryption, and access control. ISO 27001 compliance is important for businesses that need to meet recognized standards in their industry (for example, IT security, cloud security or SaaS platforms), and it demonstrates that the compliant entity is secured against unauthorized access.

5. GDPR

The General Data Protection Regulation (GDPR) is a set of guidelines for businesses that collect, store or process personal data on their websites and ecommerce stores. GDPR is mandatory for all companies that have customers from European Union (EU) countries. GDPR guidelines cover topics such as data storage, encryption, and access control. GDPR compliance is important for businesses because it helps to protect the privacy of EU citizens.

Each of these compliance standards is important in its own way and can help to protect your business from different types of risks. By understanding the different types of compliance standards and what they entail, you can make sure that your business is taking the necessary steps to protect its sensitive information.

How Can My Business Comply with Data Security Standards?

Implementing the standards above can help your business comply with the different types of data security compliance requirements. By doing so, you can help to protect your business from the risks associated with non-compliance.

Conclusion

We see news of data breaches every day, and it is not stopping. By implementing compliance and security standards, businesses can improve their data security posture and protect their customers' information against a data breach. Compliance and security standards are a good place to start when building a strong cybersecurity infrastructure, but businesses should also consider other measures, such as encryption and penetration testing, to further secure their data.


What is Azure VPN, and how does it work? – TechRadar

Microsoft's Azure VPN offers two kinds of products: Point-to-Site (P2S) and Site-to-Site (S2S). Site-to-Site VPN is a form of cloud VPN, while Point-to-Site is an example of a remote-access VPN. For a refresher on the two product types, you can read our comparison piece.

Azure VPN will encrypt your corporate network communications with military-grade AES-256 encryption, regardless of which product you choose. This level of security ensures that your sensitive corporate data remains safe while in transit between networks, or to and from remote workers' PCs.

This article will discuss the appropriate use cases for each Azure product, describe their unique features, and explain pricing and customer support options.

If you've ever worked from home for an office job, you've probably used a Point-to-Site VPN. P2S VPN encrypts communication between remote workers' devices and your corporate server by creating a secure communication corridor called a tunnel.

Tunnels encrypt information at one end, then decrypt it at the destination. This allows remote workers to safely access business apps and sensitive customer information from home. P2S VPN tunnels are temporary and can open or close as required, as employees log on and off from the corporate network.
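As a minimal sketch of that encrypt-at-one-end, decrypt-at-the-other idea (the cipher mode and key handling here are simplifying assumptions, not Azure's actual implementation), the example below uses AES-256 in GCM mode via Python's cryptography package.

```python
# Sketch of tunnel-style encryption with AES-256-GCM. In a real VPN the session key
# comes from the tunnel's key exchange, not from inline generation like this.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit session key
aesgcm = AESGCM(key)

def encrypt(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

packet = encrypt(b"GET /payroll HTTP/1.1")  # encrypted at one end of the tunnel...
print(decrypt(packet))                      # ...decrypted at the destination
```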

Site-to-Site VPN works quite differently. It also creates a tunnel, but the tunnel is always active and optimized for large volumes of data. These permanent tunnels can send large volumes of encrypted information back and forth between two or more corporate networks. An example of two sites that need to communicate this way could be a head office in New York communicating with a satellite branch in LA, or two corporate head offices in Europe and Asia exchanging information.

Which of these products makes sense for you will depend on your business needs. A group of geographically remote research sites with a central database and only in-office staff may need a Site-to-Site VPN. In contrast, a small business with just one location but several remote workers may opt for Point-to-Site. You will need both if you run a large enterprise with remote staff and multiple branch offices.

When selecting a corporate VPN, there are many options for both Site-to-Site and Point-to-Site solutions. For instance, in addition to Azure, Perimeter 81 and Amazon AWS VPN offer both product types. You can read our Perimeter 81 Business VPN Alternative Review and What is AWS VPN articles for more information on these two providers.

Azure VPN's products have many advantages over the competition. Firstly, Azure doesn't charge upfront for the use of its VPN gateways. S2S and P2S VPNs use a VPN gateway to encrypt and decrypt communications. Some gateways can handle Site-to-Site connections, while others are designed for Point-to-Site. How many connections a gateway can handle, and of what type, varies by provider.

Many VPN providers charge a flat fee per VPN gateway, plus an additional per-hour cost based on usage. Azure charges purely by the hour. Additionally, Azure's gateways can handle both P2S and S2S connections, offering even more flexibility.

Azure's pricing is per hour and based on the size of the VPN gateway you need. Gateways can create Site-to-Site and Point-to-Site tunnels.

For P2S connections, remember that the number of tunnels can fluctuate throughout the day, as various devices connect and disconnect. Be sure to buy a big enough gateway to handle your traffic at peak time. Otherwise, employees may have trouble logging in during the busiest time of the day.

Consult the chart below for an overview of Azure's pricing. Note that although prices are shown by the hour, Azure's billing is monthly, billed on the day of the month that you activated your account.

As you can see, pricing varies considerably based on the number of connections you need, and the amount of data going through your system at any given moment. Pay attention to your bandwidth needs, the number of remote employees you have, and the number of Site-to-Site connections you have to establish.

Azure's Basic customer support package, included for all customers, provides access to the self-service knowledge base and support ticket system. There are no response-time guarantees for support tickets on the Basic tier, and Basic support customers are last in line behind all paying support plan holders.

Limited free support is normal for cloud VPN providers, and Azure stands out simply by having a free support option that isn't entirely self-service. Paid plans start at $29/month for trial and non-production environments, and go up to $1,000/month for one-hour critical case response time, plus a dedicated consultant.

Azure offers two powerful cloud-based VPN solutions for Site-to-Site and Point-to-Site VPN. Its scalable pricing, with no upfront fees for gateways, and 99% uptime on P2S systems, allows for an impressive degree of flexibility when building a P2S, S2S, or hybrid solution.

Azure's customer support options are limited and expensive, but this is normal for a cloud VPN provider. If you need encrypted access for remote employees, or want to securely connect a set of remote networks together, Microsoft Azure VPN is a robust option.

To learn more about business VPNs, see our picks for best business VPN, and read our choices for the best VPN service providers overall.

TechRadar created this content as part of a paid partnership with Perimeter 81. The contents of this article are entirely independent and solely reflect the editorial opinion of TechRadar.


What is Artificial Intelligence? Guide to AI | eWEEK – eWeek

By any measure, artificial intelligence (AI) has become big business.

According to Gartner, customers worldwide will spend $62.5 billion on AI software in 2022. And it notes that 48 percent of CIOs have either already deployed some sort of AI software or plan to do so within the next twelve months.

All that spending has attracted a huge crop of startups focused on AI-based products. CB Insights reported that AI funding hit $15.1 billion in the first quarter of 2022 alone. And that came right after a quarter that saw investors pour $17.1 billion into AI startups. Given that data drives AI, it's no surprise that related fields like data analytics, machine learning and business intelligence are all seeing rapid growth.

But what exactly is artificial intelligence? And why has it become such an important and lucrative part of the technology industry?

Also see: Top AI Software

In some ways, artificial intelligence is the opposite of natural intelligence. If living creatures can be said to be born with natural intelligence, man-made machines can be said to possess artificial intelligence. So from a certain point of view, any thinking machine has artificial intelligence.

And in fact, one of the early pioneers of AI, John McCarthy, defined artificial intelligence as "the science and engineering of making intelligent machines."

In practice, however, computer scientists use the term artificial intelligence to refer to machines doing the kind of thinking that humans have taken to a very high level.

Computers are very good at making calculations: taking inputs, manipulating them, and generating outputs as a result. But in the past they have not been capable of other types of work that humans excel at, such as understanding and generating language, identifying objects by sight, creating art, or learning from past experience.

But that's all changing.

Today, many computer systems have the ability to communicate with humans using ordinary speech. They can recognize faces and other objects. They use machine learning techniques, especially deep learning, in ways that allow them to learn from the past and make predictions about the future.

So how did we get here?

Also see: How AI is Altering Software Development with AI-Augmentation

Many people trace the history of artificial intelligence back to 1950, when Alan Turing published "Computing Machinery and Intelligence." Turing's essay began, "I propose to consider the question, 'Can machines think?'" It then laid out a scenario that came to be known as the Turing Test. Turing proposed that a computer could be considered intelligent if a person could not distinguish the machine from a human being.

In 1956, John McCarthy and Marvin Minsky hosted the first artificial intelligence conference, the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). It convinced computer scientists that artificial intelligence was an achievable goal, setting the foundation for several decades of further research. And early forays into AI technology developed bots that could play checkers and chess.

The 1960s saw the development of robots and several problem-solving programs. One notable highlight was the creation of ELIZA, a program that simulated psychotherapy and provided an early example of human-machine communication.

In the 1970s and 80s, AI development continued but at a slower pace. The field of robotics in particular saw significant advances, such as robots that could see and walk. And Mercedes-Benz introduced the first (extremely limited) autonomous vehicle. However, government funding for AI research decreased dramatically, leading to a period some refer to as the AI winter.

Interest in AI surged again in the 1990s. The Artificial Linguistic Internet Computer Entity (ALICE) chatbot demonstrated that natural language processing could lead to human-computer communication that felt far more natural than what had been possible with ELIZA. The decade also saw a surge in analytic techniques that would form the basis of later AI development, as well as the development of the first recurrent neural network architecture. This was also the decade when IBM rolled out its Deep Blue chess AI, the first to win against the current world champion.

The first decade of the 2000s saw rapid innovation in robotics. The first Roombas began vacuuming rugs, and robots launched by NASA explored Mars. Closer to home, Google was working on a driverless car.

The years since 2010 have been marked by unprecedented increases in AI technology. Both hardware and software developed to a point where object recognition, natural language processing, and voice assistants became possible. IBM's Watson won Jeopardy. Siri, Alexa, and Cortana came into being, and chatbots became a fixture of modern retail. Google DeepMind's AlphaGo beat human Go champions. And enterprises in all industries have begun deploying AI tools to help them analyze their data and become more successful.

Now AI is truly beginning to evolve past some of the narrow and limited types into more advanced implementations.

Also see: The History of Artificial Intelligence

Different groups of computer scientists have proposed different ways of classifying the types of AI. One popular classification uses three categories:

Another popular classification uses four different categories:

While these classifications are interesting from a theoretical standpoint, most organizations are far more interested in what they can do with AI. And that brings us to the aspect of AI that is generating a lot of revenue: the AI use cases.

Also see: Three Ways to Get Started with AI

The possible applications for artificial intelligence are limitless. Some of today's most common AI use cases include the following:

Of course, these are just some of the more widely known use cases for AI. The technology is seeping into daily life in so many ways that we often aren't fully aware of them.

Also see: Best Machine Learning Platforms

So where is the future of AI? Clearly it is reshaping consumer and business markets.

The technology that powers AI continues to progress at a steady rate. Future advances like quantum computing may eventually enable major new innovations, but for the near term, it seems likely that the technology itself will continue along a predictable path of constant improvement.

What's less clear is how humans will adapt to AI. The technology raises questions that loom large over human life in the decades ahead.

Many early AI implementations have run into major challenges. In some cases, the data used to train models has allowed bias to infect AI systems, rendering them unusable.

In many other cases, businesses have not seen the financial results they hoped for after deploying AI. The technology may be mature, but the business processes surrounding it are not.

"The AI software market is picking up speed, but its long-term trajectory will depend on enterprises advancing their AI maturity," said Alys Woodward, senior research director at Gartner.

"Successful AI business outcomes will depend on the careful selection of use cases," Woodward added. "Use cases that deliver significant business value, yet can be scaled to reduce risk, are critical to demonstrate the impact of AI investment to business stakeholders."

Organizations are turning to approaches like AIOps to help them better manage their AI deployments. And they are increasingly looking for human-centered AI that harnesses artificial intelligence to augment rather than to replace human workers.

In a very real sense, the future of AI may be more about people than about machines.

Also see: The Future of Artificial Intelligence


Artificial Intelligence in Cyber Security: Benefits and Drawbacks. – TechGenix

AI for cybersecurity? It's everywhere else!

You can use artificial intelligence (AI) to automate complex, repetitive tasks much faster than a human. AI technology can sort complex, repetitive input logically. That's why AI is used for facial recognition and self-driving cars. But this ability also paved the way for AI cybersecurity, which is especially helpful in assessing threats in complex organizations. When business structures are continually changing, admins can't identify weaknesses using traditional methods.

Additionally, businesses are becoming more complex in network structure. This means cybercriminals have more exploits to use against you. You can see this in highly automated manufacturing 3.0 businesses or in integrated companies like those in the oil and gas industry. To this end, various security companies have developed AI cybersecurity tools to help protect businesses.

In this article, I'll delve into what AI is and how it applies to cybersecurity. You'll also learn the benefits and drawbacks of this promising technology. First, let's take a look at what AI is!

Artificial intelligence is a rationalization method built on a statistically weighted matrix. This matrix is also called a neural net. You can think of this net as a decision matrix with nodes that have a weighted bias at each filtering step. The neural net receives a database of precompiled data, which also contains the answers to the underlying question the AI is meant to solve. This is how the AI develops its bias.

For example, let's consider a database containing different images. Say it has images of a person's face and other images of watermelons, and each image carries a tag identifying which it is. As the AI learns whether it guessed correctly or not, the system increments the node weightings. This process continues until the system reaches a predefined error percentage. This is often referred to as deep learning, the decision layers creating the depth.
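To make that weight-adjustment loop concrete, here is a minimal sketch; it assumes a single-layer classifier and synthetic stand-in data rather than real images, whereas real deep-learning systems stack many such layers.

```python
# Sketch: a tiny one-layer classifier trained on labeled examples until its
# error rate drops below a predefined tolerance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))              # stand-in for extracted image features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # stand-in labels: 1 = face, 0 = watermelon

w, b, lr, tolerance = np.zeros(64), 0.0, 0.1, 0.05

for epoch in range(1000):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # weighted decision for each example
    error_rate = np.mean((p > 0.5) != y)
    if error_rate <= tolerance:             # stop once the predefined error target is met
        break
    grad = p - y                            # how wrong each guess was
    w -= lr * (X.T @ grad) / len(y)         # increment the node weightings
    b -= lr * grad.mean()

print(f"stopped after {epoch} epochs, error rate {error_rate:.2%}")
```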

Now, let's take a look at the steps used to process data.

We can condense the overall data workflow into the following process:

However, this process is slightly different with deep learning. The first step would include data from a precompiled database tagged with the correct responses. Additionally, deep learning will repeat steps 1 through 4 until it reaches a predefined error tolerance value.

Let's take a look at an example of how AI data is processed.

Let's say a picture has reached an AI node. The node will filter the data into a usable format, such as 8-bit (0-255) grayscale. Then it'll run a script to identify features. If these features match those defined by a filter, the node can make a decision. For instance, it'll say whether it found a face or a watermelon.

Then the data goes to the next node down. This node might apply a color filter to confirm the first decision. The process continues until the data reaches the last node. At that point, the AI will have made a final decision on whether it found a face or a watermelon.
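A toy sketch of this node-by-node cascade might look like the following; the features, thresholds and two-node chain are invented purely for illustration.

```python
# Toy cascade: each "node" applies a simple filter and passes its verdict down the chain.
import numpy as np

def to_grayscale(image_rgb):
    # Filter the data into a usable format: 8-bit (0-255) grayscale.
    return image_rgb.mean(axis=2).astype(np.uint8)

def shape_node(gray):
    # First node: a crude edge-density feature stands in for face/watermelon shape cues.
    edge_strength = np.abs(np.diff(gray.astype(int), axis=0)).mean()
    return "face" if edge_strength > 12 else "watermelon"

def color_node(image_rgb, previous_decision):
    # Second node: a colour filter confirms or overturns the first decision.
    green_ratio = image_rgb[..., 1].mean() / (image_rgb.mean() + 1e-9)
    return "watermelon" if green_ratio > 1.15 else previous_decision

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
decision = shape_node(to_grayscale(image))
print("final decision:", color_node(image, decision))
```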

Importantly, AI systems will always have a degree of error to them. None are infallible, and they never will be. But sometimes, the error percentages could be acceptable.

Now that you know how AI works, let's take a look at AI cybersecurity solutions.

AI cybersecurity addresses the need to automate the assessment of threats in complex environments. Specifically, here are two use cases for AI in cybersecurity:

Now that you know the two main uses of AI in cybersecurity, let's take a look at its benefits and drawbacks!

As mentioned, AI has a lot of benefits. It runs repetitive tasks to identify anomalies or to classify data across your business. That said, a few large drawbacks may offset those benefits. Here, we'll look at the drawbacks.

The first drawback is the AI cybersecurity solution's accuracy. This accuracy depends on many factors, including the neural net's size and the decisions defined for filtering. It also depends on the number of iterations used to reach the predefined error percentage.

Imagine you have a decision tree with three layers, and each layer has several nodes for each decision route. Even though this is a fairly simple matrix, it needs a lot of calculations. Your system's finite resources will constrain your solution's intelligence.

An AI cybersecurity solution provider may limit its solution's intelligence and accuracy to suit its target demographic. But sometimes the problem isn't intelligence; instead, it's latency and security vulnerabilities. When searching for an AI cybersecurity solution, consider how secure it is within your network.

Once trained, an AI's statistically weighted matrix is often not re-trained in service. You'll find this is due to the lack of processing resources available in hardware. Sometimes a system that keeps learning picks up something that makes it worse, reducing its effectiveness. Conversely, humans learn iteratively, which means they cause a lot of accidents. As a result, solution providers must ensure the software meets specification requirements throughout its use.

Cybersecurity often requires updates to counter new exploits, and it takes a lot of power to train your AI. Additionally, your AI cybersecurity vendor will need to release regular updates to address new cyber threats.

That said, the AI component of an AI cybersecurity solution is there to classify data and assess anomalies against baseline data. As a result, malware list updates don't pose an issue for it. This means you can still use AI cybersecurity.

Now that you know the benefits and drawbacks of AI cybersecurity, let's take a look at some uses for this technology!

As mentioned, highly automated businesses often have the weakest cybersecurity. Generally, automated environments overlap information technology (IT), operational technology (OT), and the Internet of Things (IoT). This is done to improve productivity, reduce the unit cost of a product, and undercut the competition.

But this also creates vulnerabilities. To this end, AI cybersecurity is great for finding potential exploits in these companies. Solutions either inform the administrator or automatically apply patches.

However, this may not be enough. Cybercriminals are currently attacking large, highly integrated companies. To do that, they exploit OT, which has no built-in security. OT was designed for wired networks that send commands to hardware like plant equipment, so it was never thought to pose a security weakness. But today, attackers use OT to access the rest of a network or take plant equipment offline.

OT risk management tools are becoming popular for the reasons mentioned above. These systems effectively take a real-time clone of the production environment. Then, they run countless simulations to find exploits.

The AI part of the system generally finds the exploits; an administrator then provides a solution. OT risk management software runs continually as manufacturing plant arrangements change to meet orders, projects, or supply demands.

In this scenario, AI systems use known malware from antivirus lists to try to find an entry route into the system. The task requires automating repetitive functions across a complex system, and this makes it perfect for AI.
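As a rough sketch of the simulation idea above (the asset names and links are invented, and real OT risk management tools are far more sophisticated), the example below models a cloned environment as a graph and checks whether any route leads from an exposed entry point to critical plant equipment.

```python
# Toy sketch: search a cloned asset graph for attack routes into plant equipment.
import networkx as nx

plant = nx.DiGraph()
plant.add_edges_from([
    ("vendor-vpn", "historian"),        # remote maintenance link
    ("historian", "scada-server"),
    ("scada-server", "plc-line-3"),     # programmable logic controller on the line
    ("office-lan", "scada-server"),
])

exposed, critical = ["vendor-vpn", "office-lan"], ["plc-line-3"]
for src in exposed:
    for dst in critical:
        if nx.has_path(plant, src, dst):
            route = " -> ".join(nx.shortest_path(plant, src, dst))
            print(f"potential attack route: {route}")
```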

So when should you implement AI cybersecurity? Let's find out.

As discussed above, businesses that use manufacturing and plant equipment should use AI cybersecurity. In most cases, you'll also need to look for an OT risk management solution to reduce the risks associated with OT.

You can also use AI cybersecurity if your business uses IoT alongside IT. This way, you can reduce the risk of exploits against the network. IoT devices are generally built to undercut competitors on price, which means adequate security measures are often left out.

Finally, you can use AI even if your company only uses IT. AI helps assess irregular traffic, so it protects your gateways. Additionally, you can leverage AI's data analytics. This way, you'll know if someone is using your hardware for malicious purposes.

Now that you have what you need to get started with AI cybersecurity, let's wrap things up!

You'll likely use AI wherever you need automated repetitive tasks. AI also helps make decisions on complex tasks. This is why many cybersecurity solution providers use AI. In fact, these providers' tools help meet the challenge of highly complex systems that have very poor security.

You can always benefit from AI cybersecurity, no matter how integrated your business technology is. AI functionality is also great for classifying data using intelligent operations. This way, you can speed up your search for malware. AI cybersecurity is also beneficial for finding abnormal use of the network.

Do you have more questions about AI cybersecurity? Check out the FAQ and Resources sections below!

An AI neural net is a statistically weighted matrix. This matrix processes input data based on decisions made at nodes with a calibrated bias. To optimize this bias, data is passed iteratively through the matrix. After each pass, the success rate is assessed and each weighting value is adjusted incrementally. This process is called deep learning.

AI intelligence refers to the AI's error tolerance and decision layers. In theory, you could have as many layers as needed to make an intelligent AI. However, training it with data to reach a tight error tolerance can be processor-intensive, and the training may take too long to be practical, making the solution ineffective.

AI is trained using data to meet a predefined error tolerance level. For instance, a self-driving car might be designed to last 1,000,000 miles; in this case, the car's service life determines the AI error tolerance, and the AI's decision-making must likely be 99.99% correct to meet that service life.

Operational technology (OT) risk assessment software assesses the security risks of plant equipment. Plants, integrated oil supply chains, and manufacturing 3.0 or above operations are prime targets for attacks. AI cybersecurity can help assess threats using a clone of the production system. This helps check routes from OT systems to the rest of the network.

Yes, AI cybersecurity works in real time. This helps detect weaknesses in your network or cyber threats. For example, you can find weaknesses by assessing traffic data through gateways and other hardware. You can also use AI as a centralized OT risk assessment tool. This will let you assess the network structure for threats.

Learn about the different types of malware your AI cybersecurity solution will have to deal with.

Find out more about AI cybersecurity.

Discover more about AI and deep learning.

Understand how you can protect your organization by following GRC.

Learn how you can make your OPSEC better.


A technique to improve both fairness and accuracy in artificial intelligence – MIT News

For workers who use machine-learning models to help them make decisions, knowing when to trust a model's predictions is not always an easy task, especially since these models are often so complex that their inner workings remain a mystery.

Users sometimes employ a technique, known as selective regression, in which the model estimates its confidence level for each prediction and will reject predictions when its confidence is too low. Then a human can examine those cases, gather additional information, and make a decision about each one manually.

But while selective regression has been shown to improve the overall performance of a model, researchers at MIT and the MIT-IBM Watson AI Lab have discovered that the technique can have the opposite effect for underrepresented groups of people in a dataset. As the model's confidence increases with selective regression, its chance of making the right prediction also increases, but this does not always happen for all subgroups.

For instance, a model suggesting loan approvals might make fewer errors on average, but it may actually make more wrong predictions for Black or female applicants. One reason this can occur is that the model's confidence measure is trained using overrepresented groups and may not be accurate for underrepresented groups.

Once they had identified this problem, the MIT researchers developed two algorithms that can remedy the issue. Using real-world datasets, they show that the algorithms reduce performance disparities that had affected marginalized subgroups.

"Ultimately, this is about being more intelligent about which samples you hand off to a human to deal with. Rather than just minimizing some broad error rate for the model, we want to make sure the error rate across groups is taken into account in a smart way," says senior MIT author Greg Wornell, the Sumitomo Professor in Engineering in the Department of Electrical Engineering and Computer Science (EECS), who leads the Signals, Information, and Algorithms Laboratory in the Research Laboratory of Electronics (RLE) and is a member of the MIT-IBM Watson AI Lab.

Joining Wornell on the paper are co-lead authors Abhin Shah, an EECS graduate student, and Yuheng Bu, a postdoc in RLE; as well as Joshua Ka-Wing Lee SM '17, ScD '21, and Subhro Das, Rameswar Panda, and Prasanna Sattigeri, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented this month at the International Conference on Machine Learning.

To predict or not to predict

Regression is a technique that estimates the relationship between a dependent variable and independent variables. In machine learning, regression analysis is commonly used for prediction tasks, such as predicting the price of a home given its features (number of bedrooms, square footage, etc.). With selective regression, the machine-learning model can make one of two choices for each input: it can make a prediction, or it can abstain if it doesn't have enough confidence in its decision.

When the model abstains, it reduces the fraction of samples on which it makes predictions, which is known as coverage. By only making predictions on inputs it is highly confident about, the model should improve its overall performance. But this can also amplify biases that exist in a dataset, which occur when the model does not have sufficient data from certain subgroups. This can lead to errors or bad predictions for underrepresented individuals.
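A minimal sketch of selective regression might look like the following; it uses the spread across a random forest's trees as the confidence signal, which is one common heuristic and not necessarily the method used in the paper.

```python
# Sketch: abstain on the least-confident inputs (lower coverage), hand those to a human.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 5))
y = X[:, 0] ** 2 + rng.normal(scale=0.3, size=2000)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:1500], y[:1500])
X_test, y_test = X[1500:], y[1500:]

# Disagreement among the ensemble's trees serves as an uncertainty estimate.
per_tree = np.stack([tree.predict(X_test) for tree in model.estimators_])
uncertainty = per_tree.std(axis=0)

threshold = np.quantile(uncertainty, 0.8)      # abstain on the 20% least certain inputs
accepted = uncertainty <= threshold
coverage = accepted.mean()
error = np.abs(model.predict(X_test)[accepted] - y_test[accepted]).mean()
print(f"coverage: {coverage:.0%}, mean error on accepted predictions: {error:.3f}")
```

Everything the model abstains on would be handed to a human reviewer, as described above.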

The MIT researchers aimed to ensure that, as the overall error rate for the model improves with selective regression, the performance for every subgroup also improves. They call this monotonic selective risk.

"It was challenging to come up with the right notion of fairness for this particular problem. But by enforcing this criteria, monotonic selective risk, we can make sure the model performance is actually getting better across all subgroups when you reduce the coverage," says Shah.

Focus on fairness

The team developed two neural network algorithms that impose this fairness criteria to solve the problem.

One algorithm guarantees that the features the model uses to make predictions contain all information about the sensitive attributes in the dataset, such as race and sex, that is relevant to the target variable of interest. Sensitive attributes are features that may not be used for decisions, often due to laws or organizational policies. The second algorithm employs a calibration technique to ensure the model makes the same prediction for an input, regardless of whether any sensitive attributes are added to that input.

The researchers tested these algorithms by applying them to real-world datasets that could be used in high-stakes decision making. One, an insurance dataset, is used to predict total annual medical expenses charged to patients using demographic statistics; another, a crime dataset, is used to predict the number of violent crimes in communities using socioeconomic information. Both datasets contain sensitive attributes for individuals.

When they implemented their algorithms on top of a standard machine-learning method for selective regression, they were able to reduce disparities by achieving lower error rates for the minority subgroups in each dataset. Moreover, this was accomplished without significantly impacting the overall error rate.

"We see that if we don't impose certain constraints, in cases where the model is really confident, it could actually be making more errors, which could be very costly in some applications, like health care. So if we reverse the trend and make it more intuitive, we will catch a lot of these errors. A major goal of this work is to avoid errors going silently undetected," Sattigeri says.

The researchers plan to apply their solutions to other applications, such as predicting house prices, student GPA, or loan interest rate, to see if the algorithms need to be calibrated for those tasks, says Shah. They also want to explore techniques that use less sensitive information during the model training process to avoid privacy issues.

And they hope to improve the confidence estimates in selective regression to prevent situations where the model's confidence is low but its prediction is correct. This could reduce the workload on humans and further streamline the decision-making process, Sattigeri says.

This research was funded, in part, by the MIT-IBM Watson AI Lab and its member companies Boston Scientific, Samsung, and Wells Fargo, and by the National Science Foundation.


What is artificial intelligence and how can it help your DevOps practices today? – TechRadar

By combining the roles of software development and IT operations, DevOps often encompasses so many tools and skills that too many of us get stuck working in a complex and time-consuming environment. Time that could be spent solving problems gets wasted on mundane tasks. By using artificial intelligence (AI), DevOps teams can automate complicated tasks that are easy for computers but hard (or boring) for humans. AI can also help streamline processes across the software development lifecycle (SDLC), allowing DevOps to focus on the work itself. In this guide, we cover how AI can be used throughout the DevOps cycle to improve productivity and security.

AI is the theory and method of creating computer systems that can automatically perform tasks normally requiring human, or greater, levels of intelligence. Computers rely on complex algorithms to perform these tasks, sometimes through explicit rules being provided to them, but more commonly through what is known as machine learning (ML).

ML is a subset of AI, where the system uses statistical methods to learn without explicit directions from a human operator. This requires sets of training data to teach the system desired outcomes. From there, ML can infer things about new data.
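A minimal sketch of this train-then-infer loop, using scikit-learn with an illustrative dataset and model choice:

```python
# Sketch: the model learns from labeled examples rather than explicit rules,
# then makes predictions about data it has not seen before.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                        # training data with known answers
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)   # learn desired outcomes
print("accuracy on unseen data:", model.score(X_new, y_new))      # infer about new data
```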

You may be using AI in your DevOps practices already; for example, tools that make automatic code suggestions when writing software. The range and scope of DevOps tools available are growing, and AI is predicted to be a big part of automation in the coming year.

Using automation to manage processes is a vital part of the DevOps approach. However, without AI, automation only executes actions based on explicit instructions provided by a person. AI uses a broader ruleset and a capacity to learn to improve performance over time. This allows AI, and most commonly ML, to automatically perform complex tasks and eliminate the need for human intervention. These tasks include:

In each of these cases, AI is used to perform actions automatically that can reduce the workload of DevOps. This is great in theory, but how does it shape up in practice?

The most mature AI uses in DevOps are in applications that help programmers write code more effectively, those which manage monitoring and alerting, and those concerned with cybersecurity.

GitHub Copilot and Amazon CodeWhisperer are ML-powered tools that make relevant code suggestions to speed up programming. Both GitHub and Amazon integrate additional tools for testing within their environments.

Not every notification is important, and PagerDuty is an incident response platform that uses ML to minimize interruptions by improving the signal-to-noise ratio of important events to routine ones. As a basic example, instead of receiving alerts each time a service has successfully shut down and restarted, you might only receive an email if the service fails to restart. PagerDuty claims to provide up to 98% alert reduction.
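As a toy illustration of that restart example (the event format and rule here are invented, not PagerDuty's actual engine), routine restart-success events can be filtered out so that only failures raise an alert.

```python
# Toy noise filter: suppress routine restart events, alert only on failed restarts.
events = [
    {"service": "billing-api", "type": "shutdown"},
    {"service": "billing-api", "type": "restart", "ok": True},   # routine: no alert
    {"service": "auth-api", "type": "shutdown"},
    {"service": "auth-api", "type": "restart", "ok": False},     # failure: alert
]

alerts = [e for e in events if e["type"] == "restart" and not e.get("ok", False)]
for alert in alerts:
    print(f"ALERT: {alert['service']} failed to restart")
```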

Both Fortinet and Perimeter 81 provide high-performance network security tools that leverage AI. Fortinet provides resources for DevOps professionals, including GitHub repositories of tools and scripts to make setup and management of the software easier.

For those managing larger numbers of microservices or containers, especially in a multi-cloud or hybrid-cloud environment, Dynatrace uses AI to map, simplify, and manage the DevOps processes and delivery pipeline.

If you're interested in adding AI to your DevOps workflow, you probably already recognize many of the best DevOps tools available. Many of the tools you already use are either adopting AI or, like Selenium, offer additional software or plugins that allow AI to be integrated into them. Keeping on top of developments with the software you already use, and searching for tools that integrate with it, is a great way to get started with AI.

The DevOps culture has become an integral part of software development precisely because it allows projects to scale easily: spinning up 1,000 web servers is now as simple as creating one. Artificial intelligence takes the process one step further, allowing ever more complicated tasks to be left under the control of computer systems that learn and improve.

By using tools such as GitHub Copilot, Fortinet, and PagerDuty, DevOps professionals can harness the power of AI to produce a more efficient and secure SDLC. While there are many myths around DevOps trends, there is no doubt AI will continue to transform DevOps practices over the next couple of years.

TechRadar created this content as part of a paid partnership with PagerDuty. The contents of this article are entirely independent and solely reflect the editorial opinion of TechRadar.


The ADF could be doing much more with artificial intelligence | The Strategist – The Strategist

Artificial intelligence is a general-purpose technology that is steadily becoming pervasive across global society. AI is now beginning to interest the world's defence forces, but the military comes late to the game. Given this, defence forces globally are fundamentally uncertain about AI's place in warfighting. Accordingly, there's considerable experimentation in defence AI underway worldwide.

This process is being explored in a new series sponsored by the Defense AI Observatory at the Helmut Schmidt University/University of the Federal Armed Forces in Germany. Unlike other defence AI studies, the series is not focusing solely on technology but instead is looking more broadly across what the Australian Defence Force terms the fundamental inputs to capability. The first study examines Australian defence AI, and another 17 country studies have already been commissioned.

The ADF conceives of AI as mainly being used in human-machine teams to improve efficiency, increase combat power and achieve decision superiority, while lowering the risk to personnel. For a middle power, Australia is following a fairly active AI development program with a well-defined innovation pathway and numerous experimentation projects underway.

There is also a reasonable level of force structure ambition. The latest major equipment acquisition plan, covering the next 10 to 20 years, sets out six defence AI-relevant projects: one navy, one army, three air force and one in the information and cyber domain. Even in this decade, the AI-related projects are quite substantial; they include teaming air vehicles (with an estimated cost of $9.1 billion), an integrated undersea surveillance system ($6.2 billion), a joint air battle management system ($2.3 billion) and a distributed ground station ($1.5 billion).

Associated with this investment is a high expectation that Australian AI companies will have considerable involvement in the projects. Indeed, the government recently added AI to its set of priorities for sovereign industrial capability. The Australian defence AI sector, though, consists mainly of small and medium-sized companies that individually lack the scale to undertake major equipment projects and would need to partner with large prime contractors to achieve the requisite industrial heft.

There are also wider national concerns about whether Australia will have a large enough AI workforce over the next decade to handle commercial demands, even without Defence drawing people away for its requirements. Both factors suggest Defence could end up buying its AI offshore and relying principally on long-term foreign support, as it does for many other major equipment projects.

An alternative might be funding collaborative AI developments with the US. A harbinger of this may be the Royal Australian Navy's new experimentation program involving a recently decommissioned patrol boat being fitted with Austal-developed autonomous vessel technology featuring AI. Austal is simultaneously involved in a much larger US Navy program fitting its system to one of the company's expeditionary fast transport ships, USNS Apalachicola, currently being built. In this case, Austal is an Australian company with a large US footprint and so can work collaboratively in both countries. The RAN, simply because of economies of scale, is probably more likely to adopt the US Navy variant rather than a uniquely Australian version.

The outlier to this acquisition strategy might be the Boeing Australia Ghost Bat program that could see AI-enabled 'loyal wingman' uncrewed air vehicles in limited ADF service in 2024–25, before the US. The US Air Force is running several experimentation programs aiming to develop suitable technologies, some of which also involve the Boeing parent company. There's a high likelihood of cross-fertilisation between the Australian and US programs. This raises the tantalising possibility of a two-nation support system of a scale that would allow the Australian companies involved to grow to a size suitable for long-term sustainment of the relevant ADF AI capabilities. This possibility might be a one-off, however, as there seem to be no other significant Australian defence AI programs.

Australia collaborating with the US on AI, or buying US AI products, can ensure interoperability. But in seeking that objective there's always a tension between each Australian service being interoperable with its US counterpart and the services being interoperable with one another across the ADF. This tension is likely to remain as AI enters service, especially given its demands for task-related big data.

Interoperability and domestic industry support are traditionally important issues, but they may need to be counterbalanced by emerging geostrategic uncertainties and ADF capability considerations. Australia is worried about the possibility of conflict in the Indo-Pacific region, given Chinese assertiveness coupled with the example of Russia's invasion of Ukraine. To offset the numerically large military forces of the more bellicose Indo-Pacific states, some advocate developing a higher quality, technologically superior ADF that can help deter regional adventurism.

As a general-purpose technology, AI can potentially provide a boost across the whole ADF, not just one or two elements within it. But such a vision is not what is being pursued. Defence's current AI plans will most likely lead to evolutionary improvements, not revolutionary changes. AI is envisaged as being used to enhance, augment or replace existing capability; this approach means the future ADF will do things better, but it won't necessarily be able to do better things.

A revolution in Australian military affairs seems unlikely under current schemes. For that, defence AI would need to be reconceptualised as a disruptive technology rather than a sustaining innovation. Embracing disruptive innovation would be intellectually demanding and, in suggesting the adoption of unproven force structures, could involve taking strategic risks. These are reasonable concerns that would need careful management.

Against such worries, though, China looms large. The strategically intelligent choice for the ADF might be embracing disruptive AI.

Go here to read the rest:
The ADF could be doing much more with artificial intelligence - The Strategist

Why is the US following the EU's lead on artificial intelligence regulation? – The Hill

In the intensifying race for global competitiveness in artificial intelligence (AI), the United States, China and the European Union are vying to be the home of what could be the most important technological revolution of our lifetimes. AI governance proposals are also developing rapidly, with the EU proposing an aggressive regulatory approach to add to its already-onerous regulatory regime.

It would be imprudent, however, for the U.S. to adopt Europe's more top-down regulatory model, which has already decimated digital technology innovation and now threatens to do the same for AI. The key to competitive advantage in AI will be openness to entrepreneurialism, investment and talent, plus a flexible governance framework to address risks.

The International Economy journal recently asked 11 experts from Europe and the U.S. where the EU currently stood in global tech competition. Responses were nearly unanimous and bluntly summarized by the symposium's title: 'The Biggest Loser.' Respondents said Europe is lagging behind in the global tech race and unlikely to become a global hub of innovation. 'The future will not be invented in Europe,' another analyst concluded.

This bleak assessment is due to the EU's risk-averse culture and preference for paperwork compliance over entrepreneurial freedom. After the continent piled on layers of data restrictions beginning in the mid-1990s, innovation and investment suffered. Regulation grew more complex with the 2018 General Data Protection Regulation (GDPR), which further limits data collection and use.

As a result of all the red tape, the EU came away from the digital revolution with the complete absence of superstar companies. There are no serious European versions of Microsoft, Google, Facebook, Apple or Amazon. Europe's leading providers of digital technology services today are American-based companies.

Europe's regulatory burdens hit small and mid-sized firms hardest. Two recent studies have documented how GDPR has come at substantial cost in foregone innovation and has resulted in more concentrated market structures, entrenching the market power of those who are already strong.

The same situation is already unfolding in AI markets. Center for Data Innovation analyst Benjamin Mueller notes that just five of the 100 most promising AI startups are based in Europe, while private funding of AI startups in Europe for 2020 ($4 billion) was dwarfed by that in the U.S. ($36 billion) and China ($25 billion).

Yet European officials are doubling down on their onerous data-control regime with a variety of new laws, including the Digital Markets Act and the Digital Services Act, which are mostly meant to hobble large U.S. tech companies.

Next up is a new Artificial Intelligence Act, which proposes banning some AI technologies while classifying many others under a heavily controlled 'high-risk' category. A new European Artificial Intelligence Board will enforce a bureaucratic system of conformity assessments and impose steep fines for violations. An appendix to the AI Act contains a lengthy list of covered sectors and technologies, which the law envisions expanding in coming years. Analysts have labelled the measure 'the mother of all AI laws' and noted how compliance with the law will impose formidable barriers to AI innovation in many sectors, scaring away investors and talent in the process.

The EU's approach will make it particularly difficult for startups to develop groundbreaking AI services. The largest network of small and medium-sized enterprises (SMEs) in the European information sector, the European DIGITAL SME Alliance, says the AI Act's mandates will put a burden on AI innovation and will likely push SMEs out of the market.

The EU itself says that just the requirement to set up the quality management systems mandated by the law will cost roughly $193,000–$330,000 upfront plus $71,400 in yearly maintenance costs. Smaller operators will struggle to absorb these burdens and other compliance requirements. 'This is exactly the opposite of the intention to support a thriving and innovative AI ecosystem in Europe,' concludes the European DIGITAL SME Alliance.

While it is true that the EU has emerged as the world's most powerful, and most aggressive, tech regulator, and now seeks to become, in the words of a headline in The Economist, the world's super-regulator in AI, it's the same strategy they've promoted for two decades without much to show for it. If the EU succeeds in its quest to eliminate all theoretical AI risks, it will only be because it will have eliminated most of its AI innovators through complex and costly compliance mandates. And if Europe's leading export is regulation instead of useful AI products, it is hard to see how that benefits the continent's citizens in the long run.

Regardless, it shouldn't be the model the U.S. follows if it hopes to maintain its early lead in AI and robotics. America should instead welcome European companies, workers and investors looking for a more hospitable place to launch bold new AI innovations.

Adam Thierer is a senior research fellow at the Mercatus Center at George Mason University.

See more here:
Why is the US following the EU's lead on artificial intelligence regulation? - The Hill