Spyware and surveillance: Threats to privacy and human rights growing, UN report warns – OHCHR

GENEVA (16 September 2022) – People's right to privacy is coming under ever greater pressure from the use of modern networked digital technologies whose features make them formidable tools for surveillance, control and oppression, a new UN report has warned. This makes it all the more essential that these technologies are reined in by effective regulation based on international human rights law and standards.

The report – the latest on privacy in the digital age by the UN Human Rights Office* – looks at three key areas: the abuse of intrusive hacking tools (spyware) by State authorities; the key role of robust encryption methods in protecting human rights online; and the impacts of widespread digital monitoring of public spaces, both offline and online.

The report details how surveillance tools such as the Pegasus software can turn most smartphones into 24-hour surveillance devices, allowing the "intruder" not only access to everything on our mobiles but also the ability to weaponize them to spy on our lives.

While purportedly being deployed for combating terrorism and crime, such spyware tools have often been used for illegitimate reasons, including to clamp down on critical or dissenting views and on those who express them, including journalists, opposition political figures and human rights defenders, the report states.

Urgent steps are needed to address the spread of spyware, the report flags, reiterating the call for a moratorium on the use and sale of hacking tools until adequate safeguards to protect human rights are in place. Authorities should only electronically intrude on a personal device as a last resort to prevent or investigate a specific act amounting to a serious threat to national security or a specific serious crime, it says.

Encryption is a key enabler of privacy and human rights in the digital space, yet it is being undermined. The report calls on States to avoid taking steps that could weaken encryption, including mandating so-called "backdoors" that give access to people's encrypted data or employing systematic screening of people's devices, known as client-side scanning.

The report also raises the alarm about the growing surveillance of public spaces. Previous practical limitations on the scope of surveillance have been swept away by large-scale automated collection and analysis of data, as well as new digitized identity systems and extensive biometric databases that greatly facilitate the breadth of such surveillance measures.

New technologies have also enabled the systematic monitoring of what people are saying online, including through collecting and analysing social media posts.

Governments often fail to adequately inform the public about their surveillance activities, and even where surveillance tools are initially rolled out for legitimate goals, they can easily be repurposed, often serving ends for which they were not originally intended.

The report emphasises that States should limit public surveillance measures to those strictly necessary and proportionate, focused on specific locations and times. The duration of data storage should similarly be limited. There is also an immediate need to restrict the use of biometric recognition systems in public spaces.

All States should also act immediately to put in place robust export control regimes for surveillance technologies that pose serious risks to human rights. They should also ensure human rights impact assessments are carried out that take into account what the technologies in question are capable of, as well as the situation in the recipient country.

"Digital technologies bring enormous benefits to societies. But pervasive surveillance comes at a high cost, undermining rights and choking the development of vibrant, pluralistic democracies," said Acting High Commissioner for Human Rights Nada Al-Nashif.

"In short, the right to privacy is more at risk than ever before," she stressed. "This is why action is needed, and needed now."

See the original post:
Spyware and surveillance: Threats to privacy and human rights growing, UN report warns - OHCHR

Microchip Unveils Industry's First Terabit-Scale Secure Ethernet PHY Family with Port Aggregation for Enterprise and Cloud Interconnect – Yahoo Finance

Microchip Technology Inc.

META-DX2+ enables OEMs to double router and switch system capacities with 112G PAM4 connectivity for 800G ports, adds encryption and Class C/D precision timing

CHANDLER, Ariz., Sept. 19, 2022 (GLOBE NEWSWIRE) -- The demand for increased bandwidth and security in network infrastructure, driven by growth in hybrid work and geographical distribution of networks, is redefining borderless networking. Led by AI/ML applications, the total port bandwidth for 400G (gigabits per second) and 800G is forecast to grow at an annual rate of over 50%, according to 650 Group. This dramatic growth is expanding the transition to 112G PAM4 connectivity beyond just cloud data center and telecom service provider switches and routers to enterprise Ethernet switching platforms. Microchip Technology Inc. (NASDAQ: MCHP) is responding to this market inflection with the META-DX2 Ethernet PHY (physical layer) portfolio by introducing a new family of META-DX2+ PHYs. These are the industry's first solution set to integrate 1.6T (terabits per second) of line-rate end-to-end encryption and port aggregation to maintain the most compact footprint in the transition to 112G PAM4 connectivity for enterprise Ethernet switches, security appliances, cloud interconnect routers and optical transport systems.

"The introduction of four new META-DX2+ Ethernet PHYs demonstrates our commitment to supporting the industry transition to 112G PAM4 connectivity powered by our META-DX retimer and PHY portfolio. In conjunction with our META-DX2L retimer, we now offer a complete chipset for all connectivity needs, from retiming and gearboxing to advanced PHY functionality," said Babak Samimi, corporate vice president of Microchip's communications business unit. "By offering both hardware and software footprint compatibility, our customers can leverage architectural designs across their enterprise, data center, and service provider switching and routing systems that can offer pay-as-you-need enablement of advanced features, including end-to-end security, multi-rate port aggregation, and precision timestamping via a software subscription model."

The META-DX2+'s configurable 1.6T datapath architecture outperforms its nearest competitors by 2x in total gearbox capacity and in hitless 2:1 protection switch mux modes, enabled by its unique ShiftIO capability. The flexible XpandIO port aggregation capabilities optimize router/switch port utilization when supporting low-rate traffic. Also, the devices include IEEE 1588 Class C/D Precision Time Protocol (PTP) support for accurate nanosecond timestamping required for 5G and enterprise business-critical services. By offering a portfolio of footprint-compatible retimers and advanced PHYs with encryption options, Microchip enables developers to expand their designs to add MACsec and IPsec based on a common board design and Software Development Kit (SDK).

META-DX2+ differentiated capabilities include:

Dual 800 GbE, quad 400 GbE and 16x 100/50/25/10/1 GbE MAC/PHY

Integrated 1.6T MACsec/IPsec engines that offload encryption from packet processors so systems can more easily scale up to higher bandwidths with end-to-end security

Greater than 20% board savings compared to competing solutions that require two devices to deliver the same 1.6T gearbox and hitless 2:1 mux modes

XpandIO enables port aggregation of low-rate Ethernet clients over higher speed Ethernet interfaces, optimized for enterprise platforms

ShiftIO feature combined with a highly configurable integrated crosspoint enables flexible connectivity between external switches, processors, and optics

Device variants with 48 or 32 Long Reach (LR) capable 112G PAM4 SerDes including programmability to optimize power vs. performance

Support for Ethernet, OTN, Fibre Channel and proprietary data rates for AI/ML applications

"As the industry transitions to a 112G PAM4 serial ecosystem for high-density routers and switches, line-rate encryption and efficient use of port capacity become increasingly important," said Alan Weckel, founder and technology analyst at 650 Group, LLC. "Microchip's META-DX2+ family will play an important role in enabling MACsec and IPsec encryption, optimizing port capacity with port aggregation, and flexibly connecting routing/switching silicon to multi-rate 400G and 800G optics."

Like the META-DX2L retimer, the new series of META-DX2+ PHYs can be used with Microchip's PolarFire FPGAs, the ZL30632 high-performance PLL, oscillators, voltage regulators, and other components that have been pre-validated as a system to help speed designs into production.

Development Tools

Microchip's second-generation Ethernet PHY SDK for the META-DX2 family lowers development costs with field-proven API libraries and firmware. The SDK supports all META-DX2L and META-DX2+ PHY devices within the product family. Support for the Open Compute Project (OCP) Switch Abstraction Interface (SAI) PHY extensions is included to enable agnostic support of the META-DX2 PHYs within a wide range of Network Operating Systems (NOS) that support SAI.

Availability

The META-DX2+ family is expected to sample during the fourth calendar quarter of 2022. For additional information visit the META-DX2+ webpage or contact a Microchip sales representative.

See the META-DX2L Ethernet PHY at ECOC 2022

Microchip will be exhibiting the META-DX2L PHY device, which started sampling in the fourth quarter of 2021, in the Optical Internetworking Forum (OIF) booth at the European Conference on Optical Communication (ECOC) September 18-22, 2022, in Basel, Switzerland. Microchip and other OIF members will be showcasing how multi-vendor interoperability is accelerating industry solutions for the global network in booth #701 at the Congress Center Basel.

Resources

High-res images available through Flickr or editorial contact (feel free to publish): Application image: http://www.flickr.com/photos/microchiptechnology/52336953308/sizes/l/

About Microchip Technology

Microchip Technology Inc. is a leading provider of smart, connected and secure embedded control solutions. Its easy-to-use development tools and comprehensive product portfolio enable customers to create optimal designs which reduce risk while lowering total system cost and time to market. The company's solutions serve more than 120,000 customers across the industrial, automotive, consumer, aerospace and defense, communications and computing markets. Headquartered in Chandler, Arizona, Microchip offers outstanding technical support along with dependable delivery and quality. For more information, visit the Microchip website at http://www.microchip.com.

Note: The Microchip name and logo and the Microchip logo are registered trademarks of Microchip Technology Incorporated in the U.S.A. and other countries. All other trademarks mentioned herein are the property of their respective companies.

Read more:
Microchip Unveils Industry's First Terabit-Scale Secure Ethernet PHY Family with Port Aggregation for Enterprise and Cloud Interconnect - Yahoo Finance

Empress EMS Announces Data Breach Leaking the Sensitive Information of 318,558 People – JD Supra

On September 9, 2022, Empress EMS reported a data breach to the U.S. Department of Health and Human Services Office for Civil Rights after the company was the victim of what appears to have been a ransomware attack. According to Empress EMS, the breach resulted in the names, Social Security numbers, dates of service and insurance information of 318,558 patients being compromised. Recently, Empress EMS sent out data breach letters to all affected parties, informing them of the incident and what they can do to protect themselves from identity theft and other frauds.

News of the Empress EMS breach comes from the company's official filing with the U.S. Department of Health and Human Services Office for Civil Rights as well as a notice posted on the company's website. According to these sources, on July 14, 2022, Empress EMS detected a network security incident, apparently when some or all of the company's computer system was encrypted. In response, the company reported the incident to law enforcement, secured its systems, and began working with third-party data security experts to conduct an investigation.

The company's investigation confirmed that an unauthorized party first gained access to the Empress EMS system on May 26, 2022, and subsequently copied files from the network on July 13, 2022.

Upon discovering that sensitive consumer data was accessible to an unauthorized party, Empress EMS reviewed the affected files to determine what information was compromised and which consumers were impacted. While the breached information varies depending on the individual, it may include your name, the date you received service from Empress EMS, your Social Security number, and your insurance information.

On September 9, 2022, Empress EMS sent out data breach letters to all individuals whose information was compromised as a result of the recent data security incident. According to the U.S. Department of Health and Human Services Office for Civil Rights, these letters were sent out to 318,558 people. Empress EMS is offering all people impacted by the breach free credit monitoring and is recommending they review their healthcare statements for accuracy and contact their provider if they see services they did not receive.

Founded in 1985, Empress EMS is an ambulance services company based in Yonkers, New York. The company provides 911 emergency medical response transportation to Yonkers and neighboring communities. Additionally, Empress EMS has emergency and non-emergency response contracts throughout Westchester County with districts, hospitals, correctional institutions and private care facilities. Empress EMS employs more than 204 people and generates approximately $17 million in annual revenue.

The Empress EMS filing with the U.S. Department of Health and Human Services Office for Civil Rights did not get into too much detail about the nature of the breach. However, the company provided some additional information in a letter posted on the Empress EMS website. There, the company noted that the data breach was caused by a network incident "resulting in the encryption of some of our systems."

Encryption is a process that encodes files, making them inaccessible to anyone without the encryption key (often derived from a password). People encrypt files every day to protect sensitive data from unauthorized access. However, cybercriminals also use encryption when carrying out certain types of cyberattacks, usually ransomware attacks.
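
To make the lock-and-key mechanic concrete, here is a minimal sketch of legitimate symmetric encryption in Python, using the `cryptography` package's Fernet recipe (an illustrative choice; the article names no tool):

```python
# A minimal sketch of key-based encryption using the `cryptography`
# package's Fernet recipe (illustrative; requires `pip install cryptography`).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the secret that gates access to the data
cipher = Fernet(key)

plaintext = b"patient name, date of service, insurance details"
token = cipher.encrypt(plaintext)  # unreadable ciphertext without the key

# Only a holder of `key` can reverse the process.
assert cipher.decrypt(token) == plaintext
```

In a ransomware scenario, the attacker holds the key and the victim holds only the ciphertext, which is what makes the files inaccessible.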

A ransomware attack is a type of cyberattack that occurs when a hacker or other bad actor installs malware on a company's computer network. Hackers frequently do this by sending a phishing email to an employee in hopes of getting them to click on a malicious link. Once the employee clicks on the link, it downloads the malware onto their computer. The malware then encrypts the files on the computer and may infect other parts of the network. The hackers then send management a message, demanding it pay a ransom to regain access to its network. Once the company pays the ransom, the hackers decrypt the files, ending the attack, at least from the company's perspective.

However, more recently hackers have started to threaten to publish any stolen data if a company refuses to pay the ransom. Once the data is on the dark web, cybercriminals can bid on it and use it to commit identity theft and other frauds. Of course, while companies that are targeted in a ransomware attack are victims in some sense, the real victims of these attacks are the consumers whose information ends up in the hands of those looking to commit fraud.

So, while Empress EMS did not mention the words "ransomware attack" in its communications, because we know the incident involved the encryption of the company's systems, there is a good chance it was a ransomware attack.

Companies not only have the resources to pay an occasional ransom, but they also have the ability (and responsibility) to implement strong data security systems designed to prevent these attacks in the first place. Victims of a data breach who would like to learn how to reduce the risk of identity theft or learn about their options to hold the company that leaked their information accountable should contact a data breach lawyer as soon as possible.

If you are one of the more than 318,000 people who were affected by the Empress EMS data breach, it is imperative that you understand what is at stake and how you can mitigate these risks. If you or a loved one received services from Empress EMS and have not yet received a letter, you can review a copy of the letter here.

Read the original here:
Empress EMS Announces Data Breach Leaking the Sensitive Information of 318,558 People - JD Supra

The Week in Ransomware – September 16th 2022 – Iranian Sanctions – BleepingComputer

It has been a fairly quiet week on the ransomware front, with the biggest news being US sanctions on Iranians linked to ransomware attacks.

On Wednesday, the US Treasury Department announced sanctions against Iranians affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC) for breaching US networks and encrypting devices with DiskCryptor and BitLocker.

Researchers also released some interesting reports this week:

In ransomware attack-related news, the Yanluowang ransomware gang began leaking data stolen during a cyberattack on Cisco and the Hive ransomware claimed an attack on Bell Technical Solutions (BTS).

Contributors and those who provided new ransomware information and stories this week include: @jorntvdw, @demonslay335, @serghei, @malwareforme, @malwrhunterteam, @BleepinComputer, @LawrenceAbrams, @Seifreed, @DanielGallagher, @VK_Intel, @FourOctets, @billtoulas, @struppigel, @PolarToffee, @fwosar, @Ionut_Ilascu, @Bitdefender, @AlvieriD, @AWNetworks, @LabsSentinel, @pcrisk, @CISAgov, @security_score, @censysio, and @juanbrodersen.

A growing number of ransomware groups are adopting a new tactic that helps them encrypt their victims' systems faster while reducing the chances of being detected and stopped.

But recently, Censys has observed a massive uptick in Deadbolt-infected QNAP devices. The Deadbolt crew is ramping up their operations, and the victim count is growing daily.

Cisco has confirmed that the data leaked yesterday by the Yanluowang ransomware gang was stolen from the company network during a cyberattack in May.

The Lorenz ransomware gang now uses a critical vulnerability in Mitel MiVoice VOIP appliances to breach enterprises, using their phone systems for initial access to their corporate networks.

PCrisk found new STOP ransomware variants that append the .eemv and .eewt extensions to encrypted files.

PCrisk found the new Scam Ransomware that appends the .scam extension to encrypted files and drops a ransom note named read_it.txt.

PCrisk found the new Babuk ransomware variant that appends the .demon extension to encrypted files and drops a ransom note named How To Recover Your Files.txt.

The Treasury Department's Office of Foreign Assets Control (OFAC) announced sanctions today against ten individuals and two entities affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC) for their involvement in ransomware attacks.

The Legislature of the City of Buenos Aires is slowly recovering from the cyberattack it suffered last Sunday: after changing passwords and disconnecting infected computers, it re-enabled WiFi, recovered one computer per area, and continued with parliamentary work. However, officials have not disclosed what information was compromised or what type of attack it was.

This advisory updates the joint CSA "Iranian Government-Sponsored APT Cyber Actors Exploiting Microsoft Exchange and Fortinet Vulnerabilities in Furtherance of Malicious Activities," which provides information on these Iranian government-sponsored APT actors exploiting known Fortinet and Microsoft Exchange vulnerabilities to gain initial access to a broad range of targeted entities in furtherance of malicious activities, including ransom operations. The authoring agencies now judge these actors are an APT group affiliated with the IRGC.

PCrisk found a new Dharma ransomware variant that appends the .gnik extension to encrypted files.

PCrisk found a new STOP ransomware variant that appends the .eeyu extension to encrypted files.

PCrisk found a new Snatch ransomware variant that appends the .winxvykljw extension to encrypted files.

The Hive ransomware gang claimed responsibility for an attack that hit the systems of Bell Canada subsidiary Bell Technical Solutions (BTS).

Quantum ransomware, a rebrand of the MountLocker ransomware, was discovered in August 2021. The malware stops a list of processes and services, and can encrypt the machines found in the Windows domain or the local network, as well as the network shared resources. It logs all of its activities in a file called .log and computes a Client Id that is the XOR-encryption of the computer name.

PCrisk found a new STOP ransomware variant that appends the .eebn extension to encrypted files.

PCrisk found the BISAMWARE Ransomware that appends the .BISAMWARE extension and drops a ransom note named SYSTEM=RANSOMWARE=INFECTED.TXT.

Romanian cybersecurity firm Bitdefender has released a free decryptor to help LockerGoga ransomware victims recover their files without paying a ransom.

The rest is here:
The Week in Ransomware - September 16th 2022 - Iranian Sanctions - BleepingComputer

Wanted: artificial intelligence (AI) and machine learning to help humans and computers work together – Military & Aerospace Electronics

ARLINGTON, Va. – U.S. military researchers are asking industry to develop computers able not only to analyze large amounts of data automatically, but also communicate and cooperate with humans to resolve ambiguities and improve performance over time.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued a broad agency announcement (HR001122S0052) on Thursday for the Environment-driven Conceptual Learning (ECOLE) project.

From industry, the DARPA ECOLE project seeks proposals in five areas: human language technology; computer vision; artificial intelligence (AI); reasoning; and human-computer interaction.

ECOLE will create AI agents able to learn from linguistic and visual input to enable humans and computers to work together to analyze image, video, and multimedia documents quickly in missions where reliability and robustness are essential.

Related: Military researchers to apply artificial intelligence (AI) and machine learning to combat medical triage

ECOLE will develop algorithms that can identify, represent, and ground the attributes that form the symbolic and contextual model for a particular object or activity through interactive machine learning with a human analyst. Knowledge of attributes and affordances, learned dynamically from data encountered within an analytic workflow, will enable joint reasoning with a human partner.

This acquired knowledge also will enable the machine to recognize never-before-seen objects and activities without misclassifying them as a member of a previously learned class, detect changes in known objects, and report these changes when they are significant.

System interaction with human intelligence analysts is expected to be symbiotic, with the systems augmenting human cognitive capabilities while simultaneously seeking instruction and correction to achieve accuracy.

Industry proposals should specify how symbolic knowledge representations will be acquired from unlabeled data, including the specifics of the learning mechanism; how these representations will be associated and reasoned within a growing body of knowledge; how the representations will be applied to human-interpretable object and activity recognition; and how the framework will permit collaboration with several analysts to resolve ambiguity, extend the set of known representations, and provide greater recognitional accuracy and coverage.

Related: Artificial intelligence (AI) to enable manned and unmanned vehicles adapt to unforeseen events like damage

The four-year ECOLE project has three phases; this solicitation concerns only the first and second phases. The first phase will create prototype agents that can pull relevant information out of unlabeled multimedia data, supplemented with human interaction.

These prototypes will demonstrate not only the ability to learn new concepts, but also to recombine previously learned attributes to recognize never-before-seen objects and activities. Systems also will be able to reason over similarities and differences in objects and activities.

The second phase of the ECOLE project will scale up the framework to include several AI agents and human analysts to help deal with uncertain or contradictory information.

Computer interaction with human analysts will enable the system to learn to name and describe objects, actions, and properties to verify and augment their representations, and to acquire complex knowledge quickly and accurately from potentially sparse observations.

Related: Wanted: artificial intelligence (AI) and machine autonomy algorithms for military command and control

Humans and computers will work together primarily through the English language, including words with several different meanings, in a way that is readily understandable. The ECOLE project also will have two technical areas: distributed curriculum learning; and human-machine collaborative analysis.

Distributed curriculum learning involves multimedia data and will have human partners provide feedback on the learning process. Human-machine collaborative analysis will involve a human-machine interface (HMI) to improve ECOLE representations and analyze data such as multimedia and social media.

Interested companies should upload abstracts no later than 29 Sept. 2022, and full proposals by 14 Nov. 2022, to the DARPA BAA website at https://baa.darpa.mil.

Email questions or concerns to DARPA at ECOLE@darpa.mil. More information is online at https://sam.gov/opp/fd50cb65daf5493d886fa1ddc2c0dd77/view.

See the article here:
Wanted: artificial intelligence (AI) and machine learning to help humans and computers work together - Military & Aerospace Electronics

Machine Learning Isn't Magic – It Needs Strategy And A Human Touch – AdExchanger

By AdExchanger Guest Columnist

Data-Driven Thinking is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today's column is written by Jasmine Jia, associate director of data science at Blockthrough.

The term "machine learning" seems to have a magical effect as a sales buzzword. Couple that with the term "data science," and lots of companies think they have a winning formula for attracting new clients.

Is it smoke and mirrors? Often, the answer is yes.

What is quite real, though, is the need for best practices in data science and for companies to invest in and fully support talent that can apply those principles effectively.

Laying the foundation for machine learning

Machine learning success starts with hiring talent that can harness machine learning: a team of skilled data scientists, which is very expensive. Adding to the cost is time. It takes a lot of it to build a data science team and integrate them with other teams across operations.

A successful machine learning pipeline requires data cleaning, data exploration, feature extraction, model building, model validation and more. You also need to keep maintaining and evolving that pipeline. And not only is the cost high, but companies also rarely have the patience and time to manage this process and still meet their ROI objectives.
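
As a minimal sketch of how those stages chain together in code (scikit-learn here is an illustrative choice; the column prescribes no toolkit):

```python
# A minimal sketch of the pipeline stages named above, in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer             # data cleaning
from sklearn.preprocessing import StandardScaler     # feature preparation
from sklearn.linear_model import LogisticRegression  # model building
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score  # model validation

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])

# Cross-validation stands in for the validation stage; the real work of
# maintaining and evolving the pipeline happens around code like this.
print(cross_val_score(pipe, X, y, cv=5).mean())
```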

Defining best practices

With the right talent and pipeline in place, the next step is establishing best practices. This is vital. Machine learning depends on how you implement it, what problem you use it to solve, and how deeply you integrate it with your company.

To paint a picture of how things can go wrong, just think about the times that imbalanced data sets led to what the media called "racist robots" and "automated racism." Or, on a lighter note, how about those memes showing machine learning confusing blueberry muffins with Chihuahuas? Or mixing up images of bagels with pics of curled-up puppies?

Best practices can prevent some of these common pitfalls, but it's essential to define them for the entirety of the data analysis process: before decisioning, during decisioning and after decisioning.

Let's take this step by step.

Before: It is all too common for companies to update an offering by adding a feature. But often they do so before completing meaningful data collection and analysis. Nobody has taken the time and resources to answer, "Why are we adding this feature?"

Before answering that all-important question, other questions need to be addressed. Are you seeing users doing this behavior naturally, already? What will the potential lift be? Is it worth the expense and time to tap into your engineering resources? What is the expected impact? What would this new feature ultimately mean to the future success of this product?

You'll need a lot of data to answer those queries. But let's say you culled it all and decided it was worthwhile to move ahead.

During: You've launched that feature. There should be an ongoing stream of data that demonstrates whether or not the new feature is driving impact at the network level, at the publisher level, and at the user level.

Are you seeing the same impact across the board? Sometimes benefits to one can hurt another. Attention must be paid. Factor analysis is key. What are the factors at play that impact the analysis? Once identified, you need to determine whether or not they are statistically significant.

After: At this point, there are even more questions to address. What exactly is the impact? If you use A/B testing, can those short-term experiments provide dependable long-term forecasts? What lessons can you learn? Whether it's a failure or success, how can it keep evolving? What are the new opportunities? What are the new behavioral changes you're seeing?

Machine learning for the long haul

There is a lot of data and oversight required to make a machine learning program truly viable. It's no wonder that many don't have the wherewithal to properly execute it and reap the benefits.

Here is the kicker: the data team doesn't make the decisions. The machine learning algorithm doesn't make the decisions. People make decisions. You can hire a fantastic squad of data scientists, and they can build and refine a machine learning model based on gobs of data that is 100% accurate. But for it to make any sort of difference to your business, you need to develop a strong workflow around it.

The best way to do that? Make sure data science teams are deeply integrated with different teams throughout your organization.

Establish a well-grounded data science practice, and you will see that machine learning can make the magic happen.

Follow Blockthrough (@blockthrough) and AdExchanger (@adexchanger) on Twitter.

See more here:
Machine Learning Isn't Magic – It Needs Strategy And A Human Touch - AdExchanger

Putting artificial intelligence and machine learning workloads in the cloud – ComputerWeekly.com

Artificial intelligence (AI) and machine learning (ML) are some of the most hyped enterprise technologies and have caught the imagination of boards, with the promise of efficiencies and lower costs, and the public, with developments such as self-driving cars and autonomous quadcopter air taxis.

Of course, the reality is rather more prosaic, with firms looking to AI to automate areas such as online product recommendations or spotting defects on production lines. Organisations are using AI in vertical industries, such as financial services, retail and energy, where applications include fraud prevention and analysing business performance for loans, demand prediction for seasonal products and crunching through vast amounts of data to optimise energy grids.

All this falls short of the idea of AI as an intelligent machine along the lines of 2001: A Space Odyssey's HAL. But it is still a fast-growing market, driven by businesses trying to drive more value from their data, and automate business intelligence and analytics to improve decision-making.

Industry analyst firm Gartner, for example, predicts that the global market for AI software will reach US$62bn this year, with the fastest growth coming from knowledge management. According to the firm, 48% of the CIOs it surveyed have already deployed artificial intelligence and machine learning or plan to do so within the next 12 months.

Much of this growth is being driven by developments in cloud computing, as firms can take advantage of the low initial costs and scalability of cloud infrastructure. Gartner, for example, cites cloud computing as one of five factors driving AI and ML growth, as it allows firms to experiment and operationalise AI faster with lower complexity.

In addition, the large public cloud providers are developing their own AI modules, including image recognition, document processing and edge applications to support industrial and distribution processes.

Some of the fastest-growing applications for AI and ML are around e-commerce and advertising, as firms look to analyse spending patterns and make recommendations, and use automation to target advertising. This takes advantage of the growing volume of business data that already resides in the cloud, cutting out the costs and complexity associated with moving data.

The cloud also lets organisations make use of advanced analytics and compute facilities, which are often not cost-effective to build in-house. This includes the use of dedicated graphics processing units (GPUs) and extremely large storage volumes made possible by cloud storage.

"Such capabilities are beyond the reach of many organisations' on-prem offerings, such as GPU processing. This demonstrates the importance of cloud capability in organisations' digital strategies," says Lee Howells, head of AI at advisory firm PA Consulting.

Firms are also building up expertise in their use of AI through cloud-based services. One growth area is AIOps, where organisations use artificial intelligence to optimise their IT operations, especially in the cloud.

Another is MLOps, which Gartner says is the operationalisation of multiple AI models, creating composite AI environments. This allows firms to build up more comprehensive and functional models from smaller building blocks. These blocks can be hosted on on-premise systems, in-house, or in hybrid environments.

Just as cloud service providers offer the building blocks of IT (compute, storage and networking), so they are building up a range of artificial intelligence and machine learning models. They are also offering AI- and ML-based services which firms, or third-party technology companies, can build into their applications.

These AI offerings do not need to be end-to-end processes, and often they are not. Instead, they provide functionality that would be costly or complex for a firm to provide itself. But they are also functions that can be performed without compromising the firm's security or regulatory requirements, or that involve large-scale migration of data.

Examples of these AI modules include image processing and image recognition, document processing and analysis, and translation.

"We operate within an ecosystem. We buy bricks from people and then we build houses and other things out of those bricks. Then we deliver those houses to individual customers," says Mika Vainio-Mattila, CEO at Digital Workforce, a robotic process automation (RPA) company. The firm uses cloud technologies to scale up its delivery of automation services to its customers, including its robot as a service, which can run either on Microsoft Azure or a private cloud.

Vainio-Mattila says AI is already an important part of business automation. "The one that is probably the most prevalent is intelligent document processing, which is basically making sense of unstructured documents," he says.

"The objective is to make those documents meaningful to robots, or automated digital agents, that then do things with the data in those documents. That is the space where we have seen most use of AI tools and technologies, and where we have applied AI ourselves most."

He sees a growing push from the large public cloud companies to provide AI tools and models. Initially, that is to third-party software suppliers or service providers such as his company, but he expects the cloud solution providers (CSPs) to offer more AI technology directly to user businesses too.

"It's an interesting space because the big cloud providers (spearheaded by Google, obviously, but very closely followed by Microsoft and Amazon, and others, IBM as well) have implemented services around ML- and AI-based services for deciphering unstructured information. That includes recognising or classifying photographs, or translation."

These are general-purpose technologies designed so that others can reuse them. The business applications are frequently very use-case specific and need experts to tailor them to a company's business needs. And the focus is more on back-office operations than applications such as driverless cars.

Cloud providers also offer domain-specific modules, according to PA Consulting's Howells. "These have already evolved in financial services, manufacturing and healthcare," he says.

In fact, the range of AI services offered in the cloud is wide, and growing. "The big [cloud] players now have models that everyone can take and run," says Tim Bowes, associate director for data engineering at consultancy Dufrain. "Two to three years ago, it was all third-party technology, but they are now building proprietary tools."

Azure, for example, offers Azure AI, with vision, speech, language and decision-making AI models that users can access via API calls. Microsoft breaks its offerings down into Applied AI Services, Cognitive Services, machine learning and AI infrastructure.

Google offers AI infrastructure, Vertex AI, an ML platform, data science services, media translation and speech to text, to name a few. Its Cloud Inference API lets firms work with large datasets stored in Google's cloud. The firm, unsurprisingly, provides cloud GPUs.

Amazon Web Services (AWS) also provides a wide range of AI-based services, including image recognition and video analysis, translation, conversational AI for chatbots, natural language processing, and a suite of services aimed at developers. AWS also promotes its health and industrial modules.
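
As an illustration of how such a building block is consumed, here is a hedged sketch of calling AWS Rekognition's image-labelling service through the boto3 SDK; the bucket and object names are hypothetical, and credentials are assumed to be configured in the environment:

```python
# A hedged sketch of one cloud AI building block: image labelling with
# AWS Rekognition via boto3 (bucket and image names are hypothetical).
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

response = client.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "factory-line.jpg"}},
    MaxLabels=5,
)

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```

The point is that a single API call, not an in-house model, delivers the AI functionality; the other providers' services follow the same pattern.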

The large enterprise software and software-as-a-service (SaaS) providers also have their own AI offerings. These include Salesforce (ML and predictive analytics), Oracle (ML tools including pre-trained models, computer vision and NLP) and IBM (Watson Studio and Watson Services). IBM has even developed a specific set of AI-based tools to help organisations understand their environmental risks.

Specialist firms include H2O.ai, UIPath, Blue Prism and Snaplogic, although the latter three could be better described as intelligent automation or RPA companies than pure-play AI providers.

It is, however, a fine line. According to Jeremiah Stone, chief technology officer (CTO) at Snaplogic, enterprises are often turning to AI on an experimental basis, even where more mature technology can be more appropriate.

"Probably 60% or 70% of the efforts I've seen are, at least initially, starting out exploring AI and ML as a way to solve problems that may be better solved with more well-understood approaches," he says. "But that is forgivable because, as people, we continually have extreme optimism for what software and technology can do for us. If we didn't, we wouldn't move forward."

Experimentation with AI will, he says, bring longer-term benefits.

There are other limitations to AI in the cloud. First and foremost, cloud-based services are best suited to generic data or generic processes. This allows organisations to overcome the security, privacy and regulatory hurdles involved in sharing data with third parties.

AI tools counter this by not moving data: it stays in the local business application or database. And security in the cloud is improving, to the point where more businesses are willing to make use of it.

"Some organisations prefer to keep their most sensitive data on-prem. However, with cloud providers offering industry-leading security capabilities, the reason for doing this is rapidly reducing," says PA Consulting's Howells.

Nonetheless, some firms prefer to build their own AI models and do their own training, despite the cost. If AI is the product (driverless cars are a prime example), the business will want to own the intellectual property in the models.

But even then, organisations stand to benefit from areas where they can use generic data and models. The weather is one example, image recognition is potentially another.

Even firms with very specific demands for their AI systems might benefit from the expansive data resources in the cloud for model training. Potentially, they might also want to use cloud providers' synthetic data, which allows model training without the security and privacy concerns of data sharing.

And few in the industry would bet against those services coming, first and foremost, from the cloud service providers.

See the rest here:
Putting artificial intelligence and machine learning workloads in the cloud - ComputerWeekly.com

Using AI, machine learning and advanced analytics to protect and optimize business – Security Magazine


Read the original here:
Using AI, machine learning and advanced analytics to protect and optimize business - Security Magazine

7 Machine Learning Portfolio Projects to Boost the Resume – KDnuggets

There is a high demand for machine learning engineer jobs, but the hiring process is tough to crack. Companies want to hire professionals with experience in dealing with various machine learning problems.

For a newbie or fresh graduate, there are only a few ways to showcase skills and experience. They can either get an internship, work on open source projects, volunteer in NGO projects, or work on portfolio projects.

In this post, we will be focusing on machine learning portfolio projects that will boost your resume and help you during the recruitment process. Working solo on projects also makes you better at problem-solving.

The mRNA Degradation project is a complex regression problem. The challenge in this project is to predict degradation rates that can help scientists design more stable vaccines in the future.

The project is two years old, but you will learn a lot about solving regression problems using complex 3D data manipulation and deep learning GRU models. Furthermore, you will be predicting five targets: reactivity, deg_Mg_pH10, deg_Mg_50C, deg_pH10, deg_50C.
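
For a sense of the model side, here is a minimal Keras sketch of a GRU that regresses the five targets at each sequence position; the input shapes are assumptions for illustration, not the competition's winning setup:

```python
# A minimal sketch of a sequence-regression GRU in Keras; shapes are
# illustrative assumptions, and real solutions add embeddings and features.
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_TARGETS = 107, 14, 5

inputs = tf.keras.Input(shape=(SEQ_LEN, N_FEATURES))
x = tf.keras.layers.GRU(128, return_sequences=True)(inputs)
x = tf.keras.layers.GRU(64, return_sequences=True)(x)
outputs = tf.keras.layers.Dense(N_TARGETS)(x)  # one value per target, per base

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```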

Automatic Image Captioning is the must-have project in your resume. You will learn about computer vision, CNN pre-trained models, and LSTM for natural language processing.

In the end, you will build the application on Streamlit or Gradio to showcase your results. The image caption generator will generate a simple text describing the image.
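
As a sketch of that app-building step with Gradio, something like the following works; `generate_caption` is a hypothetical stand-in for your trained CNN encoder and LSTM decoder:

```python
# A sketch of the demo app with Gradio; `generate_caption` is a
# hypothetical placeholder for a trained CNN+LSTM captioning model.
import gradio as gr

def generate_caption(image):
    # A real project would encode the image with the CNN and decode a
    # sentence with the LSTM; a placeholder keeps this sketch runnable.
    return "a placeholder caption for the uploaded image"

demo = gr.Interface(
    fn=generate_caption,
    inputs=gr.Image(type="pil"),
    outputs="text",
    title="Image Caption Generator",
)

if __name__ == "__main__":
    demo.launch()
```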

You can find multiple similar projects online and even create your deep learning architecture to predict captions in different languages.

The primary purpose of the portfolio project is to work on a unique problem. It can be the same model architecture but a different dataset. Working with various data types will improve your chance of getting hired.

Forecasting using Deep Learning is a popular project idea, and you will learn many things about time series data analysis, data handling, pre-processing, and neural networks for time-series problems.

Time series forecasting is not simple. You need to understand seasonality, holiday seasons, trends, and daily fluctuations. Most of the time, you don't even need neural networks; simple linear regression can provide you with the best-performing model. But in the stock market, where the risk is high, even a one percent difference means millions of dollars in profit for the company.
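
To show how far that simple baseline can go, here is a hedged sketch of linear regression on lagged values of a synthetic series; real projects would add trend, seasonality and holiday features:

```python
# A hedged baseline sketch: linear regression on lagged values of a
# synthetic series (a stand-in for real price or demand data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))  # random-walk stand-in for a series

LAGS = 5  # predict the next value from the five previous ones
X = np.column_stack([series[i:len(series) - LAGS + i] for i in range(LAGS)])
y = series[LAGS:]

# Chronological split: never validate a forecast against earlier data.
split = int(len(y) * 0.8)
model = LinearRegression().fit(X[:split], y[:split])
print("holdout R^2:", model.score(X[split:], y[split:]))
```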

Having a Reinforcement Learning project on your resume gives you an edge during the hiring process. The recruiter will assume that you are good at problem-solving and you are eager to expand your boundaries to learn about complex machine learning tasks.

In the Self-Driving car project, you will train the Proximal Policy Optimization (PPO) model in the OpenAI Gym environment (CarRacing-v0).

Before you start the project, you need to learn the fundamentals of Reinforcement Learning as it is quite different from other machine learning tasks. During the project, you will experiment with various types of models and methodologies to improve agent performance.
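
One common way to set this up is the Stable-Baselines3 implementation of PPO; a minimal sketch follows, where the library choice and the timestep budget are assumptions rather than requirements of the project:

```python
# A minimal sketch of PPO on CarRacing-v0 using Stable-Baselines3
# (requires gym with Box2D and stable-baselines3; budget is illustrative).
import gym
from stable_baselines3 import PPO

env = gym.make("CarRacing-v0")  # pixel observations, continuous control

# CnnPolicy matches the environment's image observations.
model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_carracing")
```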

Conversational AI is a fun project. You will learn about Hugging Face Transformers, Facebook Blender Bot, handling conversational data, and creating chatbot interfaces (API or Web App).

Due to the huge library of datasets and pre-trained models available on Hugging Face, you can basically fine-tune the model on a new dataset. It can be Rick and Morty conversations, your favorite film character, or any celebrity that you love.
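
As a sketch of the starting point before any fine-tuning, the Transformers conversational pipeline can load a public BlenderBot checkpoint; the model name here is one example, not a requirement:

```python
# A sketch of chatting with a public BlenderBot checkpoint via the
# Transformers conversational pipeline (checkpoint is one example choice).
from transformers import pipeline, Conversation

chatbot = pipeline("conversational", model="facebook/blenderbot-400M-distill")

conversation = Conversation("What machine learning project should I build?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```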

Apart from that, you can improve the chatbot for your specific use case. In the case of a medical application, the chatbot needs technical knowledge and must understand the patient's sentiment.

Automatic Speech Recognition is my favorite project ever. I have learned everything about transformers, handling audio data, and improving the model performance. It took me 2 months to understand the fundamentals and another two to create the architecture that will work on top of the Wav2Vec2 model.

You can improve the model performance by boosting Wav2Vec2 with n-grams and text pre-processing. I have even pre-processed the audio data to improve the sound quality.

The fun part is that you can fine-tune the Wav2Vec2 model in any language.
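
A hedged sketch of plain Wav2Vec2 inference with Transformers, before any n-gram boosting, might look like this; the checkpoint shown is a public English model, and you would swap in your own fine-tuned one:

```python
# A hedged sketch of Wav2Vec2 CTC inference with Transformers; the
# checkpoint and audio file name are illustrative placeholders.
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, sample_rate = sf.read("sample.wav")  # expects 16 kHz mono audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```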

End-to-end machine learning project experience is a must. Without it, your chance of getting hired is pretty slim.

You will learn:

The main purpose of this project is not to build the best model or learn a new deep learning architecture. The main goal is to become familiar with the industry standards and techniques for building, deploying, and monitoring machine learning applications. You will learn a lot about development operations and how you can create a fully automated system.

After working on a few projects, I will highly recommend you create a profile on GitHub or any code-sharing site where you can share your project findings and documentation.

The principal purpose of working on a project is to improve your odds of getting hired. Showcasing the projects and presenting yourself in front of a potential recruiter is a skill.

So, after working on a project, start promoting it on social media, create a fun web app using Gradio or Streamlit, and write an engaging blog. Don't think about what people are going to say. Just keep working on projects and keep sharing. And I am sure in no time multiple recruiters will approach you for a job.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

See the original post here:
7 Machine Learning Portfolio Projects to Boost the Resume - KDnuggets

Astera Labs to Host Mayor of Burnaby at Grand Opening Of New Vancouver Design Center and Lab Dedicated to Purpose-Built Connectivity Solutions for…

(BUSINESS WIRE) -- Astera Labs Inc.:

WHEN:

Wednesday, September 21, 2022, from 9:30 a.m.-11:30 a.m. PDT

WHERE:

Astera Labs Vancouver, 4370 Dominion Street, Burnaby, BC V5G 4L7, Canada

WHO:

WHAT:

Astera Labs welcomes the Mayor of Burnaby and the Burnaby Board of Trade President and CEO to celebrate the grand opening of its new state-of-the-art design center and lab in the Greater Vancouver Area.

Astera Labs Vancouver will support the company's development of cutting-edge interconnect technologies for Artificial Intelligence and Machine Learning architectures in the Cloud. The rapidly growing semiconductor company chose the Vancouver area to tap into the region's rich technology talent base to drive product development, customer support and marketing. The Vancouver location increases the company's operations in Canada, which already includes the new Research and Development Design Center in Toronto, and adds to its global footprint with headquarters in Santa Clara, California and offices around the globe.

Astera Labs is actively hiring across multiple engineering and marketing disciplines to support end-to-end product and application development and overall go-to-market operations. Open positions can be found at http://www.AsteraLabs.com/Careers/.

The ribbon cutting and photo opportunity with Burnaby Officials and Astera Labs Executives will be held outdoors. Below is an overview of the event agenda:

Event Schedule

Formal Remarks

9:30 a.m. to 10:00 a.m. PDT

Ribbon Cutting / Photo Op / Media Q&A

10:00 a.m. to 10:30 a.m. PDT

Indoor Reception

10:30 a.m. to 11:30 a.m. PDT

For onsite assistance, contact Dave Nelson at (604) 418-9930.

About Astera Labs

Astera Labs Inc. is a leader in purpose-built data and memory connectivity solutions to remove performance bottlenecks throughout the data center. With locations worldwide, the company's silicon, software, and system-level connectivity solutions help realize the vision of Artificial Intelligence and Machine Learning in the Cloud through CXL, PCIe, and Ethernet technologies. For more information about Astera Labs including open positions, visit http://www.AsteraLabs.com.

See the rest here:
Astera Labs to Host Mayor of Burnaby at Grand Opening Of New Vancouver Design Center and Lab Dedicated to Purpose-Built Connectivity Solutions for...