Speed-up hyperparameter tuning in deep learning with Keras hyperband tuner – Analytics India Magazine

The performance of machine learning algorithms depends heavily on selecting a good set of hyperparameters, and the process of finding that optimal set for a machine learning or deep learning application is known as hyperparameter tuning. The Keras Tuner is a package that assists you in this search, and Hyperband is a framework that speeds the tuning process up. This article is focused on understanding the Hyperband framework.

Hyperparameters are not model parameters and cannot be learned directly from data. When we optimize a loss function with something like gradient descent, we learn model parameters during training.
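To make the distinction concrete, here is a minimal sketch of linear regression trained by gradient descent; the synthetic data, step count, and learning rate are illustrative choices, not from the original article. The weight vector w is a model parameter learned from data, while learning_rate must be fixed before training ever starts.

```python
import numpy as np

# Synthetic regression data with known true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(0, 0.1, size=100)

learning_rate = 0.1   # hyperparameter: chosen before training
w = np.zeros(3)       # model parameters: learned by gradient descent

for _ in range(200):
    grad = 2 / len(y) * X.T @ (X @ w - y)  # gradient of mean squared error
    w -= learning_rate * grad

print("learned parameters:", w.round(2))   # approaches [1.5, -2.0, 0.5]
```

Set the learning rate badly (say, 10.0) and the loop diverges; that sensitivity is exactly why hyperparameter tuning matters.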

Let's talk about Hyperband and the need for its creation. The practice of tuning the hyperparameters of machine learning algorithms is known as hyperparameter optimization (HPO). Powerful machine learning algorithms feature numerous, diverse, and complicated hyperparameters that produce a massive search space, and deep learning, which underpins many modern applications, has a considerably broader search space than typical ML algorithms. Tuning over a large search space is difficult, so data-driven strategies must be used to tackle HPO; manual approaches do not scale.

By framing hyperparameter optimization as a pure-exploration, adaptive resource-allocation problem (how to distribute resources among randomly sampled hyperparameter configurations), a novel configuration-evaluation technique was devised: Hyperband. It allocates resources using a principled early-stopping technique, allowing it to test orders of magnitude more configurations than black-box procedures such as Bayesian optimization. Unlike previous configuration-evaluation methodologies, Hyperband is a general-purpose tool that makes few assumptions.

The capacity of Hyperband to adapt to unknown convergence rates and to the behaviour of validation losses as a function of the hyperparameters was proven by its developers in their theoretical analysis. Furthermore, for a range of deep-learning and kernel-based learning problems, Hyperband is 5 to 30 times faster than typical Bayesian optimization techniques. In the non-stochastic setting, Hyperband can be viewed as a solution to the pure-exploration, infinite-armed bandit problem.

Hyperparameters are inputs to a machine learning algorithm that govern how well the algorithm generalizes to unseen data. Because the number of tuning parameters associated with these models keeps growing, they are difficult to set by standard optimization techniques.

In an effort to develop more efficient search methods, Bayesian optimization approaches that focus on optimizing hyperparameter configuration selection have lately dominated the field of hyperparameter optimization. By picking configurations adaptively, these approaches seek to discover good configurations faster than typical baselines such as random search. They must, however, tackle the fundamentally difficult problem of fitting and optimizing a high-dimensional, non-convex function with unknown smoothness and possibly noisy evaluations.

The goal of an orthogonal approach to hyperparameter optimization is to accelerate configuration evaluation. These methods are computationally adaptive, providing greater resources to promising hyperparameter combinations while swiftly removing bad ones. The size of the training set, the number of features, or the number of iterations for iterative algorithms are all examples of resources.

These techniques seek to evaluate orders of magnitude more hyperparameter configurations than approaches that uniformly train all configurations to completion, and hence discover good hyperparameters quickly. Hyperband is designed to accelerate random search by providing a simple and theoretically sound launching point.

Hyperband uses the SuccessiveHalving technique, originally introduced for hyperparameter optimization, as a subroutine and enhances it. The original SuccessiveHalving method takes its name from its core idea: uniformly distribute a budget to a collection of hyperparameter configurations, evaluate the performance of all configurations, discard the worst half, and repeat until only one configuration remains. The algorithm thus gives exponentially more resources to the more promising combinations.
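Here is a minimal sketch of SuccessiveHalving in Python; the learning-rate search space and the noisy toy evaluation function stand in for real training runs and are purely illustrative.

```python
import math
import random

def successive_halving(configs, min_resource, eta, evaluate):
    """Keep the best 1/eta of configurations each round, giving the
    survivors eta times more resource; evaluate returns a loss."""
    resource, survivors = min_resource, list(configs)
    while len(survivors) > 1:
        survivors.sort(key=lambda cfg: evaluate(cfg, resource))
        survivors = survivors[: max(1, len(survivors) // eta)]
        resource *= eta  # promising configurations earn more budget
    return survivors[0]

# Toy usage: configurations are learning rates; the loss is minimized
# near 1e-3, and evaluation noise shrinks as more resource is spent.
random.seed(0)
configs = [10 ** random.uniform(-5, -1) for _ in range(16)]
best = successive_halving(
    configs, min_resource=1, eta=2,
    evaluate=lambda lr, r: abs(math.log10(lr) + 3) + random.gauss(0, 1 / r))
print(f"best learning rate: {best:.5f}")
```

With eta set to 2 this is exactly the "discard the worst half" scheme described above: 16 configurations become 8, then 4, then 2, then 1, while the per-configuration budget doubles each round.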

The Hyperband algorithm is made up of two parts: an inner loop, which runs SuccessiveHalving for a fixed number of configurations and resource allocation, and an outer loop, which iterates over different such allocations.

Each loop that executes SuccessiveHalving within Hyperband is referred to as a bracket. Each bracket is intended to consume a portion of the entire resource budget B and corresponds to a distinct tradeoff between the number of configurations n and the average budget per configuration B/n. As a result, a single Hyperband execution has a finite total budget. Hyperband requires two inputs: R, the maximum amount of resource that can be allocated to a single configuration, and eta, which controls the proportion of configurations discarded in each round of SuccessiveHalving.

The two inputs determine how many distinct brackets are examined. Hyperband starts with the most aggressive bracket, which sets the number of configurations to maximize exploration while requiring that at least one configuration be allotted R resources. Each successive bracket decreases the number of configurations by a factor of roughly eta, until the final bracket, which allocates the maximum resource R to every one of its configurations (in effect, plain random search). As a result, Hyperband performs a geometric search over the average budget per configuration, eliminating the need to commit in advance to a single number of configurations for a fixed budget.
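That bracket schedule can be written down directly from the formulas in the original Hyperband paper, where R is the maximum resource per configuration and eta is the downsampling factor; R=81 and eta=3 below are the paper's running example.

```python
import math

def hyperband_brackets(R, eta=3):
    """Yield (s, n, r): bracket s starts SuccessiveHalving with
    n configurations at r units of resource each."""
    s_max = int(math.log(R) / math.log(eta) + 1e-9)  # floor of log_eta(R)
    B = (s_max + 1) * R                              # budget per bracket
    for s in range(s_max, -1, -1):
        n = math.ceil((B / R) * eta ** s / (s + 1))
        r = R * eta ** (-s)
        yield s, n, r

for s, n, r in hyperband_brackets(R=81, eta=3):
    print(f"bracket s={s}: n={n:3d} configs, r={r:5.1f} resource each")
```

The most aggressive bracket (s=4) starts 81 configurations with one unit of resource each, while the last bracket (s=0) trains just five configurations with the full R=81, which is plain random search.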

Since the arms are independent and sampled at random, Hyperband is straightforward to parallelize. The simplest parallelization approach is to distribute individual SuccessiveHalving brackets to separate machines. With this article, we have understood a bandit-based hyperparameter tuning algorithm and how it differs from Bayesian optimization.
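To tie this back to the Keras Tuner named in the title, here is a minimal sketch of its Hyperband tuner on MNIST. The max_epochs argument plays the role of R and factor plays the role of eta; the small dense model and its search space are illustrative choices, not something prescribed by the article.

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    """hp declares the search space; Hyperband picks the values."""
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(
            hp.Int("units", min_value=32, max_value=512, step=32),
            activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    return model

tuner = kt.Hyperband(
    build_model,
    objective="val_accuracy",
    max_epochs=27,   # R: the most epochs any one model may receive
    factor=3,        # eta: the downsampling rate between rounds
    directory="hyperband_demo",
    project_name="mnist")

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
tuner.search(x_train / 255.0, y_train, validation_split=0.2)
print(tuner.get_best_hyperparameters(1)[0].values)
```

The tuner handles the bracket bookkeeping itself: poorly performing trials are stopped after a few epochs, so far more configurations are explored than training every candidate to completion would allow.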

Original post:
Speed-up hyperparameter tuning in deep learning with Keras hyperband tuner - Analytics India Magazine

How to get started with machine learning and AI – Ars Technica

Back in the 1950s, in the earliest days of what we now call artificial intelligence, there was a debate over what to name the field. Herbert Simon, co-developer of both the logic theory machine and the General Problem Solver, argued that the field should have the much more anodyne name of "complex information processing." This certainly doesn't inspire the awe that "artificial intelligence" does, nor does it convey the idea that machines can think like humans.

However, "complex information processing" is a much better description of what artificial intelligence actually is: parsing complicated data sets and attempting to make inferences from the pile. Some modern examples of AI include speech recognition (in the form of virtual assistants like Siri or Alexa) and systems that determine what's in a photograph or recommend what to buy or watch next. None of these examples are comparable to human intelligence, but theyshow we can do remarkable things with enough information processing.

Whether we refer to this field as "complex information processing" or "artificial intelligence" (or the more ominously Skynet-sounding "machine learning") is irrelevant. Immense amounts of work and human ingenuity have gone into building some absolutely incredible applications. As an example, look at GPT-3, a deep-learning model for natural language that can generate text that is indistinguishable from text written by a person (yet can also go hilariously wrong). It's backed by a neural network model that uses more than 170 billion parameters to model human language.

Built on top of GPT-3 is the tool named Dall-E, which will produce an image of any fantastical thing a user requests. The updated 2022 version of the tool, Dall-E 2, lets you go even further, as it can understand styles and concepts that are quite abstract. For instance, asking Dall-E to visualize an astronaut riding a horse in the style of Andy Warhol will produce a number of images in exactly that style.

Dall-E 2 does not perform a Google search to find a similar image; it creates a picture based on its internal model. This is a new image built from nothing but math.

Not all applications of AI are as groundbreaking as these. AI and machine learning are finding uses in nearly every industry. Machine learning is quickly becoming a must-have in many industries, powering everything from recommendation engines in the retail sector to pipeline safety in the oil and gas industry and diagnosis and patient privacy in the health care industry. Not every company has the resources to create tools like Dall-E from scratch, so there's a lot of demand for affordable, attainable toolsets. The challenge of filling that demand has parallels to the early days of business computing, when computers and computer programs were quickly becoming the technology businesses needed. While not everyone needs to develop the next programming language or operating system, many companies want to leverage the power of these new fields of study, and they need similar tools to help them.

More here:
How to get started with machine learning and AI - Ars Technica

How AI and Machine Learning Are Ready To Change the Game for Data Center Operations – Data Center Knowledge

Today's data centers face a challenge that, at first, looks almost impossible to resolve. While operations have never been busier, teams are pressured to reduce their facilities' energy consumption as part of corporate carbon reduction goals. And, as if that wasn't difficult enough, dramatically rising electricity prices are placing real stress on data center budgets.

With data centers focused on supporting the essential technology services that people increasingly demand in their personal and professional lives, it's not surprising that data center operations have never been busier. Driven by trends that show no sign of slowing down, we're seeing massively increased data usage associated with video, storage, and compute demands, smart IoT integrations, and 5G connectivity rollouts. However, despite these escalating workloads, the unfortunate reality is that many of today's critical facilities simply aren't running efficiently enough.

Given that the average data center operates for over 20 years, this shouldn't really be a surprise. Efficiency is invariably tied to a facility's original design, which was based on expected IT loads that have long since been overtaken. At the same time, change is a constant factor, with platforms, equipment design, topologies, power density requirements, and cooling demands all evolving with the continued drive for new applications. The result is a global data center infrastructure that regularly finds it hard to match current and planned IT loads to its critical infrastructure. This will only be exacerbated as data center demands increase, with analyst projections suggesting that workload volumes are set to keep growing at around 20% a year between now and 2025.

Traditional data center approaches are struggling to meet these escalating requirements. Prioritizing availability is largely achieved at efficiency's expense, with too much reliance still placed on operator experience and trust that assumptions are correct. Unfortunately, the evidence suggests that this model is no longer realistic. EkkoSense research reveals that an average of 15% of IT racks in data centers operate outside ASHRAE's temperature and humidity guidelines, and that customers strand up to 60% of their cooling capacity due to inefficiencies. And that's a problem, with the Uptime Institute estimating the global cost of inefficient cooling and airflow management at around $18bn, equivalent to some 150bn wasted kilowatt hours.

With 35% of the energy used in a data center going to support the cooling infrastructure, it's clear that traditional performance optimization approaches are missing a huge opportunity to unlock efficiency improvements. EkkoSense data indicates that a third of unplanned data center outages are triggered by thermal issues. Finding a different way to manage this problem can give operations teams a way to secure both availability and efficiency improvements.

Limitations of traditional monitoring

Unfortunately, only around 5% of M&E teams currently monitor and report their data center equipment temperatures on a rack-by-rack basis. Additionally, DCIM and traditional monitoring solutions can provide trend data and be set up to issue alerts when breaches occur, but that is where they stop. They lack the analytics to provide deeper insight into the cause of issues, how to resolve them, and how to avoid them in the future.

Operations teams recognize that this kind of traditional monitoring has its limitations, but they also know that they simply don't have the resources and time to convert the data they have from background noise into meaningful actions. The good news is that technology solutions are now available to help data centers tackle this problem.

It's time for data centers to go granular with machine learning and AI

The application of machine learning and AI creates a new paradigm for how to approach data center operations. Instead of being swamped by too much performance data, operations teams can now take advantage of machine learning to gather data at a much more granular level, meaning they can start to assess how their data center is performing in real time. The key is to make this accessible, and smart 3D visualizations are a great way of making it easy for data center teams to interpret performance data at a deeper level: for example, by showing changes and highlighting anomalies.

The next stage is to apply machine learning and AI analytics to provide actionable insights. By augmenting measured datasets with machine learning algorithms, data center teams can immediately benefit from easy-to-understand insights to help support their real-time optimization decisions. The combination of real-time granular data collection every five minutes and AI/machine learning analytics allows operations not just to see what is happening across their critical facilities, but also to find out why, and what exactly they should do about it.
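As a hypothetical illustration of what such five-minute granular analytics can look like, the sketch below flags rack-inlet temperature anomalies. The rack names, thresholds, and injected fault are invented for the example; EkkoSense's actual models are proprietary.

```python
import numpy as np
import pandas as pd

# Hypothetical five-minute rack-inlet temperature feed (one day, 3 racks).
rng = np.random.default_rng(7)
idx = pd.date_range("2022-07-01", periods=288, freq="5min")
temps = pd.DataFrame(20 + rng.normal(0, 0.4, size=(288, 3)),
                     index=idx, columns=["rack_A", "rack_B", "rack_C"])
temps.iloc[220:, 1] += 4.0  # inject a thermal excursion on rack_B

# Flag readings outside ASHRAE's recommended 18-27 C inlet window...
ashrae_breach = (temps < 18) | (temps > 27)

# ...and deviations from each rack's own rolling 3-hour baseline,
# which catches drift long before a hard threshold is ever crossed.
rolling = temps.rolling("3h")
zscores = (temps - rolling.mean()) / rolling.std()
anomalous = zscores.abs() > 3

print(temps[(ashrae_breach | anomalous).any(axis=1)].head())
```

Note the design point: the excursion on rack_B never leaves the ASHRAE window, so a simple threshold alert stays silent, while the per-rack baseline comparison surfaces it immediately.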

AI and machine learning powered analytics can also uncover the insights required to recommend actionable changes across key areas such as optimum set points, floor grille layouts, cooling unit operation and fan speed adjustments. Thermal analysis will also indicate optimum rack locations. And because AI enables real-time visualizations, data center teams can quickly gain immediate performance feedback on any actioned changes.

Helping data center operations make an immediate difference

Given the pressure to reduce carbon emissions and minimize the impact of electricity price increases, data center teams need new levels of optimization support if they are to deliver against their reliability and efficiency goals.

Taking advantage of the latest machine learning and AI-powered data center optimization approaches can certainly make a difference by cutting cooling energy and usage, with results achievable within weeks. By bringing granular data to the forefront of their optimization plans, data center teams have already been able not only to remove thermal and power risk, but also to cut cooling energy consumption, costs, and carbon emissions by an average of 30%. It's hard to ignore the impact these kinds of savings can have, particularly during a period of rapid electricity price increases. The days of trading off risk and availability for optimization are over, with the power of AI and machine learning at the forefront of data center operation.

Want to know more? Register for Wednesday's AFCOM webinar on the subject here.

About the author

Tracy Collins is Vice President of EkkoSense Americas, the company that enables true M&E capacity planning for power, cooling, and space. He was previously CEO at Simple Helix, a leading Alabama-based Tier III data center operator.

Tracy has over 25 years of in-depth data center industry experience, having previously served as Vice President of IT Solutions for Vertiv and, before that, with Emerson Network Power. In his role, Tracy is committed to challenging traditional approaches to data center management, particularly in terms of solving the optimization challenge of balancing increased data center workloads while also delivering against corporate energy-saving targets.

Read this article:
How AI and Machine Learning Are Ready To Change the Game for Data Center Operations - Data Center Knowledge

Deep Dive Into Advanced AI and Machine Learning at The Behavox Artificial Intelligence in Compliance and Security Conference – Financial Post

MONTREAL -- On July 19th, Behavox will host a conference to share the next generation of artificial intelligence in Compliance and Security with clients, regulators, and industry leaders.

The Behavox AI in Compliance and Security Conference will be held at the company HQ in Montreal. With this exclusive in-person conference, Behavox is relaunching its pre-COVID tradition of inviting customers, regulators, AI industry leaders, and partners to its Montreal HQ to deep dive into workshops and keynote speeches on compliance, security, and artificial intelligence.

"We're extremely excited to relaunch our tradition of inviting clients to our offices in order to learn directly from the engineers and data scientists behind our groundbreaking innovations," said Chief Customer Intelligence Officer Fahreen Kurji. "Attendees at the conference will get to enjoy keynote presentations as well as Innovation Paddocks, where you can test drive our latest innovations, and also spend time networking with other industry leaders and regulators."

Keynote presentations will cover:

The conference will also feature Innovation Paddocks where guests will be able to learn more from the engineers and data scientists behind Behavox innovations. At this conference, Behavox will demonstrate its revolutionary new product Behavox Quantum. There will be test drives and numerous workshops covering everything from infrastructure for cloud orchestration to the AI engine at the core of Behavox Quantum.

What's in it for participants?

Behavox Quantum has been rigorously tested and benchmarked against existing solutions in the market, and it outperformed the competition by at least 3,000x using new AI risk policies, providing a holistic security program that catches malicious, immoral, and illegal actors, eliminating fraud and protecting your digital headquarters.

Attendees at the July 19th conference will include C-suite executives from top global banks, financial institutions, and corporations, with many prospects and clients sending entire delegations. Justin Trudeau, the Canadian Prime Minister, will give the commencement speech at the conference in recognition and celebration of the world-leading AI innovations coming out of Canada.

This is a unique opportunity to test drive the product and meet the team behind the innovations as well as network with top industry professionals. Register here for the Behavox AI in Compliance and Security Conference.

About Behavox Ltd.

Behavox provides a suite of security products that help compliance, HR, and security teams protect their company and colleagues from business risks.

Through AI-powered analysis of all corporate communications, including email, instant messaging, voice, and video conferencing platforms, Behavox helps organizations identify illegal, immoral, and malicious behavior in the workplace.

Founded in 2014, Behavox is headquartered in Montreal and has offices in New York City, London, Seattle, Singapore, and Tokyo.

More information about the company is available at https://www.behavox.com/.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220628006051/en/

Contacts

Press: media@behavox.com

Link:
Deep Dive Into Advanced AI and Machine Learning at The Behavox Artificial Intelligence in Compliance and Security Conference - Financial Post

5 Top Deep Learning Trends in 2022 – Datamation

Deep learning (DL) can be defined as a form of machine learning based on artificial neural networks, which harness multiple processing layers to extract progressively higher-level insights from data. In essence, it is simply a more sophisticated application of artificial intelligence (AI) platforms and machine learning (ML).

Here are some of the top trends in deep learning:

Model Scale Up

A lot of the excitement in deep learning right now is centered around scaling up large, relatively general models (now being called foundation models). They are exhibiting surprising capabilities such as generating novel text, images from text, and video from text. Anything that scales up AI models adds yet more capabilities to deep learning. This is showing up in algorithms that go beyond simplistic responses to multi-faceted answers and actions that dig deeper into data, preferences, and potential actions.

Scale Up Limitations

However, not everyone is convinced that the scaling up of neural networks is going to continue to bear fruit. Roadblocks may lie ahead.

"There is some debate about how far we can get in terms of aspects of intelligence with scaling alone," said Peter Stone, PhD, Executive Director, Sony AI America.

"Current models are limited in several ways, and some of the community is rushing to point those out. It will be interesting to see what capabilities can be achieved with neural networks alone, and what novel methods will be uncovered for combining neural networks with other AI paradigms."

AI and Model Training

AI isn't something you plug in and, presto, instant insights. It takes time for a deep learning platform to analyze data sets, spot patterns, and begin to derive conclusions that have broad applicability in the real world. The good news is that AI platforms are rapidly evolving to keep up with model training demands.

Instead of taking weeks to learn enough to begin to function, AI platforms are undergoing fundamental innovation and are rapidly reaching the same maturity level as data analytics. As datasets become larger, deep learning models become more resource-intensive, requiring a lot of processing power to predict, validate, and recalibrate millions of times. Graphics processing units (GPUs) are advancing to handle this computing load.

"Organizations can enhance their AI platforms by combining open-source projects and commercial technologies," said Bin Fan, VP of Open Source and Founding Engineer at Alluxio.

"It is essential to consider skills, speed of deployment, the variety of algorithms supported, and the flexibility of the system while making decisions."

Containerized Workloads

"Deep learning workloads are increasingly containerized, further supporting autonomous operations," said Fan. "Container technologies enable organizations to have isolation, portability, unlimited scalability, and dynamic behavior in MLOps. Thus, AI infrastructure management would become more automated, easier, and more business-friendly than before."

"Containerization being the key, Kubernetes will aid cloud-native MLOps in integrating with more mature technologies," said Fan.

To keep up with this trend, organizations can find their AI workloads running on more flexible cloud environments in conjunction with Kubernetes.

Prescriptive Modeling over Predictive Modeling

Modeling has gone through many phases over the years. Initial attempts tried to predict trends from historical data. This had some value but didn't take into account factors such as context, sudden traffic spikes, and shifts in market forces. In particular, real-time data played no real part in early efforts at predictive modeling.

As unstructured data became more important, organizations wanted to mine it to glean insights. Coupled with the rise in processing power, real-time analysis suddenly rose to prominence. And the immense amounts of data generated by social media have only added to the need to address real-time information.

How does this relate to AI, deep learning, and automation?

"Many of the current and previous industry implementations of AI have relied on the AI to inform a human of some anticipated event, who then has the expert knowledge to know what action to take," said Frans Cronje, CEO and Co-founder of DataProphet.

"Increasingly, providers are moving to AI that can anticipate a future event and take the corresponding action."

This opens the door to far more effective deep learning networks. With real time data being constantly used by multi-layered neural networks, AI can be utilized to take more and more of the workload away from humans. Instead of referring the decision to a human expert, deep learning can be used to prescribe predicted decisions based on historical, real-time, and analytical data.

Link:
5 Top Deep Learning Trends in 2022 - Datamation

Companies In The Lawful Interception Market Are Adopting AI, Machine Learning, And Blockchain Technologie – Benzinga

LONDON, June 28, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the lawful interception market, leveraging Artificial Intelligence (AI), machine learning, and blockchain technologies for cyber defense is a key trend in the lawful interception market. Lawful interception providers integrate AI and machine learning principles into their solutions to tackle crucial hyper-connected workplace risks with quicker identification, prevention, and responsive capabilities. Advances in technology, such as AI and machine learning, turn the tables on cybercrime. For example, Equifax experienced cyber-attacks which resulted in the loss of sensitive information from more than 140 million American customers. The stolen information included names, addresses, social security numbers, birth dates, and driver's license numbers. Cybersecurity specialists are therefore leveraging AI and machine learning technology to resolve the emerging cyber threats facing individuals, companies, and governments. According to a recent Research and Markets study, the demand for artificial intelligence in cyber security is expected to reach $38.2 billion by 2026.

Request for a sample of the global lawful interception market report

The global lawful interception market size is expected to grow from $2.96 billion in 2021 to $3.57 billion in 2022, a compound annual growth rate (CAGR) of 20.64%. The growth is mainly due to companies resuming operations and adapting to the new normal while recovering from the COVID-19 impact, which had earlier led to restrictive containment measures involving social distancing, remote working, and the closure of commercial activities that created operational challenges. The market is expected to reach $7.86 billion in 2026, at a CAGR of 21.84%.
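As a quick sanity check on those figures, the standard compound annual growth rate formula reproduces the reported rates from the dollar values above (in $ billions; the small differences come from rounding in the underlying report).

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

print(f"2021 -> 2022: {cagr(2.96, 3.57, 1):.2%}")  # ~20.6%, reported 20.64%
print(f"2022 -> 2026: {cagr(3.57, 7.86, 4):.2%}")  # ~21.8%, reported 21.84%
```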

The increasing number of cybercrimes is expected to propel the growth of the lawful interception market. Cybercrime here refers to the growing number of cyber-attacks carried out through social media platforms, the internet, and hacking software. Rising cybercrime drives the growth of lawful interception because interception is a key tool for identifying crimes. As per the Internet Crime Report 2021, published by the Federal Bureau of Investigation (FBI) in the U.S., there were approximately 791,790 complaints of suspected internet crime, an increase of more than 300,000 complaints from 2019. For instance, in 2021, Tessian Research, a data loss prevention (DLP) on email company, found that employees received an average of 14 malicious emails per year. Phishing is the most popular type of cybercrime, in which criminals seek to gain sensitive information by sending phony emails or messages. Therefore, increasing cybercrime drives the lawful interception market.

Major players in the lawful interception market are Utimaco, Vocal Technologies, AQSACOM, Verint, BAE Systems, SS8 Networks, Signalogic, IPS S.P.A, Tracespan, Accuris Networks, EVE Compliancy Solutions, Squire Technologies, Incognito Software, Net Optics, and Ixia.

The global lawful interception market analysis is segmented by device into mediation devices, routers, intercept access point (IAP), gateways, switches, management servers, and others; by network technology into Voice-Over-Internet Protocol (VoIP), Long Term Evolution (LTE), Wireless Local Area Network (WLAN), Worldwide Interoperability for Microwave Access (WiMAX), Digital Subscriber Line (DSL), Public Switched Telephone Network (PSTN), Integrated Services for Digital Network (ISDN), and others; by communication content into voice communication, video, text messaging, facsimile, digital pictures, and file transfer; and by end user into law enforcement agencies and government.

North America was the largest region in the lawful interception market in 2021. Asia Pacific is expected to be the fastest-growing region in the forecast period. The regions covered in the lawful interception industry report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Lawful Interception Global Market Report 2022 - Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide lawful interception market overviews, analyze and forecast market size and growth for the whole market as well as for its segments and geographies, and cover market trends, drivers, restraints, and leading competitors' revenues, profiles, and market shares. The series spans over 1,000 industry reports, covering over 2,500 market segments and 60 geographies.

The report also gives in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Cybersecurity Global Market Report 2022 By Solution (Network Security, Cloud Application Security, End-Point Security, Secure Web Gateway, Internet Security), By Enterprise Size (Small & Medium Enterprise, Large Enterprise), By Deployment Type (Cloud, On Premises), By End-Use (BFSI, IT & Telecommunications, Retail, Healthcare, Government, Manufacturing, Travel And Transportation, Energy And Utilities) Market Size, Trends, And Global Forecast 2022-2026

Blockchain AI Global Market Report 2022 By Technology (Computer Vision, Machine Learning (ML), Natural Language Processing (NLP)), By Vertical (BFSI, Telecom & IT, Healthcare And Life Science, Manufacturing, Media & Environment, Automotive), By Application (Smart Contract, Payment, Data Security, Logistics And Supply Chain Management, Business Process Optimization) Market Size, Trends, And Global Forecast 2022-2026

Voice Over Internet Protocol (VoIP) Global Market Report 2022 - By Type (Integrated Access Or Session Initiation Protocol (SIP) Trunking, Managed IP PBX, Hosted IP PBX), By Access Type (Phone To Phone, Computer To Computer, Computer To Phone), By Call Type (International VoIP Calls, Domestic VoIP Calls), By Medium (Fixed, Mobile), By End User (Consumers, Small And Medium Businesses, Large Enterprises) - Market Size, Trends, And Global Forecast 2022-2026

Interested to know more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries, including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets which help its users assess supply-demand gaps.

Read more:
Companies In The Lawful Interception Market Are Adopting AI, Machine Learning, And Blockchain Technologie - Benzinga

Machine learning hiring levels in the medical industry fell to a year-low in April 2022 – Medical Device Network

The proportion of medical companies hiring for machine learning related positions dropped in April 2022 compared with the equivalent month last year, with 28.3% of the companies included in our analysis recruiting for at least one such position.

This latest figure was lower than the 32.9% of companies who were hiring for machine learning related jobs a year ago and a decrease compared to the figure of 38.4% in March 2022.

As for the share of all job openings linked to machine learning, related postings rose in April 2022, with 0.9% of newly posted job advertisements being tied to the topic.

This latest figure was the highest monthly figure recorded in the past year and is an increase compared to the 0.8% of newly advertised jobs that were linked to machine learning in the equivalent month a year ago.

Machine learning is one of the topics that GlobalData, from which our data for this article is taken, has identified as a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that medical companies are currently hiring for machine learning jobs at a rate lower than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1.3% in April 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.

Read more here:
Machine learning hiring levels in the medical industry fell to a year-low in April 2022 - Medical Device Network

Madrona and PitchBook Partner to Bring Machine Intelligence to the Intelligent Applications 40 (#IA40) List – Business Wire

SEATTLE--(BUSINESS WIRE)--Madrona, a leading venture investor in artificial intelligence and machine learning companies, today announced a partnership with PitchBook to power the 2022 Intelligent Applications 40 (#IA40) and released data based on the 2021 list. Leveraging PitchBook's industry-leading data as well as a new machine learning model, PitchBook and Madrona provide differentiated analysis of the market outlook for intelligent applications.

According to PitchBook, IA40 companies have, in aggregate, raised over $3 billion in new rounds of financing since the launch of the inaugural IA40 list in late November 2021. Additionally, despite the current market turmoil, these companies have announced over $848 million in new venture financing in the second quarter, further reinforcing the promising long-term market outlook for intelligent applications. The IA40 companies will need to navigate the same challenging market conditions faced by all VC-backed startups, but they show promise when applying the PitchBook predictive algorithm.

"We are excited to bring PitchBook on board as we look to the #IA40 2022 to be released this fall. Madrona has been investing in the founders and teams building intelligent applications for over ten years. We believe machine intelligence is the future of software," commented Ishani Ummat, Investor at Madrona. "What better way to help generate a meaningful list of intelligent app companies than to leverage machine learning and predictive software in the process?"

Madrona launched the inaugural IA40 with support from Goldman Sachs and 50 of the nation's top venture firms in the fall of 2021. A ranking of the top 40 intelligent application companies, the list spans early- to late-stage private companies across all industries. Intelligent apps harness machine learning to process historical and real-time data to create a continuous learning system. Companies on the inaugural list include Starburst, Gong, Hugging Face, OctoML, SeekOut, and Abnormal Security. See the full list at http://www.ia40.com

PitchBook is well known for delivering timely, comprehensive, and transparent data on private and public equity markets, collected through its proprietary information infrastructure. In addition to distributing data and research, PitchBook's Institutional Research Group also develops tools and models that help clients make more informed investment and business development decisions. The algorithm powering the 2022 IA40 list is part of a larger initiative, launching later this year, that will enable PitchBook users to predict liquidity events for private companies.

"At PitchBook, we're constantly expanding our data and research across all asset classes and building tools to actively surface insights for our clients. Combining our data and insights with machine learning capabilities, we're in a unique position to predict outcomes and enhance decision-making for our core clients. Our work with Madrona and the IA40 is a powerful example of the possibilities associated with intelligent applications and applying the technology to lead to better outcomes for our industry," commented Daniel Cook, CFA and Head of Quantitative Research at PitchBook.

Read more on our blog about the partnership.

Interested in the IA40 founders and companies? Check out our podcasts with founders from Starburst, RunwayML, Hugging Face, OctoML, and SeekOut, with more on the way! https://www.madrona.com/category/podcast/

About Madrona

Madrona (www.madrona.com) is a venture capital firm based in Seattle, WA. With more than 25 years of investing in early-stage technology companies, the firm has worked with founders from day one to help build their companies for the long run. Madrona invests predominantly in seed and Series A rounds across the information technology spectrum, and in 2018 it raised its first fund dedicated to initial investments in acceleration-stage (Series B and C) companies. Madrona manages over $2 billion and was an early investor in companies such as Amazon, Smartsheet, Isilon, Redfin, and Snowflake.

Original post:
Madrona and PitchBook Partner to Bring Machine Intelligence to the Intelligent Applications 40 (#IA40) List - Business Wire

Ruth Mayes Walks Through the Ins and Outs of Machine Learning – Seed World

How much do you know about machine learning and how it can be applied to plant breeding? It's a complicated subject, but Computomics, a bioinformatics data analysis company for plant breeding based in Germany, sat down with us at the International Seed Federation's World Seed Congress to help us understand more about it.

"Computomics was founded 10 years ago, and our co-founder was one of the first scientists to apply machine learning capabilities to biological datasets," says Ruth Mayes, director of global business strategy at Computomics.

And now, 10 years on from its founding, Computomics offers an innovative predictive breeding technology that allows breeders to identify the genetics within a crop's germplasm, find a target, and breed forward.

"We take field data and correlating genetic markers to build a model using machine learning. And we look at combinations of genetic markers, how these markers combine together, and how that influences the phenotype," says Mayes.

But why is machine learning a game changer for plant breeding?

"It allows the breeder to go away from just testing his germplasm to actually understanding all the elite genetics within his germplasm," Mayes says. "This allows him to really define a target, which is a trait or feature, and really breed towards it."

Make sure to visit the Computomics website to learn more about its innovative machine learning technology, which helps plant breeders achieve the best possible future crop varieties.

Originally posted here:
Ruth Mayes Walks Through the Ins and Outs of Machine Learning - Seed World

How we learned to break down barriers to machine learning – Ars Technica

Dr. Sephus discusses breaking down barriers to machine learning at Ars Frontiers 2022.

Welcome to the week after Ars Frontiers! This article is the first in a short series of pieces that will recap each of the day's talks for the benefit of those who weren't able to travel to DC for our first conference. We'll be running one of these every few days for the next couple of weeks, and each one will include an embedded video of the talk (along with a transcript).

For today's recap, we're going over our talk with Amazon Web Services tech evangelist Dr. Nashlie Sephus. Our discussion was titled "Breaking Barriers to Machine Learning."

Dr. Sephus came to AWS via a roundabout path, growing up in Mississippi before eventually joining a tech startup called Partpic. Partpic was an artificial intelligence and machine-learning (AI/ML) company with a neat premise: Users could take photographs of tooling and parts, and the Partpic app would algorithmically analyze the pictures, identify the part, and provide information on what the part was and where to buy more of it. Partpic was acquired by Amazon in 2016, and Dr. Sephus took her machine-learning skills to AWS.

When asked, she identified access as the biggest barrier to the greater use of AI/ML; in a lot of ways, it's another wrinkle in the old problem of the digital divide. A core component of being able to utilize most common AI/ML tools is having reliable and fast Internet access, and drawing on her background, Dr. Sephus pointed out that a lack of access to technology in primary schools in poorer areas of the country sets kids on a path away from being able to use the kinds of tools we're talking about.

Furthermore, lack of early access leads to resistance to technology later in life. "You're talking about a concept that a lot of people think is pretty intimidating," she explained. "A lot of people are scared. They feel threatened by the technology."

One way of tackling the divide here, in addition to simply increasing access, is changing the way that technologists communicate about complex topics like AI/ML to regular folks. "I understand that, as technologists, a lot of times we just like to build cool stuff, right?" Dr. Sephus said. "We're not thinking about the longer-term impact, but that's why it's so important to have that diversity of thought at the table and those different perspectives."

Dr. Sephus said that AWS has been hiring sociologists and psychologists to join its tech teams to figure out ways to tackle the digital divide by meeting people where they are rather than forcing them to come to the technology.

Simply reframing complex AI/ML topics in terms of everyday actions can remove barriers. Dr. Sephus explained that one way of doing this is to point out that almost everyone has a cell phone, and when you're talking to your phone or using facial recognition to unlock it, or when you're getting recommendations for a movie or for the next song to listen to, these things are all examples of interacting with machine learning. Not everyone groks that, especially technological laypersons, and showing people that these things are driven by AI/ML can be revelatory.

"Meeting them where they are, showing them how these technologies affect them in their everyday lives, and having programming out there in a way that's very approachableI think that's something we should focus on," she said.

Read this article:
How we learned to break down barriers to machine learning - Ars Technica