Kauricone: Machine learning tackles the mundane, making our lives easier – IT Brief New Zealand

A New Zealand startup producing its own servers is expanding into the realm of artificial intelligence, creating machine learning solutions that carry out common tasks while relieving people of repetitive, unsatisfying work. Having spotted an opportunity for the development of low-cost, high-efficiency and environmentally sustainable hardware, Kauricone has more recently pivoted in a fascinating direction: creating software that thinks about mundane problems, so we don't have to. These tasks include identifying trash for improved recycling, looking at items on roads for automated safety, pest identification and, in the ultimate alleviation of a notoriously sleep-inducing task, counting sheep.

Managing director, founder and tech industry veteran Mike Milne says Kauricone products include application servers, cluster servers and internet of things servers. It was in this latter category that the notion emerged of applying machine learning at the network's edge.

"Having already developed low-cost, low-power edge hardware, we realised there was a big opportunity for the application of smart computing in some decidedly not-so-enjoyable everyday tasks," relates Milne. "After all, we had all the basic building blocks already: the hardware, the programming capability, and, with good mobile network coverage, the connectivity."

Situation

Work is just another name for tasks people would rather not do themselves, or cannot do for themselves. And despite living in a fabulously advanced age, there is a persistent reality of all manner of tasks which must be done every day, but which don't require a particularly high level of engagement or even intelligence.

It is these tasks for which machine learning (ML) is often a highly promising solution. "ML collects and analyses data by applying statistical analysis and pattern matching to learn from past experiences. Using the trained data, it provides reliable results, and people can stop doing the boring work," says Milne.

There is in fact more to it than meets the eye (so to speak) when it comes to computer image recognition. That's why CAPTCHA challenges are often little more than "identify all the images containing traffic lights": because distinguishing objects is hard for bots. ML overcomes the challenge through the 'training' mentioned by Milne: the computer is shown thousands of images and learns which are hits, and which are misses.

"Potentially, there are as many use cases as you have dull but necessary tasks in the world," Milne notes. "So far, we've tackled a few. Rocks on roads are dangerous, but monitoring thousands of kilometres of tarmac comes at a cost. Construction waste is extensive, bad for the environment and should be managed better. Sheep are plentiful and not always in the right paddock. And pests put New Zealand's biodiversity at risk."

Solution

Tackling each of these problems, Kauricone started with its own-developed RISC IoT server hardware as the base. Running Ubuntu and programmed with Python or other open-source languages, the servers typically feature 4GB of memory and 128GB of solid-state storage. Consuming as little as 3 watts, the solar-powered edge devices run indefinitely on a single solar panel. This makes for a reliable, low-cost, field-ready device, says Milne.

The Rocks on Roads project made clear the challenges of 'simple' image identification, with Kauricone eventually running a training model around the clock for eight days, gathering 35,000 iterations of rock images, which expanded to 3,000,000 identifiable traits (bear in mind, a human identifies a rock almost instantly, perhaps faster if hurled). With this training, the machine became very good at detecting rocks on roads.
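
Kauricone has not published its training pipeline, but transfer learning on a pretrained vision model is a common way to build this kind of classifier. Here is a minimal PyTorch sketch of the general approach; the folder layout, class names and hyperparameters are hypothetical:

```python
# Illustrative sketch only: training a binary "rock / no rock" image classifier
# by fine-tuning a pretrained network. Not Kauricone's actual pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: images/rock/*.jpg and images/no_rock/*.jpg
dataset = datasets.ImageFolder("images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: rock vs. no rock

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train head only
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```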

For a new project involving construction waste, the Kauricone IoT server will maintain a vigilant watch on the types and amounts of waste going into building-site skips. Trained to identify types of waste, the server will produce data that forms the basis for improving waste management and recycling, or for redirecting certain items for more responsible disposal.

Counting sheep isn't only a method for accelerating sleep time, it's also an essential task for farmers across New Zealand. And that's not all: as an ML exercise, it anticipates the potential for smarter stock management, as does the related pest identification test case pursued by Kauricone. The ever-watchful camera and supporting hardware manage several tasks: identifying individual animals, counting them, and also monitoring grass levels, essential for ovine nourishment. Tested so far on a small flock, this application is ready for scale.

Results

Milne says the small test cases pursued by Kauricone to date are just the beginning, and he anticipates considerable potential for ML applications across all walks of life. "There is literally no end to the number of daily tasks where computer vision and ML can alleviate our workload and contribute to improved efficiency and, ultimately, a better and more sustainable planet," he notes.

The Rocks on Roads project promises improved safety with a lower 'human' overhead, reducing or eliminating the possibility of human error. Waste management is a multifaceted problem, where employing personnel is rendered difficult by simple economics (and potentially stultifying work). New Zealand's primary sector is ripe for technologically powered performance improvements which could boost already impressive productivity through automation and improved control. And pest management can help the Department of Conservation and allied parties achieve better results using fewer resources.

"It's early days yet," says Milne, "but the results from these exploratory projects are promising. With the connectivity of ever-expanding cellular and low-power networks like Sigfox and LoRaWAN, the enabling infrastructure is increasingly available even in remote places. And purpose-built low-power hardware brings computing right to the edge. Now, it's just a matter of identifying opportunities and creating the applications."

For more information visit Kauricone's website.


5 Ways Data Scientists Can Advance Their Careers – Spiceworks News and Insights

Data and machine learning people join companies with the promise of cutting-edge ML models and technology. But often, they spend 80% of their time cleaning data or dealing with data riddled with missing values and outliers, a frequently changing schema, and massive load times. The gap between expectation and reality can be massive.

Although data scientists might initially be excited to tackle insights and advanced models, that enthusiasm quickly deflates amidst daily schema changes, tables that stop updating, and other surprises that silently break models and dashboards.

While data science applies to a range of roles, from product analytics to putting statistical models in production, one thing is usually true: data scientists and ML engineers often sit at the tail end of the data pipeline. They're data consumers, pulling data from data warehouses, S3, or other centralized sources. They analyze it to help make business decisions or use it as training input for machine learning models.

In other words, they're impacted by data quality issues but aren't often empowered to travel up the pipeline to fix them. So they write a ton of defensive data preprocessing into their work, or move on to a new project.

If this scenario sounds familiar, you don't have to give up or complain that the data engineering upstream is forever broken. Make like a scientist and get experimental. You're the last step in the pipe and putting models into production, which means you're responsible for the outcome. While this might sound terrifying or unfair, it's also a brilliant opportunity to shine and make a big difference in your team's business impact.

Here are five things data scientists and ML engineers gain when they get out of defense mode and ensure that, even if they didn't create the data quality issues, they prevent those issues from impacting the teams that rely on data.

Business executives hesitate to make decisions based on data alone. A KPMG report showed that 60% of companies don't feel very confident in their data, and 49% of leadership teams didn't fully support the internal data and analytics strategy.

Good data scientists and ML engineers can help by increasing data accuracy, then getting it into dashboards that help key decision-makers. In doing so, they'll have a direct positive impact. But manually checking data for quality issues is error-prone and a huge drag on your velocity and productivity.

Using data quality testing (e.g., with dbt tests) and data observability helps ensure you find out about quality issues before your stakeholders do, winning their trust in you (and the data) over time.
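
dbt tests themselves are declared in YAML and SQL; as a language-neutral sketch of the same idea, here is a minimal pandas version of automated quality checks. The column names, accepted values and data source are hypothetical:

```python
# Sketch of automated data quality checks in the spirit of dbt's
# not_null, unique, and accepted_values tests.
import pandas as pd

def check_quality(df: pd.DataFrame) -> list:
    failures = []
    if df["user_id"].isnull().any():
        failures.append("user_id contains nulls")        # not_null
    if df["user_id"].duplicated().any():
        failures.append("user_id contains duplicates")   # unique
    if not df["region"].isin({"NA", "EU", "APAC"}).all():
        failures.append("region has unexpected values")  # accepted_values
    return failures

df = pd.read_parquet("warehouse_export.parquet")  # hypothetical source
problems = check_quality(df)
if problems:  # fail loudly before a stakeholder finds out the hard way
    raise ValueError(f"Data quality checks failed: {problems}")
```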

Data quality problems can easily lead to an annoying blame game between data science, data engineering, and software engineering. Who broke the data? And who knew? And who is going to fix it?

But when bad data goes out into the world, it's everyone's fault. Your stakeholders want the data to work so that the business can move forward with an accurate picture.

Good data scientists and ML engineers build accountability for all data pipeline steps with Service Level Agreements (SLAs). SLAs define data quality in quantifiable terms and assign responders who should spring into action to fix problems. SLAs help avoid the blame game entirely.

Trust is fragile, and it erodes quickly when your stakeholders catch mistakes and start assigning blame. But what about when they don't catch quality issues? Then the model is poor, or bad decisions are made. In either case, the business suffers.

For example, what if you have a single entity logged as both "Dallas-Fort Worth" and "DFW" in a database? When you test a new feature, everyone in "Dallas-Fort Worth" is shown variation A and everyone in "DFW" is shown variation B. No one catches the discrepancy. You can't draw conclusions about users in the Dallas-Fort Worth area: your test has been thrown off, and the groups haven't been properly randomized.
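
One defensive fix is to canonicalize entity names before assigning experiment groups. A small illustrative sketch, with a made-up mapping table:

```python
# Sketch: collapse duplicate entity spellings so "Dallas-Fort Worth" and
# "DFW" land in the same experiment bucket. The mapping is hypothetical.
import pandas as pd

CANONICAL = {"dfw": "Dallas-Fort Worth", "dallas-fort worth": "Dallas-Fort Worth"}

def canonicalize(city: str) -> str:
    return CANONICAL.get(city.strip().lower(), city)

df = pd.DataFrame({"city": ["DFW", "Dallas-Fort Worth", "Austin"]})
df["city"] = df["city"].map(canonicalize)
print(df["city"].value_counts())  # both spellings now count as one entity
```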

Clear the path for better experimentation and analysis through a foundation of higher quality data. By using your expertise to boost quality, your data will become more reliable, and your business teams can run meaningful tests. The team can focus on what to test next instead of doubting the results of the tests.

Confidence in the data starts with you; if you don't have a handle on high-quality, reliable data, you'll carry that burden into your interactions with the product and your colleagues.

So stake your claim as the point person for data quality and data ownership. You can have input into defining quality and delegating responsibility for fixing different issues. Remove friction between data science and engineering.

If you can lead the charge to define and boost data quality, you'll impact almost every other team within your organization. Your teammates will appreciate the work you do to reduce org-wide headaches.

Incomplete or unreliable data can lead to terabytes of wasted storage. That data lives in your warehouse, getting included in queries that incur compute costs. Low-quality data can be a major drag on your infrastructure bill as it gets included in the filtering-out process time and again.

Identifying low-quality data is one way to immediately create value for your organization, especially for pipelines that see heavy traffic for product analytics and machine learning. Recollect, reprocess, or impute and clean existing values to reduce storage and compute costs.

Keep track of the tables and data you clean up, and the number of queries run on those tables. It's essential to show your team how many queries are no longer running on junk data and how many gigs of storage have been freed up for better things.

All data professionals, seasoned veterans and newcomers alike, should be indispensable parts of the organization. You add value by taking ownership of more reliable data. Although tools, algorithms, and analytics techniques are growing more sophisticated, often the input data is not: it's always unique and business-specific. Even the most sophisticated tools and models can't run well on erroneous data. Through the five steps above, the impact of data science can be a boon to your entire organization. Everyone wins when you improve the data your teams depend upon.

Which techniques can help data scientists and ML engineers streamline the data management process? Tell us on Facebook, Twitter, and LinkedIn. We'd love to know!


Artificial intelligence and machine learning now integral to smart power solutions – Times of India

They help to improve efficiency and profitability for utilities.

The utilities space is rapidly transforming. It's shifting from a conventional, highly regulated environment to a tech-driven market at a fast clip. Collating data and optimizing manpower is a constant struggle. The need for smarter optimization of infrastructure has increased monumentally with the outbreak of the pandemic, as has the dependency on technology. There is an urgent need to balance supply and demand, which is where Artificial Intelligence (AI) and Machine Learning (ML) can come into play. Data science, aided by AI and ML, has been leading to several positive developments in the utilities space. Digitalization can increase the profitability of utilities significantly by utilizing smart meters for grids, digital productivity tools and automated back-office processes. According to one study, firms can increase their profitability by 20 to 30 percent.

Digital measures rewire organizations to do better through a fundamental reboot of how work gets done.

Customer Service and AI

According to a Gartner report, most AI investments by utilities go into customer service solutions. Some 86% of the utilities studied used AI in digital marketing, call center support and customer applications. This is testimony to the fact that investments in AI and ML can deliver a high ROI by improving speed and efficiency, thus enhancing customer experience. Customer-facing AI is a low-risk investment, as customer enquiries are often repetitive: billing enquiries, payments, new connections and so on. AI can deliver tangible results for business on the customer service front.

Automatic Meters for Energy Conservation

Manual entry and billing systems are not only time-consuming but also susceptible to errors, and they are expensive too. The Automatic Meter Reading (AMR) system has made a breakthrough here. AMR enables large infrastructure setups to collect data easily and to analyze cost centers and opportunities for improving efficiency in the natural gas, electric and water sectors, among others. It offers real-time billing information for budgeting, and has the advantage of being precise compared to manual entry. Additionally, it can store data at distribution points within the utility's networks, which can be easily accessed over a network using devices like mobiles and handhelds. Energy consumption can be tracked to aid conservation and end energy theft.

Predictive Analytics Enable Smart Grid Options

By leveraging new-age technologies, utilities can benefit immensely. These technologies help build smart power grids in the energy sector. The sector relies heavily on a complex infrastructure that can face multiple issues as a result of maintenance problems, weather conditions, system or equipment failure, demand surges and misallocation of resources. Overloading and congestion lead to a lot of energy being wasted. The grids produce humongous amounts of data which, when properly utilized, help with risk mitigation. With the large volume of data continuously passing over the grid, it can be challenging to collect and aggregate it, and operators can miss insights, which can lead to malfunctions or outages. With the help of ML algorithms, these insights can be obtained so the grids keep functioning smoothly. Automated data management can help keep the data accurate. And with the help of predictive analytics, operators can predict grid failures before customers are affected, creating greater customer satisfaction and mitigating financial loss.
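
As a rough illustration of the idea, and not any utility's actual system, a classifier trained on historical sensor readings can rank assets by failure risk. The file and feature names below are hypothetical:

```python
# Sketch: flag grid assets likely to fail soon from historical sensor data.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("grid_sensor_history.csv")  # hypothetical historical data
features = ["load_pct", "transformer_temp", "voltage_variance", "asset_age_yrs"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["failed_within_30d"], test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank live assets by failure probability so crews can be dispatched early.
risk = model.predict_proba(df[features])[:, 1]
```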

Efficient and Sustainable Energy Consumption

Smart grids allow for better allocation of energy for consumption, as it is based on demand; this saves resources and helps with load management and forecasting. AI can also deal with issues pertaining to vegetation by analyzing operational data and statistics, which can help to proactively deal with wildfires. Thus, the system becomes sustainable and efficient. To overcome issues pertaining to weather-related maintenance, automation helps receive signals and prioritize the areas that need attention, saving money and cutting downtime. To achieve this, the sector is adopting ML capabilities, as it needs to be able to access automation quickly and easily.

The construction sector is also a major beneficiary of these solutions. Building codes and architecture often pose humongous challenges that take a long time to meet. But some solutions help builders and developers test these applications seamlessly without any system interruptions. By integrating AI and ML into data management platforms, developers enable data science teams to spend more time innovating and much less time on maintenance. With the rise in computational power and accessibility of the cloud, deep learning algorithms are able to train faster while their cost is optimized. AI and ML are able to impact many aspects of business. AI can enhance the quality of human jobs by facilitating remote working. It can help with data collection and analysis and provide actionable inputs. Data analytics platforms can throw light on areas of inefficiency and help providers keep costs down.

Though digital transformation might appear intimidating, its opportunities far outweigh the cost and risk involved. Gradually, all utilities will undergo digital transformation, as it has begun to take root across industrial sectors. This AI-led transformation will improve productivity, deliver revenue gains, make networks more reliable and safe, accelerate customer acquisition, and facilitate entry into new areas of business. Globally, the digital utility market is growing at a CAGR of 11.7% for the period 2019 to 2027. In 2018, the global digital utility market generated revenue of US$141.41 billion, and it is expected to reach US$381.38 billion by 2027, according to a study by ResearchAndMarkets.com. As the sector evolves, the advantages of AI and ML will come into play, leading to smarter grids, more efficient operations and higher customer satisfaction. The companies that are in a position to take advantage of this opportunity will be ready for whatever challenges emerge in the market.

Views expressed above are the author's own.



Man wins competition with AI-generated artwork and some people aren’t happy – The Register

In brief A man won an art competition with an AI-generated image, and some people aren't best pleased about it.

The image, titled Théâtre D'opéra Spatial, looks like an impressive painting of an opera scene with performers on stage, and an abstract audience in the background with a huge moon-like window of some sort. It was created by Jason Allen, who went through hundreds of iterations of written descriptions fed into the text-to-image generator Midjourney before the software emitted the picture he wanted.

He won first prize, and $300, after he submitted a printed version of the image to the Colorado State Fair's fine art competition. His achievement, however, has raised eyebrows and divided opinions.

"I knew this would be controversial," Allen said in the Midjourney Discord server on Tuesday, according to Vice. "How interesting is it to see how all these people on Twitter who are against AI generated art are the first ones to throw the human under the bus by discrediting the human element! Does this seem hypocritical to you guys?"

Washington Post tech reporter Drew Harwell, who covered the brouhaha, raised an interesting point: "People once saw photography as cheating, too (just pushing a button) and now we realize the best creations rely on skilled composition, judgment, and tone," he tweeted.

"Will we one day regard AI art in the same way?"

DeepMind has trained virtual agents to play football (the soccer kind), using reinforcement learning to control their motor and teamwork skills.

Football is a fine game for testing software's planning skills in a physical domain, as it requires bots to learn how to move and coordinate their computer body parts alongside others to achieve a goal. These capabilities will prove useful in the future for real robots and will be a necessary part of artificial general intelligence.

"Football is a great domain to explore this very general problem," DeepMind researchers and co-authors of a paper published in Science Robotics this week told The Register. "It requires planning at the level of skills such as tackling, dribbling or passing, but also longer-term concerns such as clearing the ball or positioning.

"Humans can do this without actively thinking at the level of high frequency motor control or individual muscle movements. We don't know how planning is best organized at such different scales, and achieving this with AI is an active open problem for research."

At first, the humanoids move their limbs randomly in a virtual environment, and gradually they learn to run, tackle, and score using imitation and reinforcement learning over time.
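
DeepMind's agents use large neural-network policies, but the underlying trial-and-error loop can be shown with something far simpler. Below is a toy tabular Q-learning sketch on a one-dimensional "walk to the goal" task, illustrative only and in no way DeepMind's method:

```python
# Toy reinforcement-learning loop: an agent learns by trial and error to
# walk right along a 5-state line to reach the goal state.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2        # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        if random.random() < eps:        # explore occasionally
            a = random.choice(ACTIONS)
        else:                            # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Temporal-difference update toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```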

They were pitted against each other in teams of two. You can see a demonstration in the video below.

[Embedded YouTube video]

It was only a matter of time before someone went and built a viral text-to-image tool to generate pornographic images.

Stable Diffusion is taking the AI world by storm. The software, including the source code, model and its weights, has been released publicly, allowing anyone with some level of coding skill to tailor their own system to a specific use case. One developer has built and released Porn Pen to the world, with which users can choose a series of tags, like "babe" or "chubby," to generate an NSFW image.

"I think it's somewhat inevitable that this would come to exist when [OpenAI's] DALL-E did," Os Keyes, a PhD candidate at Seattle University, told TechCrunch. "But it's still depressing how both the options and defaults replicate a very heteronormative and male gaze."

It's unclear how this will affect the sex industry, and many are concerned text-to-image tools could be driven to create deepfakes of someone or pushed to produce illegal content. These systems have sometimes struggled to visualize human anatomy correctly.

People have noticed these ML models adding nipples to random parts of the body, or sometimes an extra arm or something poking out somewhere. All of this is rather creepy.

There's a mobile app that claims it can translate the meaning of a cat's meows into plain English using machine-learning algorithms.

Aptly named MeowTalk, the app analyses recordings of cat noises to predict the animal's mood and interpret what it might be trying to say. It tells owners if their pet felines are happy, resting, or hunting, and may translate this into phrases such as "let me rest" or "hey, I'm so happy to see you," for example.

"We're trying to understand what cats are saying and give them a voice" Javier Sanchez, a founder of MeowTalk, told the New York Times. "We want to use this to help people build better and stronger relationships with their cats," he added. Code using machine learning algorithms to decode and study animal communication, however, isn't always reliable.

MeowTalk doesn't interpret the intent of purring very well, and sometimes the text translations of cat noises are very odd. When a reporter picked up her cat and she meowed, the app apparently thought the cat told her owner: "Hey baby, let's go somewhere private!"

Stavros Ntalampiras, a computer scientist at the University of Milan, who was called in to help the MeowTalk founders, admitted that "a lot of translations are kind of creatively presented to the user," and said "it's not pure science at this stage."


All You Need to Know About Support Vector Machines – Spiceworks News and Insights

A support vector machine (SVM) is defined as a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. This article explains the fundamentals of SVMs, their working, types, and a few real-world examples.

A support vector machine (SVM) is a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. SVMs are widely adopted across disciplines such as healthcare, natural language processing, signal processing applications, and speech & image recognition fields.

Technically, the primary objective of the SVM algorithm is to identify a hyperplane that distinctly segregates the data points of different classes. The hyperplane is positioned so that the largest margin separates the classes under consideration.

The support vector representation is shown in the figure below:

As seen in the above figure, the margin refers to the maximum width of the slice that runs parallel to the hyperplane without any internal support vectors. Such hyperplanes are easier to define for linearly separable problems; for real-life problems or scenarios, however, the SVM algorithm tries to maximize the margin between the support vectors, which can give rise to incorrect classifications for small sections of data points.

SVMs were originally designed for binary classification problems. However, with the rise of computationally intensive multiclass problems, several binary classifiers can be constructed and combined to formulate SVMs that implement such multiclass classifications through binary means.

In the mathematical context, an SVM refers to a set of ML algorithms that use kernel methods to transform data features by employing kernel functions. Kernel functions rely on the process of mapping complex datasets to higher dimensions in a manner that makes data point separation easier. The function simplifies the data boundaries for non-linear problems by adding higher dimensions to map complex data points.

While introducing additional dimensions, the data is not entirely transformed, as that would be a computationally taxing process. This technique is usually referred to as the 'kernel trick', wherein data transformation into higher dimensions is achieved efficiently and inexpensively.
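
A short sketch of the kernel trick in practice, using scikit-learn on synthetic data with a circular class boundary (illustrative, not from the article):

```python
# The RBF kernel implicitly maps points into a higher-dimensional space
# without ever computing that space explicitly.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)  # circular boundary

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)        # kernelized: no explicit mapping
print("linear accuracy:", linear.score(X, y))  # poor: not linearly separable
print("rbf accuracy:", rbf.score(X, y))        # near-perfect on this toy data
```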

The idea behind the SVM algorithm was first captured in 1963 by Vladimir N. Vapnik and Alexey Ya. Chervonenkis. Since then, SVMs have gained enough popularity as they have continued to have wide-scale implications across several areas, including the protein sorting process, text categorization, facial recognition, autonomous cars, robotic systems, and so on.


The working of a support vector machine can be better understood through an example. Let's assume we have red and black labels with the features denoted by x and y. We intend to have a classifier for these tags that classifies data into either the red or black category.

Let's plot the labeled data on an x-y plane, as below:

A typical SVM separates these data points into red and black tags using the hyperplane, which is a two-dimensional line in this case. The hyperplane denotes the decision boundary line, wherein data points fall under the red or black category.

A hyperplane is defined as the line that maximizes the margin between the two closest tags or labels (red and black). The distance from the hyperplane to the most immediate label is the largest, making the data classification easier.

The above scenario is applicable for linearly separable data. However, for non-linear data, a simple straight line cannot separate the distinct data points.

Here's an example of a non-linear, complex dataset:

The above dataset reveals that a single hyperplane is not sufficient to separate the involved labels or tags. However, here, the vectors are visibly distinct, making segregating them easier.

For data classification, you need to add another dimension to the feature space. For the linear data discussed until this point, two dimensions of x and y were sufficient. In this case, we add a z-dimension to better classify the data points. Moreover, for convenience, let's use the equation for a circle, z = x² + y².

With the third dimension, the slice of feature space along the z-direction looks like this:

Now, with three dimensions, the hyperplane in this case runs parallel to the x-direction at a particular value of z; let's consider it as z = 1.

The remaining data points are further mapped back to two dimensions.

The above figure reveals the boundary for data points along features x, y, and z along a circle of radius 1 unit, which segregates the two labels or tags via the SVM.
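
The same toy geometry can be reproduced in code: adding z = x² + y² as an explicit third feature makes a plain linear SVM sufficient. A minimal sketch:

```python
# Explicitly add the z = x^2 + y^2 dimension so a flat (linear) hyperplane
# near z = 1 can separate points inside the unit circle from those outside.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)   # inside vs. outside circle

z = (X[:, 0] ** 2 + X[:, 1] ** 2).reshape(-1, 1)    # the added third dimension
X3 = np.hstack([X, z])

clf = SVC(kernel="linear").fit(X3, y)
print("accuracy with explicit z dimension:", clf.score(X3, y))
```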

Let's consider another method of visualizing data points in three dimensions for separating two tags (two different colored tennis balls in this case). Consider the balls lying on a 2D plane surface. Now, if we lift the surface upward, all the tennis balls are distributed in the air. The two differently colored balls may separate in the air at one point in this process. While this occurs, you can use or place the surface between the two segregated sets of balls.

In this entire process, the act of lifting the 2D surface refers to the event of mapping data into higher dimensions, which is technically referred to as kernelling, as mentioned earlier. In this way, complex data points can be separated with the help of more dimensions. The concept highlighted here is that the data points continue to get mapped into higher dimensions until a hyperplane is identified that shows a clear separation between the data points.

The figure below gives the 3D visualization of the above use case:


Support vector machines are broadly classified into two types: simple or linear SVM and kernel or non-linear SVM.

A linear SVM refers to the SVM type used for classifying linearly separable data. This implies that when a dataset can be segregated into categories or classes with the help of a single straight line, it is termed a linear SVM, and the data is referred to as linearly distinct or separable. Moreover, the classifier that classifies such data is termed a linear SVM classifier.

A simple SVM is typically used to address classification and regression analysis problems.

Non-linear data that cannot be segregated into distinct categories with the help of a straight line is classified using a kernel or non-linear SVM. Here, the classifier is referred to as a non-linear classifier. The classification can be performed with a non-linear data type by adding features into higher dimensions rather than relying on 2D space. Here, the newly added features fit a hyperplane that helps easily separate classes or categories.

Kernel SVMs are typically used to handle optimization problems that have multiple variables.


SVMs rely on supervised learning methods to classify unknown data into known categories. These find applications in diverse fields.

Here, we'll look at some of the top real-world examples of SVMs:

The geo-sounding problem is one of the widespread use cases for SVMs, wherein the process is employed to track the planet's layered structure. This entails solving inversion problems, where the observations or results of a problem are used to factor in the variables or parameters that produced them.

In the process, linear functions and support vector algorithmic models separate the electromagnetic data. Moreover, linear programming practices are employed while developing the supervised models in this case. As the problem size is considerably small, the dimension size is inevitably tiny, which accounts for mapping the planet's structure.

Soil liquefaction is a significant concern when events such as earthquakes occur. Assessing its potential is crucial while designing any civil infrastructure. SVMs play a key role in determining the occurrence and non-occurrence of such liquefaction aspects. Technically, SVMs handle two tests: SPT (Standard Penetration Test) and CPT (Cone Penetration Test), which use field data to adjudicate the seismic status.

Moreover, SVMs are used to develop models that involve multiple variables, such as soil factors and liquefaction parameters, to determine the ground surface strength. It is believed that SVMs achieve an accuracy of close to 96-97% for such applications.

Protein remote homology is a field of computational biology where proteins are categorized into structural and functional parameters depending on the sequence of amino acids when sequence identification is seemingly difficult. SVMs play a key role in remote homology, with kernel functions determining the commonalities between protein sequences.

Thus, SVMs play a defining role in computational biology.

SVMs are known to solve complex mathematical problems. However, smooth SVMs are preferred for data classification purposes, wherein smoothing techniques that reduce the data outliers and make the pattern identifiable are used.

Thus, for optimization problems, smooth SVMs use algorithms such as the Newton-Armijo algorithm to handle larger datasets that conventional SVMs cannot. Smooth SVM types typically explore math properties such as strong convexity for more straightforward data classification, even with non-linear data.

SVMs classify facial structures vs. non-facial ones. The training data uses two classes of face entity (denoted by +1) and non-face entity (denoted as -1) and n*n pixels to distinguish between face and non-face structures. Further, each pixel is analyzed, and the features from each one are extracted that denote face and non-face characters. Finally, the process creates a square decision boundary around facial structures based on pixel intensity and classifies the resultant images.

Moreover, SVMs are also used for facial expression classification, which includes expressions denoted as happy, sad, angry, surprised, and so on.

In the current scenario, SVMs are used for the classification of images of surfaces: images of surfaces can be fed into SVMs to determine the texture of the surfaces in those images and classify them as smooth or gritty.

Text categorization refers to classifying data into predefined categories. For example, news articles contain politics, business, the stock market, or sports. Similarly, one can segregate emails into spam, non-spam, junk, and others.

Technically, each article or document is assigned a score, which is then compared to a predefined threshold value. The article is classified into its respective category depending on the evaluated score.
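
A minimal sketch of such a pipeline with scikit-learn, using TF-IDF features and a linear SVM; the categories and training snippets are made-up stand-ins:

```python
# Sketch: text categorization with TF-IDF features and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["shares rallied on strong earnings",
         "the minister announced a new policy",
         "the striker scored twice last night",
         "bond yields fell sharply"]
labels = ["business", "politics", "sports", "business"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)
print(clf.predict(["parliament passed the budget bill"]))  # likely 'politics'
```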

For handwriting recognition, a dataset containing passages written by different individuals is supplied to SVMs. Typically, SVM classifiers are trained with sample data initially and are later used to classify handwriting based on score values. Subsequently, SVMs are also used to distinguish between writings by humans and computers.

In speech recognition examples, words from speeches are individually picked and separated. Further, for each word, certain features and characteristics are extracted. Feature extraction techniques include Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), and others.

These methods collect audio data, feed it to SVMs and then train the models for speech recognition.

With SVMs, you can determine whether any digital image is tampered with, contaminated, or pure. Such examples are helpful when handling security-related matters for organizations or government agencies, as it is easier to encrypt and embed data as a watermark in high-resolution images.

Such images contain more pixels; hence, it can be challenging to spot hidden or watermarked messages. However, one solution is to separate each pixel and store data in different datasets that SVMs can later analyze.

Medical professionals, researchers, and scientists worldwide have been toiling hard to find a solution that can effectively detect cancer in its early stages. Today, several AI and ML tools are being deployed for the same. For example, in January 2020, Google developed an AI tool that helps in early breast cancer detection and reduces false positives and negatives.

In such examples, SVMs can be employed, wherein cancerous images can be supplied as input. SVM algorithms can analyze them, train the models, and eventually categorize the images that reveal malignant or benign cancer features.
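
As an illustrative sketch, far simpler than the imaging systems described, an SVM can be trained on the tabular Wisconsin breast cancer dataset that ships with scikit-learn:

```python
# Sketch: classify tumors as malignant or benign with an RBF-kernel SVM.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```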


SVMs are crucial while developing applications that involve the implementation of predictive models. SVMs are easy to comprehend and deploy. They offer a sophisticated machine learning algorithm to process linear and non-linear data through kernels.

SVMs find applications in every domain and real-life scenario where data is handled by adding higher-dimensional spaces. Working with them entails considering factors such as tuning hyper-parameters, selecting a kernel for execution, and investing time and resources in the training phase, all of which help develop robust supervised learning models.
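
A brief sketch of that tuning step, using a grid search over the kernel choice and the C and gamma hyper-parameters (the dataset is chosen only for convenience):

```python
# Sketch: pick kernel and hyper-parameters by cross-validated grid search.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
grid = GridSearchCV(
    SVC(),
    {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```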

Did this article help you understand the concept of support vector machines? Comment below or let us know on Facebook, Twitter, or LinkedIn. We'd love to hear from you!


Machine and deep learning are a MUST at the North-West… – Daily Maverick

The last century alone has seen a meteoric increase in the accumulation of data and we are able to store unfathomable quantities of information to help us solve problems known and unknown. At some point the ability to optimally utilise these vast amounts of data will be beyond our reach, but not beyond that of the tools we have made. At the North-West University (NWU), Professor Marelie Davel, director of the research group MUST Deep Learning, and her team are ensuring that our ever-growing data repositories will continue to benefit society.

The team's focus on machine learning and, specifically, deep learning is creating magic to the untrained eye. Here is why.

"Machine learning is a catch-all term for systems that learn in an automated way from their environment. These systems are not programmed with the steps to solve a specific task, but they are programmed to know how to learn from data. In the process, the system uncovers the underlying patterns in the data and comes up with its own steps to solve the specific task," explains Professor Davel.

According to her, machine learning is becoming increasingly important as more and more practical tasks are being solved by machine learning systems: "From weather prediction to drug discovery to self-driving cars. Behind the scenes we see that many of the institutions we interact with, like banks, supermarket chains and hospitals, all nowadays incorporate machine learning in aspects of their business. Machine learning makes everyday tools, from internet searches to every smartphone photo we take, work better."

The NWU and MUST go a step beyond this by doing research on deep learning. This is a field of machine learning that was originally inspired by the idea of artificial neural networks, which were simple models of how neurons were thought to interact in the human brain. This was conceived in the early forties! Modern networks have come a long way since then, with increasingly complex architectures creating large, layered models that are particularly effective at solving human-like tasks, such as processing speech and language, or identifying what is happening in images.

She explains that, although these models are very well utilised, there are still surprisingly many open questions about how they work and when they fail.

"We work on some of these open questions, specifically on how the networks perform when they are presented with novel situations that did not form part of their training environment. We are also studying the reasons behind the decisions the networks make. This is important in order to determine whether the steps these models use to solve tasks are indeed fair and unbiased, and sometimes it can help to uncover new knowledge about the world around us. An example is identifying new ways to diagnose and understand a disease."

The uses of this technology are nearly boundless and will continue to grow, and that is why Professor Davel encourages up-and-coming researchers to consider focusing their expertise in this field.

"By looking inside these tools, we aim to be better users of the tools as well. We typically apply the tools with industry partners, rather than on our own. Speech processing for call centres, traffic prediction, art authentication, space weather prediction, even airfoil design. We have worked in quite diverse fields, but all applications build on the availability of large, complex data sets that we then carefully model. This is a very fast-moving field internationally. There really is a digital revolution that is sweeping across every industry one can think of, and machine learning is a critical part of it. The combination of practical importance and technical challenge makes this an extremely satisfying field to work in."

She confesses that, while some of the ideas of MUST's collaborators may sound far-fetched at first, the team has repeatedly found that if the data is there, it is possible to build a tool to use it.

One can envision a future where human tasks such as speech recognition and interaction have been so well mimicked by these machines, that they are indistinguishable from their human counterparts. The famed science fiction writer Arthur C Clarke once remarked that any sufficiently advanced technology is indistinguishable from magic. At the NWU, MUST is doing their part in bringing this magic to life. DM

Author: Bertie Jacobs


Stable Diffusion Goes Public and the Internet Freaks Out – DevOps.com

Welcome to The Long View, where we peruse the news of the week and strip it to the essentials. Let's work out what really matters.

Unless you've been living under a rock for the past week, you'll have seen something about Stable Diffusion. It's the new open source machine learning model for creating images from text and even other images.

Like DALL-E and Midjourney, you give it a textual prompt and it generates amazing images (or sometimes utter garbage). Unlike those other models, it's open source, so we're already seeing an explosion of innovation.

Mark Hachman calls it The new killer app

Fine-tune your algorithmic art: AI art is fascinating. Enter a prompt, and the algorithm will generate an image to your specifications. Generally, this all takes place on the web, with algorithms like DALL-E. [But] Stability.Ai and its Stable Diffusion model broke that mold, with a model that is publicly available and can run on consumer GPUs. For now, Stability.Ai recommends that you have a GPU with at least 6.9GB of video RAM. Unfortunately, only Nvidia GPUs are currently supported. [But] if you own a powerful PC, you can take all the time you'd like to fine-tune your algorithmic art and come up with something truly impressive.
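
For the curious, here is roughly what running it locally looks like with Hugging Face's diffusers library, one common way to use the released weights. This assumes a CUDA-capable Nvidia GPU and that you have accepted the model's license (the download may also require a Hugging Face auth token); the prompt is my own:

```python
# Minimal local text-to-image sketch using the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # needs an Nvidia GPU with several GB of VRAM

image = pipe("an opera stage lit by a vast circular window").images[0]
image.save("output.png")
```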

From the horse's mouth, it's Emad Mostaque: Stable Diffusion Public Release

Use this in an ethical, moral and legal manner: It is our pleasure to announce the public release of Stable Diffusion. Over the last few weeks we all have been overwhelmed by the response and have been working hard to ensure a safe and ethical release, incorporating data from our beta model tests and community for the developers to act on. As these models were trained on image-text pairs from a broad internet scrape, the model may reproduce some societal biases and produce unsafe content, so open mitigation strategies as well as an open discussion about those biases can bring everyone to this conversation. We hope everyone will use this in an ethical, moral and legal manner and contribute both to the community and discourse around it.

Yeah, right. Have you ever been on the Internet? Kyle Wiggers sounds worried: Deepfakes for all

90% are of women: Stable Diffusion is now in use by art generator services like Artbreeder, Pixelz.ai and more. But the model's unfiltered nature means not all the use has been completely above board. Other AI art-generating systems, like OpenAI's DALL-E 2, have implemented strict filters for pornographic material. Moreover, many don't have the ability to create art of public figures. Women, unfortunately, are most likely by far to be the victims of this. A study carried out in 2019 revealed that, of the 90% to 95% of deepfakes that are non-consensual, about 90% are of women.

Why is it such a big deal? Just ask Simon Willison:

Science fiction is real: Stable Diffusion is a really big deal. If you haven't been paying attention to what's going on, you really should be. It's similar to models like Open AI's DALL-E, but with one crucial difference: they released the whole thing. In just a few days, there has been an explosion of innovation around it. The things people are building are absolutely astonishing. Generating images from text is one thing, but generating images from other images is a whole new ballgame. Imagine having an on-demand concept artist that can generate anything you can imagine, and can iterate with you towards your ideal result. Science fiction is real now. Machine learning generative models are here, and the rate at which they are improving is unreal. It's worth paying real attention to.

How does it compare to DALL-E? Just ask Beyondo:

Personally, stable diffusion is better. OpenAI makes it sound like they created the holy grail of image generation models, but their images don't impress anyone who has used stable diffusion.

@fabianstelzer did a bunch of comparative tests:

These image synths are like instruments: it's amazing we'll get so many of them, each with a unique sound. DALL-E's really great for facial expressions. [Midjourney] wipes the floor with the others when it comes to prompts aiming for textural details. DALL-E's usually my go-to for scenes involving 2 or more clear actors. DALL-E and SD are better at photos: Stable Diffusion can do incredible photos, but you need to be careful to not overload the scene. The moment you put art into a prompt, Midjourney just goes nuts. DALL-E's imperfections look very digital, unlike MJ's. When it comes to copying specific styles, SD is absolutely [but] DALL-E won't let you do a Botticelli painting of Trump.

And what of the training data? Here's Andy Baio:

One of the biggest frustrations of text-to-image generation AI models is that they feel like a black box. We know they were trained on images pulled from the web, but which ones? The team behind Stable Diffusion has been very transparent about how their model is trained. Since it was released publicly last week, Stable Diffusion has exploded in popularity, in large part because of its free and permissive licensing. Simon Willison [and I] grabbed the data for over 12 million images used to train Stable Diffusion. [It] was trained off three massive datasets collected by LAION. All of LAION's image datasets are built off of Common Crawl, [which] scrapes billions of webpages monthly and releases them as massive datasets. Nearly half of the images, about 47%, were sourced from only 100 domains, with the largest number of images coming from Pinterest. WordPress-hosted blogs on wp.com and wordpress.com represented 6.8% of all images. Other photo, art, and blogging sites included Smugmug, Blogspot, Flickr, DeviantArt, Wikimedia, 500px, and Tumblr.

Meanwhile, how does it work? Letitia Parcalabescu explains (it's easy for her to say):

How do Latent Diffusion Models work? If you want answers to these questions, we've got you covered!

You have been reading The Long View by Richi Jennings. You can contact him at @RiCHi or [email protected].

Image: Stable Diffusion, via Andy Baio (Creative ML OpenRAIL-M; leveled and cropped)


Senior Lecturer / Associate Professor in Fairness in Machine Learning and AI Planning job with UNIVERSITY OF MELBOURNE | 307051 – Times Higher…

Location: Parkville
Role type: Full time; Continuing
Faculty: Faculty of Engineering and Information Technology
Department/School: School of Computing and Information Systems
Salary: Level C $135,032 - $155,698 or Level D $162,590 - $179,123 p.a. plus 17% super

The University of Melbourne would like to acknowledge and pay respect to the Traditional Owners of the lands upon which our campuses are situated, the Wurundjeri and Boon Wurrung Peoples, the Yorta Yorta Nation, the Dja Dja Wurrung People. We acknowledge that the land on which we meet and learn was the place of age-old ceremonies, of celebration, initiation and renewal, and that the local Aboriginal Peoples have had and continue to have a unique role in the life of these lands.

About the School of Computing and Information Systems (CIS)

We are international research leaders with a focus on delivering impact and making a real difference in three key areas: data and knowledge, platforms and systems, and people and organisations.

At the School of Computing and Information Systems, you'll find curious people, big problems, and plenty of chances to create a real difference in the world.

To find out more about CIS, visit: http://www.cis.unimelb.edu.au/

About the Role

The Faculty of Engineering and Information Technology (FEIT) is seeking an aspiring academic leader with expertise in algorithms and their fairness in machine learning and/or AI (artificial intelligence) planning, or related fields, for a substantive position within the School of Computing and Information Systems (CIS).

You will join a world-class computer science research group, which has strong links to the Centre for AI & Digital Ethics (CAIDE), and will be expected to collaborate with both, alongside other internationally respected groups across artificial intelligence, human-computer interaction and information systems.

You are highly ambitious and eager to demonstrate world-leading research through publications in key conferences (typified by, but not limited to, FAccT, The Web Conference, KDD, NeurIPS, ICAPS, AAAI, IJCAI, ITCS, EC, CHI, CSCW) and in high-quality journals (typified by, but not limited to, ACM TKDD, AIJ, ACM Transactions on Economics and Computation, Proceedings of the National Academy of Sciences, Big Data and Society, AI and Society, AI and Ethics, TCS). You will make a valuable contribution to the School and broader academic community through mentorship and contributions to teaching into various Masters programs related to algorithms, theory, digital ethics and related areas, and will provide critical leadership in engagement activities, including securing grant funding to support your program of research.

This is an exciting opportunity to further develop your academic and leadership profile and be supported to achieve your goals across all pillars of an academic career.

Responsibilities include:

About You

You are an aspiring leader with the ability to build a highly respected reputation in Machine Learning and/or AI Planning, as demonstrated through a significant track record of publications in high-impact peer-reviewed and refereed venues, and invitations to speak at national and international meetings. You are experienced in mentoring students, colleagues and research teams, and demonstrate great initiative in the establishment and nurturing of research projects. Your highly developed communication and relationship-building skills enable you to engage with a diverse range of people and institutions to develop partnerships that positively contribute to strategic initiatives.

You will also have:

For full details of responsibilities and selection criteria, including criteria for a Level D appointment, please refer to the attached position description.

To ensure the University continues to provide a safe environment for everyone, this position requires the incumbent to hold a current and valid Working with Children Check.

About - The Faculty of Engineering and Information Technology (FEIT)

The Faculty of Engineering and Information Technology (FEIT) has been the leading Australian provider of engineering and IT education and research for over 150 years. We are a multidisciplinary faculty organised into three key areas: Computing and Information Systems (CIS), Chemical and Biomedical Engineering (CBE) and Electrical, Mechanical and Infrastructure Engineering (EMI). FEIT continues to attract top staff and students with a global reputation and has a commitment to knowledge for the betterment of society.

https://eng.unimelb.edu.au/about/join-feit

About the University

The University of Melbourne is consistently ranked amongst the leading universities in the world. We are proud of our people, our commitment to research and teaching excellence, and our global engagement.

Benefits of Working with Us

In addition to having the opportunity to grow and be challenged, and to be part of a vibrant campus life, our people enjoy a range of rewarding benefits:

To find out more, visit https://about.unimelb.edu.au/careers/staff-benefits.

Be Yourself

We value the unique backgrounds, experiences and contributions that each person brings to our community and encourage and celebrate diversity. First Nations people, those identifying as LGBTQIA+, females, people of all ages, with disabilities and culturally and linguistically diverse people are encouraged to apply. Our aim is to create a workforce that reflects the community in which we live.

Join Us!

If you feel this role is right for you, please apply with your CV and cover letter outlining your interest and experience. Please note that you are not required to provide responses against the selection criteria in the Position Description.

We are dedicated to ensuring barrier-free and inclusive practices to recruit the most talented candidates. If you require any reasonable adjustments with the recruitment process, please contact us at hr-talent@unimelb.edu.au.

Position Description: 0054173_PD_C D in Fairness.pdf

Applications close: Monday 26 September 2022, 11:55 PM AUS Eastern Standard Time


Model-Agnostic Interpretation: Beyond SHAP and LIME – Geektime

Since machine learning models are statistical models, they naturally leave themselves open to potential errors. For example, Apple Card's fair lending fiasco brought into question the inherent discrimination in loan approval algorithms, while a project funded by the UK government that used AI to predict gun and knife crime turned out to be wildly inaccurate.

For people to trust machine learning models, we need explanations. It makes sense for a loan to be rejected due to low income, but if a loan gets rejected based on an applicant's zip code, this might indicate there's bias in the model, i.e., it might favour wealthier areas.

When choosing a machine learning algorithm, there's usually a tradeoff between the algorithm's interpretability and its accuracy. Traditional methods like decision trees and linear regression can be directly explained, but their ability to provide accurate predictions is limited. More modern methods such as random forests and neural networks give better predictions but are more difficult to interpret.

In the last few years, we've seen great advances in the interpretation of machine learning models with methods like LIME and SHAP. While these methods require some background, analyzing the underlying data can offer a simple and intuitive interpretation. For this, we first need to understand how humans reason.

Let's think about the common example of the rooster's crow: if you grew up in the countryside, you might know that roosters always crow before the sun rises. Can we infer that the rooster's crow makes the sun rise? It's clear that the answer is no. But why?

Humans have a mental model of reality. We know that if the rooster doesn't crow, the sun rises anyway. This type of reasoning is called counterfactual.

This is the common way in which people make sense of reality. Counterfactual reasoning cannot be scientifically proven. Descartes' demon, or the idea of methodological skepticism, illustrates this: if Event B happens right after Event A, you can never be sure that there isn't some demon that causes B to happen right after A. The scientific field historically refrained from formalizing any discussion of causality, but more recently, efforts have been made to create a scientific language that helps us better understand cause and effect. For additional information, be sure to read The Book of Why by Judea Pearl, a prominent computer science researcher and philosopher.

At my company, we have predictive models aimed at assessing customers' risk when they apply for a loan. The model uses historical data in a tabular format, in which each customer has a list of meaningful features like payment history, income and incorporation date. Using this data, we predict the customer's level of risk and divide it into six risk groups (or buckets). We interpret the model's predictions using both local and global explanations, then use counterfactual analysis to explain our predictions to business stakeholders.

Local explanations aim to explain a single prediction. We replace each feature's value with its median in the representative population and display, as text, the feature that caused the largest change in score. In the following example, the third feature is successful repayments, and its median is 0. We calculate new predictions while replacing the original feature's value with the new value (the median).

Customer_1 had their prediction changed to a reduced risk, and we can devise a short explanation: "A higher number of successful repayments improved the customer's risk level." Or, in its more detailed version: "The customer had 3 successful repayments compared to a median of 0 in the population. This caused the risk level to improve from level D to E."
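A minimal sketch of this median-substitution logic, assuming a scikit-learn-style classifier exposing predict_proba and a pandas Series of population medians (all names are hypothetical, not Bluevine's actual code):

```python
import pandas as pd

def local_explanation(model, customer: pd.Series, medians: pd.Series) -> str:
    """Swap each feature for its population median and report the feature
    whose substitution moves the model's risk score the most."""
    base = model.predict_proba(customer.to_frame().T)[0, 1]
    deltas = {}
    for feature in customer.index:
        counterfactual = customer.copy()
        counterfactual[feature] = medians[feature]  # median substitution
        new = model.predict_proba(counterfactual.to_frame().T)[0, 1]
        deltas[feature] = new - base
    top = max(deltas, key=lambda f: abs(deltas[f]))
    return (f"{top}: customer value {customer[top]} vs. population median "
            f"{medians[top]} (risk score change {deltas[top]:+.3f})")
```

The returned string can then be mapped to bucket labels (D, E, and so on) for the short human-readable explanation shown above.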

Global explanations aim to explain a feature's direction of effect in the model as a whole. An individual feature's value is replaced with one extreme value; for example, the 95th percentile, i.e., almost the largest value in the sample (95% of the values are smaller than it).

The changes in the score's distribution are calculated and visualized in the chart below, which shows the change in each customer's risk level when the value is increased to the 95th percentile.

When increasing the first listed feature (length of delay in payments) to the 95th percentile, a large portion of the customers have their risk level deteriorate one or more levels. A person who reviews this behaviour can easily accept that a delay in payments is expected to cause a worse risk level.

The second feature, monthly balance increase, has a combined effect: a small percentage of customers have their risk level deteriorate, while a larger percentage have their risk level improve. This combined effect might indicate there's some interaction between features, although that is not something that can be directly explained through this method.

The third feature, years since incorporation, has a positive effect on customers' risk levels when increased to the 95th percentile. Here too, it is easy to accept that businesses that have been around longer are likely to be more stable and therefore present less risk.
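A sketch of how such a global, extreme-value analysis could be computed, again under the assumption of a predict_proba-style model; the to_bucket function, which maps a score to an integer risk level, is hypothetical:

```python
import numpy as np
import pandas as pd

def global_effect(model, X: pd.DataFrame, feature: str, to_bucket) -> dict:
    """Push one feature to its 95th percentile for every customer and count
    how many risk buckets deteriorate, stay unchanged, or improve."""
    X_cf = X.copy()
    X_cf[feature] = X[feature].quantile(0.95)  # extreme-value substitution
    before = np.array([to_bucket(s) for s in model.predict_proba(X)[:, 1]])
    after = np.array([to_bucket(s) for s in model.predict_proba(X_cf)[:, 1]])
    shift = after - before  # positive shift = risk level deteriorates
    return {
        "deteriorated": float((shift > 0).mean()),
        "unchanged": float((shift == 0).mean()),
        "improved": float((shift < 0).mean()),
    }
```

Running this per feature and plotting the three shares side by side yields a chart like the one described above: length of delay in payments mostly deteriorating risk, monthly balance increase showing a mixed effect, and years since incorporation mostly improving it.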

Unlike many other reasoning methods, the counterfactual approach allows for simple and intuitive data explanations that anyone can understand, which can increase the trust we have in machine learning models.

Written by Nathalie Hauser, Manager, Data Science at Bluevine

The rest is here:
Model-Agnostic Interpretation: Beyond SHAP and LIME - Geektime

The U.S., China, and Europe are ramping up a quantum computing arms race. Here's what they'll need to do to win – Fortune

Every country is vying to get a head start in the race to the world's quantum future. A year ago, the United States, the United Kingdom, and Australia teamed up to develop military applications of digital technologies, especially quantum computing technologies. That followed the passage in 2019 of the National Quantum Initiative Act by the U.S. Congress, which laid out the country's plans to rapidly create quantum computing capabilities.

Earlier, Europe launched a $1 billion quantum computing research project, Quantum Flagship, in 2016, and its member states have started building a quantum communications infrastructure that will be operational by 2027. In like vein, China's 14th Five Year Plan (2021-2025) prioritizes the development of quantum computing and communications by 2030. In all, between 2019 and 2021, China invested as much as $11 billion, Europe spent $5 billion, the U.S. $3 billion, and the U.K. around $1.8 billion to become tomorrow's quantum superpowers.

As the scientific development of quantum technologies gathers momentum, creating quantum computers has turned into a priority for nations that wish to gain the next competitive advantage in the Digital Age. They're seeking this edge for two very different reasons. On the one hand, quantum technologies will likely transform almost every industry, from automotive and aerospace to finance and pharmaceuticals. These systems could create fresh value of between $450 billion and $850 billion over the next 15 to 30 years, according to recent BCG estimates.

On the other hand, quantum computing systems will pose a significant threat to cybersecurity the world over, as we argued in an earlier column. Hackers will be able to use them to decipher the public keys generated by the RSA cryptosystem, and to break through the security of any conventionally encrypted device, system, or network. This will pose a potent cyber-threat, popularly called Y2Q (Years to Quantum), to individuals and institutions as well as corporations and national governments. The latter have no choice but to tackle the unprecedented challenge by developing countermeasures such as post-quantum cryptography, which will itself require the use of quantum systems.

Countries have learned the hard way since the Industrial Revolution that general-purpose technologies, such as quantum computing, are critical for competitiveness. Consider, for instance, semiconductor manufacturing, which the U.S., China, South Korea, and Taiwan have dominated in recent times. When the COVID-19 pandemic and other factors led to a sudden fall in production over the last two years, the result was production stoppages and price increases in over 150 industries, including automobiles, computers, and telecommunications hardware. Many countries, among them the members of the European Union, Brazil, India, Turkey, and even the U.S., were hit hard and are now trying to rebuild their semiconductor supply chains. Similarly, China manufactures most of the world's electric batteries, with the U.S. contributing only about 7% of global output. That's why the U.S. has recently announced financial incentives to induce businesses to create more electric-battery manufacturing capacity at home.

Much worse could be in store if countries and companies don't focus on increasing their quantum sovereignty right away. Because the development and deployment of such systems requires the efforts of both the public and private sectors, it's important for governments to compare their efforts on both fronts with those of other countries.

The U.S. is expected to be the global frontrunner in quantum computing, relying on its tech giants, such as IBM and Google, to invent quantum systems, as well as numerous start-ups that are developing software applications. The latter attract almost 50% of the investments in quantum computing by venture capital and private equity funds, according to BCG estimates. Although the U.S. government has allocated only $1.1 billion, it has created mechanisms that effectively coordinate the efforts of all its agencies, such as NIST, DARPA, NASA, and the NQI.

Breathing down the U.S.'s neck is China, whose government has spent more on developing quantum systems than any other. Those investments have boosted academic research, with China producing over 10% of the world's research in 2021, according to our estimates, second only to the U.S. The spillover effects are evident: less than a year after Google's quantum machine had solved in minutes a calculation that would have taken supercomputers thousands of years to unravel, the University of Science and Technology of China (USTC) had cracked a problem three times tougher. As of September 2021, China hadn't spawned as many startups as the U.S., but it was relying on its digital giants, such as Alibaba, Baidu, and Tencent, to develop quantum applications.

Trailing only the U.S. and China, the European Union's quantum computing efforts are driven by its member states as well as the union. The EU's Quantum Flagship program coordinates research projects across the continent, but those efforts aren't entirely aligned yet. Several important efforts, such as those of France and Germany, run the risk of duplication or don't exploit synergies adequately. While the EU has spawned several startups that are working on different levels of the technology stack, such as Finland's IQM and France's Pasqal, many seem unlikely to scale because of the shortage of late-stage funding. In fact, the EU's startups have attracted only about one-seventh as much funding as their American peers, according to BCG estimates.

Finally, the U.K. was one of the first countries in the world to launch a government-funded quantum computing program. It's counting on its educational policies and universities, scholarships for postgraduate degrees, and centers for doctoral training to get ahead. Like the EU, the U.K. has also spawned promising start-ups such as Orca, which announced the world's smallest quantum computer last year. However, British start-ups may not be able to find sufficient capital to scale, and many are likely to be acquired by the U.S.'s digital giants.

Other countries, such as Australia, Canada, Israel, Japan, and Russia, are also in the quantum computing race and could carve out roles for themselves. For instance, Canada is home to several promising startups, such as D-Wave, a leader in annealing computers, while Japan is using public funds to develop a homegrown quantum computer by March 2023. (For an analysis of the comparative standings and challenges that countries face in quantum computing, please see the recent BCG report.)

Meanwhile, the locus of the quantum computing industry is shifting to the challenges of developing applications and adopting the technology. This shift offers countries, especially the follower nations, an opportunity to catch up with the leaders before it's too late. Governments must use four levers in concert to accelerate their quantum sovereignty:

* Lay the foundations. Governments have to invest more than they currently do if they wish to develop quantum systems over time, even as they strike partnerships to bring home the technology in the short run. Once they have secured the hardware, states must create shared infrastructure to scale the industry. The Netherlands, for instance, has set up Quantum Inspire, a platform that provides users with the hardware to perform quantum computations.

* Coordinate the stakeholders. Governments should use funding and influence to coordinate the work of public and private players, as the U.S. Quantum Coordination Office, for instance, does. In addition, policymakers must connect stakeholders to support the technology's development. That's how the U.S. Department of Energy, for instance, came to partner with the University of Chicago; together, they've set up an accelerator to connect startups with investors and scientific experts.

* Facilitate the transition. Governments must support businesses' transition to the quantum economy. They should offer monetary incentives, such as tax credits, infrastructure assistance, no- or low-interest financing, and free land, so incumbents will shift to quantum technologies quickly. The U.K., for instance, has recently expanded its R&D tax relief scheme to cover investments in quantum technologies.

* Develop the business talent. Instead of developing only academics and scientists, government policies will have to catalyze the creation of a new breed of entrepreneurial and executive talent that can fill key roles in quantum businesses. To speed up the process, Switzerland, for instance, has helped create a master's program rather than offering only doctoral programs on the subject.

Not all general-purpose technologies affect a country's security and sovereignty as quantum computing does, but they're all critical for competitiveness. While many countries talk about developing quantum capabilities, their efforts haven't translated into major advances as they have in the U.S. and China. It's time every government remembered that if it loses the quantum computing race, its technological independence will erode, and, unlike with Schrödinger's cat, there's no doubt that its global competitiveness will atrophy.

Read other Fortune columns by François Candelon.

François Candelon is a managing director and senior partner at BCG and global director of the BCG Henderson Institute.

Maxime Courtaux is a project leader at BCG and an ambassador at the BCG Henderson Institute.

Gabriel Nahas is a senior data scientist at BCG Gamma and an ambassador at the BCG Henderson Institute.

Jean-François Bobier is a partner and director at BCG.

Some companies featured in this column are past or current clients of BCG.

Read the original post:
The U.S., China, and Europe are ramping up a quantum computing arms race. Here's what they'll need to do to win - Fortune