Man wins competition with AI-generated artwork and some people aren’t happy – The Register

In brief A man won an art competition with an AI-generated image, and some people aren't best pleased about it.

The image, titled Théâtre D'opéra Spatial, looks like an impressive painting of an opera scene with performers on stage, and an abstract audience in the background beneath a huge moon-like window of some sort. It was created by Jason Allen, who went through hundreds of iterations of written descriptions fed into the text-to-image generator Midjourney before the software emitted the picture he wanted.

He won first prize, and $300, after he submitted a printed version of the image to the Colorado State Fair's fine art competition. His achievement, however, has raised eyebrows and divided opinions.

"I knew this would be controversial," Allen said in the Midjourney Discord server on Tuesday, according to Vice. "How interesting is it to see how all these people on Twitter who are against AI generated art are the first ones to throw the human under the bus by discrediting the human element! Does this seem hypocritical to you guys?"

Washington Post tech reporter Drew Harwell, who covered the brouhaha here, raised an interesting point: "People once saw photography as cheating, too (just pushing a button), and now we realize the best creations rely on skilled composition, judgment, and tone," he tweeted.

"Will we one day regard AI art in the same way?"

DeepMind has trained virtual agents to play football (the soccer kind) using reinforcement learning to control their motor and teamwork skills.

Football is a fine game to test software's planning skills in a physical domain as it requires bots to learn how to move and coordinate their computer body parts alongside others to achieve a goal. These capabilities will prove useful in the future for real robots and will be a necessary part of artificial general intelligence.

"Football is a great domain to explore this very general problem," DeepMind researchers and co-authors of a paper published in Science Robotics this week told The Register. "It requires planning at the level of skills such as tackling, dribbling or passing, but also longer-term concerns such as clearing the ball or positioning.

"Humans can do this without actively thinking at the level of high frequency motor control or individual muscle movements. We don't know how planning is best organized at such different scales, and achieving this with AI is an active open problem for research."

At first, the humanoids move their limbs randomly in a virtual environment, and over time they gradually learn to run, tackle, and score using imitation and reinforcement learning.

They were pitted against each other in teams of two. You can see a demonstration in the video below.

[YouTube video]

It was only a matter of time before someone went and built a viral text-to-image tool to generate pornographic images.

Stable Diffusion is taking the AI world by storm. The software, including the source code, model, and its weights, has been released publicly, allowing anyone with some level of coding skill to tailor their own system to a specific use case. One developer has built and released Porn Pen to the world, with which users can choose a series of tags, like "babe" or "chubby," to generate an NSFW image.

"I think it's somewhat inevitable that this would come to exist when [OpenAI's] DALL-E did," Os Keyes, a PhD candidate at Seattle University, told TechCrunch. "But it's still depressing how both the options and defaults replicate a very heteronormative and male gaze."

It's unclear how this will affect the sex industry, and many are concerned text-to-image tools could be driven to create deepfakes of someone or pushed to produce illegal content. These systems have sometimes struggled to visualize human anatomy correctly.

People have noticed these ML models adding nipples to random parts of the body, or an extra arm or limb poking out somewhere. All of this is rather creepy.

There's a mobile app that claims it can translate the meaning of a cat's meows into plain English using machine-learning algorithms.

Aptly named MeowTalk, the app analyses recordings of cat noises to predict their mood and interprets what they might be trying to say. It tells owners if their pet felines are happy, resting, or hunting, and may translate this into phrases such as "let me rest" or "hey, I'm so happy to see you," for example.

"We're trying to understand what cats are saying and give them a voice" Javier Sanchez, a founder of MeowTalk, told the New York Times. "We want to use this to help people build better and stronger relationships with their cats," he added. Code using machine learning algorithms to decode and study animal communication, however, isn't always reliable.

MeowTalk doesn't interpret the intent of purring very well, and sometimes the text translations of cat noises are very odd. When a reporter picked up her cat and it meowed, the app apparently translated the noise as: "Hey baby, let's go somewhere private!"

Stavros Ntalampiras, a computer scientist at the University of Milan who was called in to help the MeowTalk founders, admitted that "a lot of translations are kind of creatively presented to the user," and said "it's not pure science at this stage."


All You Need to Know About Support Vector Machines – Spiceworks News and Insights

A support vector machine (SVM) is defined as a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. This article explains the fundamentals of SVMs, their working, types, and a few real-world examples.

A support vector machine (SVM) is a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. SVMs are widely adopted across disciplines such as healthcare, natural language processing, signal processing applications, and speech & image recognition fields.

Technically, the primary objective of the SVM algorithm is to identify a hyperplane that cleanly separates the data points of different classes. The hyperplane is positioned so that the largest possible margin separates the classes under consideration.

The support vector representation is shown in the figure below:

As seen in the above figure, the margin refers to the maximum width of the slab that runs parallel to the hyperplane without containing any support vectors in its interior. Such hyperplanes are easier to define for linearly separable problems; for real-life data, however, the SVM algorithm maximizes a soft margin between the support vectors, accepting incorrect classifications for a small fraction of data points.
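
To make the margin concrete, here is a minimal sketch, assuming scikit-learn (a library the article does not name): it fits a linear SVM on two toy clusters and reads off the support vectors and the margin width 2/||w||.

```python
# A minimal sketch (assuming scikit-learn) of inspecting the maximum-margin
# hyperplane after fitting a linear SVM on toy, linearly separable data.
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable clusters.
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # class 0
              [5.0, 5.0], [5.5, 6.0], [6.0, 5.5]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6)  # a large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]                         # normal vector of the hyperplane
b = clf.intercept_[0]
margin_width = 2.0 / np.linalg.norm(w)   # distance between the two margin boundaries

print("support vectors:\n", clf.support_vectors_)
print("hyperplane: %.2f*x + %.2f*y + %.2f = 0" % (w[0], w[1], b))
print("margin width: %.2f" % margin_width)
```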

SVMs are fundamentally designed for binary classification problems. However, with the rise in computationally intensive multiclass problems, several binary classifiers can be constructed and combined so that SVMs implement such multiclass classifications through binary means.

In the mathematical context, an SVM refers to a set of ML algorithms that use kernel methods to transform data features by employing kernel functions. Kernel functions rely on the process of mapping complex datasets to higher dimensions in a manner that makes data point separation easier. The function simplifies the data boundaries for non-linear problems by adding higher dimensions to map complex data points.

Introducing additional dimensions explicitly would be computationally taxing, so the data is never fully transformed. Instead, the so-called kernel trick computes the effect of the higher-dimensional mapping efficiently and inexpensively, without materializing the new coordinates.
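
As a small numeric illustration of the kernel trick (an illustration of ours, not taken from the article), the sketch below shows that a degree-2 polynomial kernel evaluated in the original 2-D space matches an ordinary dot product taken after an explicit mapping into a 3-D feature space.

```python
# Kernel trick demo: K(a, b) = (a . b)^2 equals the dot product of the
# explicitly mapped points phi(a) and phi(b), without ever computing phi.
import numpy as np

def phi(v):
    """Explicit degree-2 feature map for a 2-D point (x, y): (x^2, y^2, sqrt(2)*x*y)."""
    x, y = v
    return np.array([x * x, y * y, np.sqrt(2) * x * y])

a = np.array([1.0, 2.0])
b = np.array([3.0, 0.5])

kernel_value = np.dot(a, b) ** 2          # kernel trick: stays in 2 dimensions
explicit_value = np.dot(phi(a), phi(b))   # explicit mapping into 3 dimensions

print(kernel_value, explicit_value)       # both print 16.0
```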

The idea behind the SVM algorithm was first captured in 1963 by Vladimir N. Vapnik and Alexey Ya. Chervonenkis. Since then, SVMs have gained enough popularity as they have continued to have wide-scale implications across several areas, including the protein sorting process, text categorization, facial recognition, autonomous cars, robotic systems, and so on.

See More: What Is a Neural Network? Definition, Working, Types, and Applications in 2022

The working of a support vector machine can be better understood through an example. Let's assume we have red and black labels with the features denoted by x and y. We intend to have a classifier for these tags that classifies data into either the red or the black category.

Let's plot the labeled data on an x-y plane, as below:

A typical SVM separates these data points into red and black tags using the hyperplane, which is a two-dimensional line in this case. The hyperplane denotes the decision boundary line, wherein data points fall under the red or black category.

The hyperplane is chosen as the line that maximizes the margin between the two closest tags or labels (red and black). Its distance to the most immediate data points of each class is the largest possible, making the classification easier.

The above scenario is applicable for linearly separable data. However, for non-linear data, a simple straight line cannot separate the distinct data points.

Here's an example of a complex, non-linear dataset:

The above dataset reveals that a single hyperplane is not sufficient to separate the involved labels or tags. However, here, the vectors are visibly distinct, making segregating them easier.

For data classification, you need to add another dimension to the feature space. For the linear data discussed up to this point, the two dimensions x and y were sufficient. In this case, we add a z-dimension to better classify the data points. For convenience, let's use the equation of a circle, z = x² + y².

With the third dimension, the slice of feature space along the z-direction looks like this:

Now, with three dimensions, the hyperplane runs parallel to the x-y plane at a particular value of z; let's take z = 1.

The remaining data points are further mapped back to two dimensions.

The above figure reveals the boundary for data points along features x, y, and z: a circle of radius 1 (where x² + y² = 1) that segregates the two labels via the SVM.
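
A minimal sketch of this circle example, assuming scikit-learn (the article names no library): concentric-circle data that no straight line can separate becomes linearly separable once the z = x² + y² feature is added.

```python
# Sketch: add the z = x^2 + y^2 feature and fit a plain linear SVM in 3-D.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

# The extra dimension described above: z = x^2 + y^2.
z = (X[:, 0] ** 2 + X[:, 1] ** 2).reshape(-1, 1)
X3 = np.hstack([X, z])

X_train, X_test, y_train, y_test = train_test_split(X3, y, random_state=0)

clf = LinearSVC(C=1.0, max_iter=10_000)
clf.fit(X_train, y_train)
print("accuracy with the extra z feature:", clf.score(X_test, y_test))
# The separating hyperplane is (approximately) a constant-z slice, which maps
# back to a circle in the original (x, y) plane.
```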

Let's consider another method of visualizing data points in three dimensions for separating two tags (two different colored tennis balls in this case). Consider the balls lying on a 2D plane surface. Now, if we lift the surface upward, all the tennis balls are distributed in the air. The two differently colored balls may separate in the air at one point in this process. While this occurs, you can use or place the surface between two segregated sets of balls.

In this entire process, the act of lifting the 2D surface refers to the event of mapping data into higher dimensions, which is technically referred to as kernelling, as mentioned earlier. In this way, complex data points can be separated with the help of more dimensions. The concept highlighted here is that the data points continue to get mapped into higher dimensions until a hyperplane is identified that shows a clear separation between the data points.

The figure below gives the 3D visualization of the above use case:

See More: Narrow AI vs. General AI vs. Super AI: Key Comparisons

Support vector machines are broadly classified into two types: simple or linear SVM and kernel or non-linear SVM.

A linear SVM refers to the SVM type used for classifying linearly separable data. This implies that when a dataset can be segregated into categories or classes with the help of a single straight line, it is termed a linear SVM, and the data is referred to as linearly distinct or separable. Moreover, the classifier that classifies such data is termed a linear SVM classifier.

A simple SVM is typically used to address classification and regression analysis problems.

Non-linear data that cannot be segregated into distinct categories with the help of a straight line is classified using a kernel or non-linear SVM. Here, the classifier is referred to as a non-linear classifier. The classification can be performed with a non-linear data type by adding features into higher dimensions rather than relying on 2D space. Here, the newly added features fit a hyperplane that helps easily separate classes or categories.

Kernel SVMs are typically used to handle optimization problems that have multiple variables.

See More: What is Sentiment Analysis? Definition, Tools, and Applications

SVMs rely on supervised learning methods to classify unknown data into known categories. These find applications in diverse fields.

Here, we'll look at some of the top real-world examples of SVMs:

The geo-sounding problem is one of the widespread use cases for SVMs, wherein the process is employed to track the planet's layered structure. This entails solving inversion problems, where the observations or results are used to infer the variables or parameters that produced them.

In the process, linear functions and support vector algorithmic models separate the electromagnetic data. Moreover, linear programming practices are employed while developing the supervised models in this case. As the problem size is considerably small, the dimension size is inevitably tiny, which makes mapping the planet's structure tractable.

Soil liquefaction is a significant concern when events such as earthquakes occur. Assessing its potential is crucial while designing any civil infrastructure. SVMs play a key role in determining the occurrence and non-occurrence of such liquefaction aspects. Technically, SVMs handle two tests: SPT (Standard Penetration Test) and CPT (Cone Penetration Test), which use field data to adjudicate the seismic status.

Moreover, SVMs are used to develop models that involve multiple variables, such as soil factors and liquefaction parameters, to determine the ground surface strength. It is believed that SVMs achieve an accuracy of close to 96-97% for such applications.

Protein remote homology is a field of computational biology where proteins are categorized into structural and functional parameters depending on the sequence of amino acids when sequence identification is seemingly difficult. SVMs play a key role in remote homology, with kernel functions determining the commonalities between protein sequences.

Thus, SVMs play a defining role in computational biology.

SVMs are known to solve complex mathematical problems. However, smooth SVMs are preferred for data classification purposes, wherein smoothing techniques that reduce the data outliers and make the pattern identifiable are used.

Thus, for optimization problems, smooth SVMs use algorithms such as the Newton-Armijo algorithm to handle larger datasets that conventional SVMs cannot. Smooth SVM types typically explore math properties such as strong convexity for more straightforward data classification, even with non-linear data.

SVMs classify facial structures vs. non-facial ones. The training data uses two classes of face entity (denoted by +1) and non-face entity (denoted as -1) and n*n pixels to distinguish between face and non-face structures. Further, each pixel is analyzed, and the features from each one are extracted that denote face and non-face characters. Finally, the process creates a square decision boundary around facial structures based on pixel intensity and classifies the resultant images.

Moreover, SVMs are also used for facial expression classification, which includes expressions denoted as happy, sad, angry, surprised, and so on.

In the current scenario, SVMs are used for the classification of images of surfaces: images of surfaces can be fed into an SVM to determine their texture and classify them as smooth or gritty.

Text categorization refers to classifying data into predefined categories. For example, news articles can be sorted into politics, business, the stock market, or sports. Similarly, one can segregate emails into spam, non-spam, junk, and other categories.

Technically, each article or document is assigned a score, which is then compared to a predefined threshold value. The article is classified into its respective category depending on the evaluated score.
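
A hedged sketch of that scoring idea, with the library and dataset chosen by us rather than the article: documents are converted to TF-IDF vectors, and a linear SVM's per-class decision scores determine the category.

```python
# Text categorization with an SVM (assumed toolchain: scikit-learn + 20 Newsgroups).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

categories = ["talk.politics.misc", "rec.sport.hockey", "sci.med"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

model = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
model.fit(train.data, train.target)

print("test accuracy:", model.score(test.data, test.target))
# decision_function exposes the per-class scores that are compared against each
# other (conceptually, the "score vs. threshold" step the article mentions).
print(model.decision_function(["The goalie made an incredible save in overtime."]))
```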

For handwriting recognition examples, the dataset containing passages that different individuals write is supplied to SVMs. Typically, SVM classifiers are trained with sample data initially and are later used to classify handwriting based on score values. Subsequently, SVMs are also used to segregate writings by humans and computers.

In speech recognition examples, words from speeches are individually picked and separated. Further, for each word, certain features and characteristics are extracted. Feature extraction techniques include Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), and others.

These methods collect audio data, feed it to SVMs and then train the models for speech recognition.

With SVMs, you can determine whether any digital image is tampered with, contaminated, or pure. Such examples are helpful when handling security-related matters for organizations or government agencies, as it is easier to encrypt and embed data as a watermark in high-resolution images.

Such images contain more pixels; hence, it can be challenging to spot hidden or watermarked messages. However, one solution is to separate each pixel and store data in different datasets that SVMs can later analyze.

Medical professionals, researchers, and scientists worldwide have been toiling hard to find a solution that can effectively detect cancer in its early stages. Today, several AI and ML tools are being deployed for the same. For example, in January 2020, Google developed an AI tool that helps in early breast cancer detection and reduces false positives and negatives.

In such examples, SVMs can be employed: images can be supplied as input, and SVM algorithms can analyze them, train the models, and eventually categorize the images as revealing malignant or benign features.

See More: What Is a Decision Tree? Algorithms, Template, Examples, and Best Practices

SVMs are crucial while developing applications that involve the implementation of predictive models. SVMs are easy to comprehend and deploy. They offer a sophisticated machine learning algorithm to process linear and non-linear data through kernels.

SVMs find applications in many domains and real-life scenarios where data can be handled by mapping it into higher-dimensional spaces. Putting them to work entails tuning hyperparameters, selecting the kernel, and investing time and resources in the training phase to develop the supervised learning models.



Machine and deep learning are a MUST at the North-West… – Daily Maverick

The last century alone has seen a meteoric increase in the accumulation of data and we are able to store unfathomable quantities of information to help us solve problems known and unknown. At some point the ability to optimally utilise these vast amounts of data will be beyond our reach, but not beyond that of the tools we have made. At the North-West University (NWU), Professor Marelie Davel, director of the research group MUST Deep Learning, and her team are ensuring that our ever-growing data repositories will continue to benefit society.

The team's focus on machine learning, and specifically deep learning, is creating magic to the untrained eye. Here is why.

"Machine learning is a catch-all term for systems that learn in an automated way from their environment. These systems are not programmed with the steps to solve a specific task, but they are programmed to know how to learn from data. In the process, the system uncovers the underlying patterns in the data and comes up with its own steps to solve the specific task," explains Professor Davel.

According to her, machine learning is becoming increasingly important as more and more practical tasks are being solved by machine learning systems: from weather prediction to drug discovery to self-driving cars. Behind the scenes we see that many of the institutions we interact with, like banks, supermarket chains and hospitals, all nowadays incorporate machine learning in aspects of their business. Machine learning makes everyday tools, from internet searches to every smartphone photo we take, work better.

The NWU and MUST go a step beyond this by doing research on deep learning. This is a field of machine learning that was originally inspired by the idea of artificial neural networks, which were simple models of how neurons were thought to interact in the human brain. This was conceived in the early forties! Modern networks have come a long way since then, with increasingly complex architectures creating large, layered models that are particularly effective at solving human-like tasks, such as processing speech and language, or identifying what is happening in images.

She explains that, although these models are very well utilised, there are still surprisingly many open questions about how they work and when they fail.

We work on some of these open questions, specifically on how the networks perform when they are presented with novel situations that did not form part of their training environment. We are also studying the reasons behind the decisions the networks make. This is important in order to determine whether the steps these models use to solve tasks are indeed fair and unbiased, and sometimes it can help to uncover new knowledge about the world around us. An example is identifying new ways to diagnose and understand a disease.

The uses of this technology are nearly boundless and will continue to grow, and that is why Professor Davel encourages up-and-coming researchers to consider focusing their expertise in this field.

By looking inside these tools, we aim to be better users of the tools as well. We typically apply the tools with industry partners, rather than on our own. Speech processing for call centres, traffic prediction, art authentication, space weather prediction, even airfoil design. We have worked in quite diverse fields, but all applications build on the availability of large, complex data sets that we then carefully model. This is a very fast-moving field internationally. There really is a digital revolution that is sweeping across every industry one can think of, and machine learning is a critical part of it. The combination of practical importance and technical challenge makes this an extremely satisfying field to work in.

She confesses that, while some of the ideas of MUST's collaborators may sound far-fetched at first, the team has repeatedly found that if the data is there, it is possible to build a tool to use it.

One can envision a future where human tasks such as speech recognition and interaction have been so well mimicked by these machines, that they are indistinguishable from their human counterparts. The famed science fiction writer Arthur C Clarke once remarked that any sufficiently advanced technology is indistinguishable from magic. At the NWU, MUST is doing their part in bringing this magic to life. DM

Author: Bertie Jacobs


Stable Diffusion Goes Public and the Internet Freaks Out – DevOps.com

Welcome to The Long View, where we peruse the news of the week and strip it to the essentials. Let's work out what really matters.

Unless you've been living under a rock for the past week, you'll have seen something about Stable Diffusion. It's the new open source machine learning model for creating images from text and even other images.

Like DALL-E and Midjourney, you give it a textual prompt and it generates amazing images (or sometimes utter garbage). Unlike those other models, it's open source, so we're already seeing an explosion of innovation.

Mark Hachman calls it The new killer app

Fine-tune your algorithmic art: AI art is fascinating. Enter a prompt, and the algorithm will generate an image to your specifications. Generally, this all takes place on the Web, with algorithms like DALL-E. [But] Stability.Ai and its Stable Diffusion model broke that mold, with a model that is publicly available and can run on consumer GPUs. For now, Stability.Ai recommends that you have a GPU with at least 6.9GB of video RAM. Unfortunately, only Nvidia GPUs are currently supported. [But] if you own a powerful PC, you can take all the time you'd like to fine-tune your algorithmic art and come up with something truly impressive.
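
For readers who do own such a PC, here is a minimal sketch of one common way to run the released weights locally, using Hugging Face's diffusers library (a toolchain choice of ours, not something the column prescribes). It assumes an Nvidia GPU with enough VRAM and an accepted model licence on the Hugging Face Hub.

```python
# Minimal local Stable Diffusion sketch (assumed toolchain: diffusers + PyTorch + CUDA).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # model id at the time of the public release
    torch_dtype=torch.float16,         # half precision keeps VRAM usage down
)
pipe = pipe.to("cuda")

prompt = "a theatrical opera scene, performers on stage, huge moon-like window, oil painting"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("opera.png")
```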

From the horse's mouth, it's Emad Mostaque: Stable Diffusion Public Release

Use this in an ethical, moral and legal manner: It is our pleasure to announce the public release of Stable Diffusion. Over the last few weeks we all have been overwhelmed by the response and have been working hard to ensure a safe and ethical release, incorporating data from our beta model tests and community for the developers to act on. As these models were trained on image-text pairs from a broad internet scrape, the model may reproduce some societal biases and produce unsafe content, so open mitigation strategies as well as an open discussion about those biases can bring everyone to this conversation. We hope everyone will use this in an ethical, moral and legal manner and contribute both to the community and discourse around it.

Yeah, right. Have you ever been on the Internet? Kyle Wiggers sounds worried: Deepfakes for all

90% are of women: Stable Diffusion is now in use by art generator services like Artbreeder, Pixelz.ai and more. But the model's unfiltered nature means not all the use has been completely above board. Other AI art-generating systems, like OpenAI's DALL-E 2, have implemented strict filters for pornographic material. Moreover, many don't have the ability to create art of public figures. Women, unfortunately, are most likely by far to be the victims of this. A study carried out in 2019 revealed that, of the 90% to 95% of deepfakes that are non-consensual, about 90% are of women.

Why is it such a big deal? Just ask Simon Willison:

Science fiction is real: Stable Diffusion is a really big deal. If you haven't been paying attention to what's going on, you really should be. It's similar to models like OpenAI's DALL-E, but with one crucial difference: they released the whole thing. In just a few days, there has been an explosion of innovation around it. The things people are building are absolutely astonishing. Generating images from text is one thing, but generating images from other images is a whole new ballgame. Imagine having an on-demand concept artist that can generate anything you can imagine, and can iterate with you towards your ideal result. Science fiction is real now. Machine learning generative models are here, and the rate with which they are improving is unreal. It's worth paying real attention to.

How does it compare to DALL-E? Just ask Beyondo:

Personally, stable diffusion is better. OpenAI makes it sound like they created the holy grail of image generation models, but their images don't impress anyone who has used stable diffusion.

@fabianstelzer did a bunch of comparative tests:

These image synths are like instruments; it's amazing we'll get so many of them, each with a unique sound. DALL-E's really great for facial expressions. [Midjourney] wipes the floor with the others when it comes to prompts aiming for textural details. DALL-E's usually my go-to for scenes involving 2 or more clear actors. DALL-E and SD being better at photos: Stable Diffusion can do incredible photos, but you need to be careful to not overload the scene. The moment you put art into a prompt, Midjourney just goes nuts. DALL-E's imperfections look very digital, unlike MJ's. When it comes to copying specific styles, SD is absolutely [but] DALL-E won't let you do a Botticelli painting of Trump.

And what of the training data? Heres Andy Baio:

One of the biggest frustrations of text-to-image generation AI models is that they feel like a black box. We know they were trained on images pulled from the web, but which ones? The team behind Stable Diffusion have been very transparent about how their model is trained. Since it was released publicly last week, Stable Diffusion has exploded in popularity, in large part because of its free and permissive licensing. Simon Willison [and I] grabbed the data for over 12 million images used to train Stable Diffusion. [It] was trained off three massive datasets collected by LAION. All of LAION's image datasets are built off of Common Crawl, [which] scrapes billions of webpages monthly and releases them as massive datasets. Nearly half of the images, about 47%, were sourced from only 100 domains, with the largest number of images coming from Pinterest. WordPress-hosted blogs on wp.com and wordpress.com represented 6.8% of all images. Other photo, art, and blogging sites included Smugmug, Blogspot, Flickr, DeviantArt, Wikimedia, 500px, and Tumblr.

Meanwhile, how does it work? Letitia Parcalabescu (easy for her to say) explains:

How do Latent Diffusion Models work? If you want answers to these questions, we've got you covered!

You have been reading The Long View by Richi Jennings. You can contact him at @RiCHi or [emailprotected].

Image: Stable Diffusion, via Andy Baio (Creative ML OpenRAIL-M; leveled and cropped)


Senior Lecturer / Associate Professor in Fairness in Machine Learning and AI Planning job with UNIVERSITY OF MELBOURNE | 307051 – Times Higher…

Location: Parkville
Role type: Full time; Continuing
Faculty: Faculty of Engineering and Information Technology
Department/School: School of Computing and Information Systems
Salary: Level C $135,032 - $155,698 or Level D $162,590 - $179,123 p.a. plus 17% super

The University of Melbourne would like to acknowledge and pay respect to the Traditional Owners of the lands upon which our campuses are situated, the Wurundjeri and Boon Wurrung Peoples, the Yorta Yorta Nation, the Dja Dja Wurrung People. We acknowledge that the land on which we meet and learn was the place of age-old ceremonies, of celebration, initiation and renewal, and that the local Aboriginal Peoples have had and continue to have a unique role in the life of these lands.

About the School of Computing and Information Systems (CIS)

We are international research leaders with a focus on delivering impact and making a real difference in three key areas: data and knowledge, platforms and systems, and people and organisations.

At the School of Computing and Information Systems, you'll find curious people, big problems, and plenty of chances to create a real difference in the world.

To find out more about CIS, visit:http://www.cis.unimelb.edu.au/

About the Role

The Faculty of Engineering and Information Technology (FEIT) is seeking an aspiring academic leader with expertise in algorithms and their fairness in machine learning and/or AI (artificial intelligence) planning, or related fields, for a substantive position within the School of Computing and Information Systems (CIS).

You will join a world-class computer science research group with strong links to the Centre for AI & Digital Ethics (CAIDE), and you will be expected to collaborate with both, alongside other internationally respected groups across artificial intelligence, human-computer interaction, and information systems.

You are highly ambitious and eager to demonstrate world-leading research through publications in key conferences (typified by, but not limited to, FAccT, The Web Conference, KDD, NeurIPS, ICAPS, AAAI, IJCAI, ITCS, EC, CHI, CSCW) and in high-quality journals (typified by, but not limited to, ACM TKDD, AIJ, ACM Transactions on Economics and Computation, Proceedings of the National Academy of Sciences, Big Data and Society, AI and Society, AI and Ethics, TCS). You will make a valuable contribution to the School and the broader academic community through mentorship and contributions to teaching in various Masters programs related to algorithms, theory, digital ethics, and related areas, and you will provide critical leadership in engagement activities, including securing grant funding to support your program of research.

This is an exciting opportunity to further develop your academic and leadership profile and be supported to achieve your goals across all pillars of an academic career.

Responsibilities include:

About You

You are an aspiring leader with the ability to build a highly respected reputation in Machine Learning and/or AI Planning, as demonstrated through a significant track record of publications in high-impact peer-reviewed and refereed venues, and invitations to speak at national and international meetings. You are experienced in mentoring students, colleagues, and research teams, and you demonstrate great initiative in the establishment and nurturing of research projects. Your highly developed communication and relationship-building skills enable you to engage with a diverse range of people and institutions to develop partnerships that positively contribute to strategic initiatives.

You will also have:

For full details of responsibilities and selection criteria, including criteria for a Level D appointment, please refer to the attached position description.

To ensure the University continues to provide a safe environment for everyone, this position requires the incumbent to hold a current and valid Working with Children Check.

About - The Faculty of Engineering and Information Technology (FEIT)

The Faculty of Engineering and Information Technology (FEIT) has been the leading Australian provider of engineering and IT education and research for over 150 years. We are a multidisciplinary faculty organised into three key areas: Computing and Information Systems (CIS), Chemical and Biomedical Engineering (CBE), and Electrical, Mechanical and Infrastructure Engineering (EMI). FEIT continues to attract top staff and students with a global reputation and has a commitment to knowledge for the betterment of society.

https://eng.unimelb.edu.au/about/join-feit

About the University

The University of Melbourne is consistently ranked amongst the leading universities in the world. We are proud of our people, our commitment to research and teaching excellence, and our global engagement.

Benefits of Working with Us

In addition to having the opportunity to grow and be challenged, and to be part of a vibrant campus life, our people enjoy a range of rewarding benefits:

To find out more, visithttps://about.unimelb.edu.au/careers/staff-benefits.

Be Yourself

We value the unique backgrounds, experiences and contributions that each person brings to our community and encourage and celebrate diversity. First Nations people, those identifying as LGBTQIA+, females, people of all ages, with disabilities and culturally and linguistically diverse people are encouraged to apply. Our aim is to create a workforce that reflects the community in which we live.

Join Us!

If you feel this role is right for you, please apply with your CV and cover letter outlining your interest and experience. Please note that you are not required to provide responses against the selection criteria in the Position Description.

We are dedicated to ensuring barrier-free and inclusive practices to recruit the most talented candidates. If you require any reasonable adjustments with the recruitment process, please contact us at hr-talent@unimelb.edu.au.

Position Description:0054173_PD_C D in Fairness.pdf

Applications close: Monday 26 September 2022, 11:55 PM AUS Eastern Standard Time


Model-Agnostic Interpretation: Beyond SHAP and LIME – Geektime

Since machine learning models are statistical models, they naturally leave themselves open to potential errors. For example, Apple Card's fair lending fiasco brought into question the inherent discrimination in loan approval algorithms, while a project funded by the UK government that used AI to predict gun and knife crime turned out to be wildly inaccurate.

For people to trust machine learning models, we need explanations. It makes sense for a loan to be rejected due to low income, but if a loan gets rejected based on an applicant's zip code, this might indicate there's bias in the model, i.e., it may favour wealthier areas.

When choosing a machine learning algorithm, there's usually a tradeoff between the algorithm's interpretability and its accuracy. Traditional methods like decision trees and linear regression can be directly explained, but their ability to provide accurate predictions is limited. More modern methods such as Random Forests and Neural Networks give better predictions but are more difficult to interpret.

In the last few years, we've seen great advances in the interpretation of machine learning models with methods like LIME and SHAP. While these methods do require some background, analyzing the underlying data can offer a simple and intuitive interpretation. For this, we first need to understand how humans reason.

Let's think about the common example of the rooster's crow: If you grew up in the countryside, you might know that roosters always crow before the sun rises. Can we infer that the rooster's crow makes the sun rise? It's clear that the answer is no. But why?

Humans have a mental model of reality. We know that if the rooster doesn't crow, the sun rises anyway. This type of reasoning is called counterfactual.

This is the common way in which people make sense of reality. Counterfactual reasoning cannot be scientifically proven. Descartes' demon, or the idea of methodological skepticism, illustrates this idea: according to this concept, if Event B happens right after Event A, you can never be sure that there isn't some demon that causes B to happen right after A. The scientific field historically refrained from formalizing any discussion of causality. More recently, however, efforts have been made to create a scientific language that helps us better understand cause and effect. For additional information, be sure to read The Book of Why by Judea Pearl, a prominent computer science researcher and philosopher.

At my company, we have predictive models aimed at assessing customers' risk when they apply for a loan. The model uses historical data in a tabular format, in which each customer has a list of meaningful features like payment history, income, and incorporation date. Using this data, we predict the customer's level of risk and divide it into six different risk groups (or buckets). We interpret the model's predictions using both local and global explanations, then we use counterfactual analysis to explain our predictions to the business stakeholders.

Local explanations aim to explain a single prediction. We replace each feature's value with its median in the representative population and report, in text, the feature that caused the largest change in score. In the following example, the third feature is successful repayments, and its median is 0. We calculate new predictions while replacing the original feature's value with the new value (the median).

Customer_1 had their prediction changed to a reduced risk, and we can devise a short explanation: a higher number of successful repayments improved the customer's risk level. Or, in its more detailed version: the customer had 3 successful repayments compared to a median of 0 in the population. This caused the risk level to improve from level D to E.
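
A schematic sketch of that local explanation, with a hypothetical risk model and feature names standing in for the real ones; only the median-replacement logic mirrors the description above.

```python
# Local counterfactual explanation: replace each feature with the population
# median, one at a time, and report the feature with the largest score change.
# "model" is any hypothetical estimator with a predict() method returning a score.
import pandas as pd

def local_explanation(model, customer: pd.Series, population: pd.DataFrame):
    base_score = model.predict(customer.to_frame().T)[0]
    medians = population.median()

    changes = {}
    for feature in customer.index:
        counterfactual = customer.copy()
        counterfactual[feature] = medians[feature]       # swap in the median
        new_score = model.predict(counterfactual.to_frame().T)[0]
        changes[feature] = new_score - base_score

    top_feature = max(changes, key=lambda f: abs(changes[f]))
    return top_feature, changes[top_feature], medians[top_feature]

# Usage (hypothetical): feature, delta, median = local_explanation(risk_model, customer_row, train_df)
# -> "The customer had 3 successful repayments compared to a median of 0 in the population."
```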

Global explanations aim to explain a feature's direction in the model as a whole. An individual feature's value is replaced with one extreme value. For example, this value can be the 95th percentile, i.e., almost the largest value in the sample (95% of the values are smaller than it).

The changes in the score distribution are calculated and visualized in the chart below. The figure shows the change in the customers' risk levels when increasing the value to the 95th percentile.

When increasing the first listed feature (length of delay in payments) to the 95th percentile, a large portion of the customers have their risk level deteriorate one or more levels. A person who reviews this behaviour can easily accept that a delay in payments is expected to cause a worse risk level.

The second feature, monthly balance increase, has a combined effect: a small percentage of the customers have their risk level deteriorate, while a larger percentage have their risk level improve. This combined effect might indicate there's some interaction between features, although that is not something that can be directly explained through this method.

The third feature, years since incorporation, has a positive effect on the customers' risk level when increased to the 95th percentile. Here too, it is easy to accept that businesses that have been around for longer periods are likely to be more stable and therefore present less risk.
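
A similar schematic sketch of the global explanation, again with a hypothetical model and a made-up feature name: push one feature to its 95th percentile for every customer and tally how the risk levels shift.

```python
# Global counterfactual explanation: set a feature to its 95th percentile for all
# customers and measure how many risk levels improve, stay the same, or deteriorate.
import pandas as pd

def global_explanation(model, population: pd.DataFrame, feature: str):
    baseline_levels = pd.Series(model.predict(population), index=population.index)

    perturbed = population.copy()
    perturbed[feature] = population[feature].quantile(0.95)  # near-largest value
    new_levels = pd.Series(model.predict(perturbed), index=population.index)

    delta = new_levels - baseline_levels  # assumption: a higher level means lower risk
    return pd.Series({
        "improved": (delta > 0).mean(),
        "unchanged": (delta == 0).mean(),
        "deteriorated": (delta < 0).mean(),
    })

# Usage (hypothetical): global_explanation(risk_model, train_df, "years_since_incorporation")
```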

Unlike many other reasoning methods, the counterfactual approach allows for simple and intuitive data explanations that anyone can understand, which can increase the trust we have in machine learning models.

Written by Nathalie Hauser, Manager, Data Science at Bluevine


The U.S., China, and Europe are ramping up a quantum computing arms race. Heres what theyll need to do to win – Fortune

Every country is vying to get a head start in the race to the world's quantum future. A year ago, the United States, the United Kingdom, and Australia teamed up to develop military applications of digital technologies, especially quantum computing technologies. That followed the passage in 2019 of the National Quantum Initiative Act by the U.S. Congress, which laid out the country's plans to rapidly create quantum computing capabilities.

Earlier, Europe launched a $1 billion quantum computing research project, Quantum Flagship, in 2016, and its member states have started building a quantum communications infrastructure that will be operational by 2027. In like vein, China's 14th Five Year Plan (2021-2025) prioritizes the development of quantum computing and communications by 2030. In all, between 2019 and 2021, China invested as much as $11 billion, Europe spent $5 billion, the U.S. $3 billion, and the U.K. around $1.8 billion to become tomorrow's quantum superpowers.

As the scientific development of quantum technologies gathers momentum, creating quantum computers has turned into a priority for nations that wish to gain the next competitive advantage in the Digital Age. They're seeking this edge for two very different reasons. On the one hand, quantum technologies will likely transform almost every industry, from automotive and aerospace to finance and pharmaceuticals. These systems could create fresh value of between $450 billion and $850 billion over the next 15 to 30 years, according to recent BCG estimates.

On the other hand, quantum computing systems will pose a significant threat to cybersecurity the world over, as we argued in an earlier column. Hackers will be able to use them to decipher the public keys generated by the RSA cryptosystem, and to break through the security of any conventionally encrypted device, system, or network. It will pose a potent cyber-threat, popularly called Y2Q (Years to Quantum), to individuals and institutions as well as corporations and national governments. The latter have no choice but to tackle the unprecedented challenge by developing countermeasures such as post-quantum cryptography, which will itself require the use of quantum systems.

Countries have learned the hard way since the Industrial Revolution that general-purpose technologies, such as quantum computing, are critical for competitiveness. Consider, for instance, semiconductor manufacturing, which the U.S., China, South Korea, and Taiwan have dominated in recent times. When the COVID-19 pandemic and other factors led to a sudden fall in production over the last two years, it resulted in production stoppages and price increases in over 150 industries, including automobiles, computers, and telecommunications hardware. Many countries, among them the members of the European Union, Brazil, India, Turkey, and even the U.S., were hit hard, and are now trying to rebuild their semiconductor supply chains. Similarly, China manufactures most of the world's electric batteries, with the U.S. contributing only about 7% of global output. That's why the U.S. has recently announced financial incentives to induce business to create more electric battery-manufacturing capacity at home.

Much worse could be in store if countries and companies don't focus on increasing their quantum sovereignty right away. Because the development and deployment of such systems requires the efforts of the public and private sectors, it's important for governments to compare their efforts on both fronts with those of other countries.

The U.S. is expected to be the global frontrunner in quantum computing, relying on its tech giants, such as IBM and Google, to invent quantum systems, as well as numerous start-ups that are developing software applications. The latter attract almost 50% of the investments in quantum computing by venture capital and private equity funds, according to BCG estimates. Although the U.S. government has allocated only $1.1 billion, it has created mechanisms that effectively coordinate the efforts of all its agencies, such as NIST, DARPA, NASA, and the NQI.

Breathing down the U.S.'s neck is China, whose government has spent more on developing quantum systems than any other. Those investments have boosted academic research, with China producing over 10% of the world's research in 2021, according to our estimates, second only to the U.S. The spillover effects are evident: less than a year after Google's quantum machine had solved in minutes a calculation that would have taken supercomputers thousands of years to unravel, the University of Science and Technology of China (USTC) had cracked a problem three times tougher. As of September 2021, China hadn't spawned as many startups as the U.S., but it was relying on its digital giants, such as Alibaba, Baidu, and Tencent, to develop quantum applications.

Trailing only the U.S. and China, the European Union's quantum computing efforts are driven by its member states as well as the union. The EU's Quantum Flagship program coordinates research projects across the continent, but those efforts aren't entirely aligned yet. Several important efforts, such as those of France and Germany, run the risk of duplication or don't exploit synergies adequately. While the EU has spawned several startups that are working on different levels of the technology stack, such as Finland's IQM and France's Pasqal, many seem unlikely to scale because of the shortage of late-stage funding. In fact, the EU's startups have attracted only about one-seventh as much funding as their American peers, according to BCG estimates.

Finally, the U.K. was one of the first countries in the world to launch a government-funded quantum computing program. It's counting on its educational policies and universities, scholarships for postgraduate degrees, and centers for doctoral training to get ahead. Like the EU, the U.K. has also spawned promising start-ups, such as Orca, which announced the world's smallest quantum computer last year. However, British start-ups may not be able to find sufficient capital to scale, and many are likely to be acquired by the U.S.'s digital giants.

Other countries, such as Australia, Canada, Israel, Japan, and Russia, are also in the quantum computing race, and could carve out roles for themselves. For instance, Canada is home to several promising startups, such as D-Wave, a leader in annealing computers, while Japan is using public funds to develop a homegrown quantum computer by March 2023. (For an analysis of the comparative standings and challenges that countries face in quantum computing, please see the recent BCG report.)

Meanwhile, the locus of the quantum computing industry is shifting to the challenges of developing applications and adopting the technology. This shift offers countries, especially the follower nations, an opportunity to catch up with the leaders before it's too late. Governments must use four levers in concert to accelerate their quantum sovereignty:

* Lay the foundations. Governments have to invest more than they currently do if they wish to develop quantum systems over time, even as they strike partnerships to bring home the technology in the short run. Once they have secured the hardware, states must create shared infrastructure to scale the industry. The Netherlands, for instance, has set up Quantum Inspire, a platform that provides users with the hardware to perform quantum computations.

* Coordinate the stakeholders. Governments should use funding and influence to coordinate the work of public and private players, as the U.S. Quantum Coordination Office, for instance, does. In addition, policymakers must connect stakeholders to support the technology's development. That's how the U.S. Department of Energy, for instance, came to partner with the University of Chicago; together, they've set up an accelerator to connect startups with investors and scientific experts.

* Facilitate the transition. Governments must support businesses' transition to the quantum economy. They should offer monetary incentives, such as tax credits, infrastructure assistance, no- or low-interest financing, and free land, so incumbents will shift to quantum technologies quickly. The U.K., for instance, has recently expanded its R&D tax relief scheme to cover investments in quantum technologies.

* Develop the business talent. Instead of developing only academics and scientists, government policies will have to catalyze the creation of a new breed of entrepreneurial and executive talent that can fill key roles in quantum businesses. To speed up the process, Switzerland, for instance, has helped create a master's program, rather than offering only doctoral programs on the subject.

Not all general-purpose technologies affect a country's security and sovereignty as quantum computing does, but they're all critical for competitiveness. While many countries talk about developing quantum capabilities, their efforts haven't translated into major advances, as in the U.S. and China. It's time every government remembered that if it loses the quantum computing race, its technological independence will erode, and, unlike with Schrödinger's cat, there's no doubt that its global competitiveness will atrophy.

Read other Fortune columns by François Candelon.

François Candelon is a managing director and senior partner at BCG and global director of the BCG Henderson Institute.

Maxime Courtaux is a project leader at BCG and ambassador at the BCG Henderson Institute.

Gabriel Nahas is a senior data scientist at BCG Gamma and ambassador at the BCG Henderson Institute.

Jean-François Bobier is a partner & director at BCG.

Some companies featured in this column are past or current clients of BCG.


AWS Takes the Short and Long View of Quantum Computing – HPCwire

It is perhaps not surprising that the big cloud providers (a poor term, really) have jumped into quantum computing. Amazon, Microsoft Azure, Google, and their like have steadily transformed into major technology developers, no doubt in service of their large cloud services offerings. The same is true internationally. You may not know, for example, that China's cloud giants Baidu, Alibaba, and Tencent all have significant quantum development initiatives.

The global cloud crowd tends to leave no technology stone unturned, and quantum was no different. Now the big players are all-in. At Amazon, most of the public attention has centered on Braket, its managed quantum services offering that provides tools for learning and access to a variety of quantum computers. Less well-known are Amazon's Quantum Solutions Lab, Center for Quantum Computing, and Center for Quantum Networking, the last just launched in June. These four initiatives capture the scope of AWS's wide-ranging quantum ambitions, which include building a fault-tolerant quantum computer.

HPCwire recently talked with Simone Severini, director, quantum computing, AWS, about its efforts. A quantum physicist by training, Severini has been with AWS for roughly four years. He reports to AWS's overall engineering chief, Bill Vass. Noting that there's not much evidence NISQ-era systems will provide decisive business value soon, Severini emphasized quantum computing is a long-term bet. Now is the time for watching, learning, and kicking the tires on early systems.

"Amazon Braket provides a huge opportunity for doing that. Customers can keep an eye on the dynamics of the evolution of this technology. We believe there's really not a single path to quantum computing. It's very, very early, right. This is a point that I like to stress," said Severini. "I come from academia and have been exposed to quantum computing, one way or another, for over two decades. It's amazing to see the interest in the space. But we also need to be willing to set the right expectations. It's definitely very, very early still in quantum computing."

Launched in 2019, AWS describes Braket as a fully managed quantum computing service designed to help speed up scientific research and software development for quantum computing. This is not unlike what most big quantum computer makers, such as D-Wave, IBM, and Rigetti, also provide.

The premise is to provide all the quantum tools and hardware infrastructure that new and more experienced quantum explorers need, on a pay-as-you-go basis. Indeed, in the NISQ era, many believe such portal offerings are the only realistic way to deliver quantum computing. Cloud providers (and other concierge-like service providers, such as Strangeworks) have the advantage of being able to provide access to several different systems.

With Braket, said Severini, "Users don't have to sign contracts. Just go there, and you have everything you need to see what's going on [in quantum computing], to program or to simulate, and to use quantum computers directly. We have multiple devices with different [qubit] technologies on the service. The hope is that on one side, customers can indeed keep an eye on the technology; on the other side, researchers can run experiments and hopefully contribute to knowledge as well, contribute to science."

Braket currently offers access to quantum computers based on superconducting, trapped ion, photonic, and quantum annealers. Presumably other qubit technologies, cold atoms for example, will be added over time.

Interestingly, Braket is also a learning tool for AWS. Its an important exercise for us as well, because in this way, we can envision how quantum computers one day, would really feed a complex, cloud based infrastructure. Today, the workloads on Braket are all experimental, but for us, its important to learn things like security or operator usability, and the management of resources that we do for customers, said Severini. This is quite interesting, because in the fullness of time, a quantum computer could be used together with a lot of other classical resources, including HPC.

On the latter point, there is growing belief that much of quantum computing may indeed become a hybrid effort, with some pieces of applications best run on quantum computers and other parts best run on classical resources. We'll see. While it is still early days for the pursuit of hybrid classical-quantum computing, AWS launched Amazon Braket Hybrid Jobs late last year. Here's an excerpt of AWS's description:

Amazon Braket Hybrid Jobs enables you to easily run hybrid quantum-classical algorithms such as the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), that combine classical compute resources with quantum computing devices to optimize the performance of today's quantum systems. With this new feature, you only have to provide your algorithm script and choose a target device: a quantum processing unit (QPU) or quantum circuit simulator. Amazon Braket Hybrid Jobs is designed to spin up the requested classical resources when your target quantum device is available, run your algorithm, and release the instances after completion so you only pay for what you use. Braket Hybrid Jobs can provide live insights into algorithm metrics to monitor your algorithm as it progresses, enabling you to make adjustments more quickly. Most importantly, your jobs have priority access to the selected QPU for the duration of your experiment, putting you in control and helping to provide faster and more predictable execution.

To run a job with Braket Hybrid Jobs, you need to first define your algorithm using either the Amazon Braket SDK or PennyLane. You can also use TensorFlow and PyTorch or create a custom Docker container image. Next, you create a job via the Amazon Braket API or console, where you provide your algorithm script (or custom container), select your target quantum device, and choose from a variety of optional settings including the choice of classical resources, hyper-parameter values, and data locations. If your target device is a simulator, Braket Hybrid Jobs is designed to start executing right away. If your target device is a QPU, your job will run when the device is available and your job is first in the queue. You can define custom metrics as part of your algorithm, which can be automatically reported to Amazon CloudWatch and displayed in real time in the Amazon Braket console. Upon completion, Braket Hybrid Jobs writes your results to Amazon S3 and releases your resources.
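To make the hybrid quantum-classical pattern concrete, here is a rough sketch of the kind of variational loop (in the spirit of VQE or QAOA) that such a job would run: a classical optimizer repeatedly adjusts circuit parameters based on an expectation value measured on a quantum device. It uses PennyLane, which AWS names as a supported framework; the two-qubit ansatz, observable, and iteration count are illustrative assumptions of ours, and the local default.qubit simulator stands in for a Braket device.

```python
# Illustrative variational (VQE-style) loop with PennyLane (pip install pennylane).
import pennylane as qml
from pennylane import numpy as np

# Local simulator stands in here for a Braket device ARN.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def cost(params):
    # Toy two-qubit ansatz (our illustrative choice, not a prescribed circuit).
    qml.RY(params[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(params[1], wires=1)
    # Expectation value of Z0*Z1 serves as the quantity to minimise.
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

opt = qml.GradientDescentOptimizer(stepsize=0.2)
params = np.array([0.3, 0.7], requires_grad=True)

# Classical optimisation steps alternate with quantum evaluations: the hybrid pattern.
for _ in range(30):
    params, energy = opt.step_and_cost(cost, params)

print("final cost:", energy, "params:", params)
```

Submitting such a script as a managed hybrid job then follows the workflow described above. The sketch below shows the general shape of the call through the Braket SDK; the script name, entry point, and hyperparameter are hypothetical placeholders, and the current API details should be confirmed against the Braket documentation.

```python
# Sketch of creating a Braket Hybrid Job; file and entry-point names are placeholders.
from braket.aws import AwsQuantumJob

job = AwsQuantumJob.create(
    # SV1 managed simulator; a QPU ARN would instead queue until the device is available.
    device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    source_module="algorithm_script.py",       # hypothetical local module
    entry_point="algorithm_script:main",        # hypothetical entry function
    hyperparameters={"num_iterations": "30"},   # illustrative setting
    wait_until_complete=False,
)

print(job.arn, job.state())
# On completion, Braket writes results to S3; job.result() retrieves them.
```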

The second initiative, the Amazon Quantum Solutions Lab, is aimed at collaborative research programs; it is, in essence, Amazon's professional quantum services group.

"They engage in research projects with customers. For example, they recently wrote a paper with a team of researchers at Goldman Sachs. They run a very interesting initiative together with BMW Group, something called the BMW Group quantum computing challenge. BMW proposed four areas related to their interests, like logistics, manufacturing, some stuff related to automotive engineering, and there was a call for proposals to crowdsource solutions that use quantum computers to address these problems," said Severini.

"There were 70 teams, globally, that submitted solutions. I think this is very interesting because [it's still early days] and the fact is that quantum computers are not useful in business problems today. They can't [yet] be more impactful than classical computing. An initiative of this type can really help bridge the real world with the theory. We have several such initiatives," he said.

Building a Fault-Tolerant Computer

Amazon's efforts to build a fault-tolerant quantum computer are based at the AWS Center for Quantum Computing, located in Pasadena, Calif., and run in conjunction with Caltech. "We launched this initiative in 2019, but last year, in 2021, we opened a building that we built inside the campus of Caltech," said Severini. "It's a state-of-the-art research facility and we are doing research to build an error-corrected, fault-tolerant computer," he said.

AWS has settled on semiconductor-based superconducting qubit technology, citing the deep industry knowledge of semiconductor manufacturing techniques and scalability. The challenge, of course, is achieving fault tolerance. Today's NISQ systems are noisy and error-prone and require near-zero Kelvin temperatures. Severini said simply, "There are a lot of scientific challenges still, and there's a lot of engineering to be done."

"We believe strongly that there are two things that need to be done at this stage. One is improving error rates at the physical level and to invest in material science to really understand, on a fundamental level, how to build components that have an improvement with respect to error rates. The second point is [to develop] new qubit architectures for protecting qubits from errors," he said.

"This facility includes everything [to do] that. We are doing the full stack. We're building everything ourselves, from software to the architecture to the qubits and the wiring. These are long-term investments," said Severini.

AWS has been relatively quiet in promoting its quantum computer building effort. It has vigorously embraced competing qubit technologies on Braket, and Severini noted that it's still unclear how progress will unfold. Some approaches may work well for a particular application but not for others. AWS is tracking all of them, and has enlisted some prominent quantum researchers. For example, John Preskill, the Caltech researcher who coined the term NISQ, is an Amazon Scholar. (Preskill, fittingly, is the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology.)

Last February, AWS published a paper in PRX Quantum ("Building a fault-tolerant quantum computer using concatenated cat codes") that outlines its directional thinking. The abstract is excerpted below:

We present a comprehensive architectural analysis for a proposed fault-tolerant quantum computer based on cat codes concatenated with outer quantum error-correcting codes. For the physical hardware, we propose a system of acoustic resonators coupled to superconducting circuits with a two-dimensional layout. Using estimated physical parameters for the hardware, we perform a detailed error analysis of measurements and gates, including CNOT and Toffoli gates. Having built a realistic noise model, we numerically simulate quantum error correction when the outer code is either a repetition code or a thin rectangular surface code.

Our next step toward universal fault-tolerant quantum computation is a protocol for fault-tolerant Toffoli magic state preparation that significantly improves upon the fidelity of physical Toffoli gates at very low qubit cost. To achieve even lower overheads, we devise a new magic state distillation protocol for Toffoli states. Combining these results together, we obtain realistic full-resource estimates of the physical error rates and overheads needed to run useful fault-tolerant quantum algorithms. We find that with around 1000 superconducting circuit components, one could construct a fault-tolerant quantum computer that can run circuits, which are currently intractable for classical computers. Hardware with 18000 superconducting circuit components, in turn, could simulate the Hubbard model in a regime beyond the reach of classical computing.

The latest big piece of Amazon's quantum puzzle is the AWS Center for Quantum Networking, located in Boston. AWS says major news about the new center is forthcoming. The quantum networking center, said Severini, is focused on hardware, software, and commercial and scientific applications. That sounds like a lot and is perhaps in keeping with Amazon's ambitious quantum programs overall.

The proof of all these efforts, as the saying goes, will be in the pudding.

Stay tuned.

Feature Image: A microwave package encloses the AWS quantum processor. The packaging is designed to shield the qubits from environmental noise while enabling communication with the quantum computer's control systems. Source: AWS

More here:
AWS Takes the Short and Long View of Quantum Computing - HPCwire

India witnessing growing interest in quantum computing: IBM – The Hindu

Quantum computers could open the door to new scientific discoveries, life-saving drugs, and improvements in supply chains, logistics and the modelling of financial data

India has been witnessing growing interest in quantum computing, with students, developers, and academia actively participating. Consequently, the country is emerging as a talent hub for quantum computing, said Sandip Patel, MD, IBM India/South Asia region, in an interview. Edited excerpts:

Quantum computing is an exciting new technology that will shape our world of tomorrow by providing us with an edge and a myriad of possibilities. Quantum computing is a fundamentally different way of processing information compared to today's classical computing systems. While today's classical computers store information as binary 0 and 1 states, quantum computers draw on the fundamental laws of nature to carry out calculations using quantum bits. Unlike a bit, which has to be a 0 or a 1, a qubit can be in a combination of states, which allows for exponentially larger calculations and gives quantum computers the potential to solve complex problems that even the most powerful classical supercomputers cannot handle.
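A one-qubit example makes the difference from a classical bit concrete. The snippet below is a minimal sketch using IBM's open-source Qiskit (mentioned later in this interview): a single Hadamard gate puts a qubit into an equal superposition of 0 and 1, something no classical bit can represent.

```python
# Minimal superposition example with Qiskit (pip install qiskit).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)
qc.h(0)  # Hadamard gate: |0> -> (|0> + |1>) / sqrt(2)

state = Statevector.from_instruction(qc)
# Both basis states carry probability 0.5, unlike a classical bit.
print(state.probabilities_dict())  # {'0': 0.5, '1': 0.5} up to floating-point error
```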

Quantum computers tap into quantum mechanical phenomena to manipulate information and are expected to shed light on processes of molecular and chemical interactions, address difficult optimisation problems, and boost the power of artificial intelligence. Advances like these could open the door to new scientific discoveries, life-saving drugs, and improvements in supply chains, logistics and the modelling of financial data. IBM today is actively working with major corporations and governments to help advance their quantum roadmaps and help grow their pool of quantum talent, to make quantum computing practical for the benefit of science, industry and society.

In India, we are witnessing a growing interest in quantum computing, with active participation (amongst the highest) from students, developers, and academia in various initiatives like the IBM Quantum Challenge, IBM Quantum Summer School, Qiskit Challenge-India (Qiskit is an open-source software development kit built by IBM for the quantum developer community), and so on. We also have a growing community of Qiskit Advocates and IBM Quantum Ambassadors in India. Furthermore, we regularly organise India-focused programmes such as Qiskit India Week of Quantum, which celebrated women in quantum and helped kickstart their journeys in the field, and was attended by almost 300 students. The Qiskit textbook is available in Tamil, Bengali and Hindi and was accessed more than 30,000 times by students in India in 2021 alone. We see India as a talent hub for quantum computing skills, which are crucial for growing and maintaining such an interdisciplinary field.

Academia plays an important role in building skills for any deep technology, including quantum. Hence, last May, we announced our collaboration with leading educational institutions in India through the IBM Quantum Educators Programme. The faculty and students of these institutions will be able to access IBM Quantum systems, quantum learning resources, and quantum tools over IBM Cloud for educational purposes. This allows them to work on actual quantum computers and program them using the Qiskit open-source framework. In partnership with the Indian Institute of Technology Madras, IBM conducted a course on quantum computing on the NPTEL platform last year, which had more than 10,000 participants. We are also collaborating with academia on joint research in quantum computing and, recently, one of the research papers was accepted at a top physics conference.

India is poised to play a pivotal role in the quantum technology revolution globally. IBM is committed to helping India advance its quantum agenda by developing the talent and skills landscape and building an ecosystem with industry, business, academia and government. We are counting on the vibrant Indian talent and expertise to help solve some of the most pressing challenges. As per our quantum roadmap announced in 2021, IBM debuted its first 127-qubit processor. In 2022, IBM extended its quantum roadmap even further to clearly lay out how we will blaze a path towards frictionless quantum computing. This expanded roadmap includes our plans to build a 4,000+ qubit processor by 2023, along with significant milestones to build an intelligent quantum software orchestration platform that will abstract away the noise and complexity of quantum machines, and allow large and complicated problems to be easily broken apart and solved across a network of quantum and classical systems. Once realised, this era of quantum-centric supercomputing will open up new, large, and powerful computational spaces for industries globally.

In India, we have a strong team working across research, development, and consulting, working closely with academia, industry, and the public sector. Our team is working to support and accelerate India's national quantum mission and is participating in building a strong quantum ecosystem, which is crucial for success. The team has been constantly growing to support the needs of the Indian ecosystem and is expected to grow even further in the coming years as it supports more and more customers on their quantum journeys. We have quantum scientists and engineers around the world conducting fundamental research to improve the technology, as well as collaborating with our partners to advance toward practical applications with a quantum advantage for science and business. Quantum requires multidisciplinary skills, and IBM has the best scientists and engineers working together to improve the technology and drive applications of importance to the industry.

Read more:
India witnessing growing interest in quantum computing: IBM - The Hindu

Quantum Computing Market to Expand by 500% by 2028 | 86% of Investments in Quantum Computing Comes from 4 countries – GlobeNewswire

Westford, USA, Aug. 30, 2022 (GLOBE NEWSWIRE) -- Quantum computers are touted as the next big thing in computing. Major reliance on quantum computers could mean we're soon entering a new era of artificial intelligence, ubiquitous sensors, and more efficient drug discovery. While quantum computers are still in the earliest stages of development, growing interest in their capabilities means that they are likely to become a central part of future computing systems. This has created growing demand in the quantum computing market for hardware and software, with providers already reporting strong demand from major customers.

The promise of quantum computing is that it can solve complex problems much faster than traditional computers. This is because quantum computers are able to exploit the properties of subatomic particles such as photons, which are able to ferry information around extremely fast. So far, demand in the quantum computing market has come mainly from scientific and research uses.

However, this is set to change soon, as there is growing demand for quantum computers for various applications such as artificial intelligence (AI), machine learning, and data analytics. AI is one application that could benefit greatly from the speed and accuracy of quantum computing. AI relies on algorithms that are trained on large data sets and are able to learn and improve with repeated use. However, classical computers can take hours or even days to train an AI algorithm.

Get sample copy of this report:

https://skyquestt.com/sample-request/quantum-computing-market

Only 4 Countries are Responsible for 86% of Total Funding Since 2001

The quantum computing market is heating up. Companies like Google and IBM are racing to develop the technology, which could one day lead to massive improvements in artificial intelligence, cybersecurity, and other areas. As per SkyQuest's analysis, $1.9 billion in public funding was announced in the second half of 2021, which took total global funding since 2001 to $31 billion. It was also observed that most of the private and public funding comes from the US, which accounts for around 49% of private funding, followed by the UK (17%), Canada (14%), and China (6%).

In 2021, the global quantum computing market witnessed investment of around $3 billion, of which $1.9 billion came in the second half of the year. All this investment is coming from both private and public sources to capitalise on the opportunity of generating around $41 billion in revenue by the year 2040, at a CAGR of more than 30%. The market is projected to experience a significant surge in demand for quantum sensing and quantum communication in the years to come. As a result, investors have started pouring money in to take advantage of the rapidly expanding field. For instance, in 2021 alone, $1.1 billion out of $3 billion was invested in these two technologies: to be precise, $400 million and $700 million respectively.

SkyQuest has done a deep study of public and private investment coming into the global quantum computing market. This will help market participants understand who the major investors are, what their areas of interest are, and what makes them invest in the technology, alongside investor profile analysis and investment pockets, among others.

IonQ, Rigetti, and D-Wave are Emerging Players in Global Quantum Computing Market

As the quantum computing market becomes more mainstream, companies like IonQ, Rigetti and D-Wave are quickly proving they are the top emerging players in the field. IonQ has been working on developing trapped-ion quantum computer technology for several years now. IonQ's flagship product is the IonQ One, which is a single-core quantum computer that can process quantum information.

The IonQ One has already been deployed at a number of institutions, including NASA.

Rigetti is another company that has been making significant strides in the development of quantum computing technology. Rigetti's flagship product is the Rigetti Quilter, which is a scalable two-qubit quantum computer. The Rigetti Quilter is currently undergoing Phase II testing at NASA's Ames Research Center. D-Wave has also been making significant progress in the development of quantum computing technology. D-Wave's flagship product is the D-Wave Two, which is a five-qubit quantum computer. The D-Wave Two was recently deployed at Google to help physicists accelerate the discovery of new phenomena in physics.

Browse summary of the report and Complete Table of Contents (ToC):

https://skyquestt.com/report/quantum-computing-market

Rigetti has secured total funding of around $298 million through 11 rounds until 2022 in the global quantum computing market. As per our analysis, the company secured its latest funding through post-IPO equity, wherein Bessemer Venture Partners and Franklin Templeton Investments are the major investors in the company.

As per SkyQuest's findings, these three organizations collectively generated revenue of around $32 million in 2021, with a combined market cap of more than $3 billion. At the same time, however, they are facing heavy losses; in 2021, they posted a collective loss of over $150 million. We also observed that billions of dollars are being poured into building quantum computers, but most of the market players are not yet earning much revenue in terms of ROI.

SkyQuest has published a report on the global quantum computing market and has tracked all the current developments, market revenue, companies' growth plans and strategies, their ROI, SWOT analysis, and value chain analysis. Apart from this, the report provides insights into market dynamics, competitive landscape, market share analysis, opportunities, and trends, among others.

Machine Learning Generated Revenue of Over $189 Million in 2021

Today, machine learning is heavily used for training artificial intelligence systems on data. Quantum computing can help speed up the process of training these systems by vastly increasing the amount of data that can be processed. One potential advantage of quantum computing is the ability to perform Fast Fourier Transform (FFT) calculations millions of times faster than classical computers. This is important for tasks like image processing and machine learning, which rely on fast FFT algorithms for comparing data sets.
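For reference, the quantum analogue of the FFT is the quantum Fourier transform (QFT), which ships as a standard building block in common quantum SDKs. The snippet below is a minimal sketch that builds a three-qubit QFT with Qiskit's circuit library; the qubit count is an arbitrary illustrative choice, and the speedup claims above refer to idealised algorithmic scaling rather than today's noisy hardware.

```python
# Three-qubit quantum Fourier transform from Qiskit's circuit library.
from qiskit.circuit.library import QFT

qft = QFT(num_qubits=3)
# Decompose into elementary gates (Hadamards and controlled phase rotations) and display.
print(qft.decompose().draw())
```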

The huge potential of the quantum computing market has led to the development of several machine learning applications that use quantum computers. Some of these applications include fraud detection, drug discovery, and speech recognition. As per SkyQuest, the fraud detection and drug discovery markets were valued at around $25.1 billion and $75 billion, respectively. This represents a huge revenue opportunity for the quantum computing market.

This technology has been used for a variety of purposes, including predicting the stock market and automating tasks such as decision making and recommendations. In machine learning, generating revenue through traditional processing is a major challenge, since traditional computer processing can only handle a small amount of data at a time. This limits how much data can be used in machine learning projects, which in turn limits the accuracy of the predictions made by the resulting models, such as artificial neural networks (ANNs).

Quantum computing solves this problem by allowing computers to perform multiple calculations at the same time. This makes it possible to process vast amounts of data and make accurate predictions. As a result, quantum computing has already begun to revolutionize the machine learning market.

SkyQuest has prepared a report on the global quantum computing market. The report segments the market by application and provides an in-depth analysis of each application's revenue generation, market forecast, factors responsible for growth, and top players, among others. The report will help readers understand the potential of the global market by application and see how players are performing and generating revenue in each segment.

Speak to Analyst for your custom requirements:

https://skyquestt.com/speak-with-analyst/quantum-computing-market

Top Development in Global Quantum Computing Market

Top Players in Global Quantum Computing Market

Related Reports in SkyQuests Library:

Global Silicon Photonics Market

Global Data Center Transformer Market

Global Wireless Infrastructure Market

Global Cable Laying Vessel Market

Global Digital Twin Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email: sales@skyquestt.com


See the original post:
Quantum Computing Market to Expand by 500% by 2028 | 86% of Investments in Quantum Computing Comes from 4 countries - GlobeNewswire