The Secret Internet of TERFs – The Atlantic

Finally, Fain and the others settled on an open-source platform called Throat, run by the Argentinian developer Ramiro Bou. Throat was created in 2016 as an alternative to Voat, another Reddit alternative, which was hosting many of the most disgusting former subreddits and had already become unusably toxic, as might be expected of any site branded as a home for conversation too disgusting for 2015 Reddit. When I asked Bou about Ovarit's use of his code, he told me, "They're nice people," and that they're currently one of the most active communities on Throat.

So r/GenderCritical set up shop on a new instance of an alternative to an alternative to Reddit. Ovarit looks exactly like Reddit, except it's purple, and subreddits are called "circles." There is a circle called Cancelled, which is specifically for talking about attempts to silence those who speak out against the queercult. There is a circle called TransLogic, which is specifically for talking about misogynistic and illogical things trans activists say and do. There are general-interest circles for talking about books, television, science, and knitting. There is a circle called Radfemmery, which is for memes and jokes about how much the people in it do not like trans women.

The tone of the discussions in most of the circles is insular and defensive. Much of it is about the way Big Tech is censoring radical-feminist thought by driving wombyn (a deliberately exclusionary term that prizes women with female reproductive organs) off of their platforms, as well as the way the mainstream media has been taken over by a "tiny minority of men," which is how Ovarit's members refer to trans women. The plight of J. K. Rowling is revisited often.

In a practice carried over from Reddit, members are encouraged to share their conversion stories, which they confusingly call their "peak trans" moments. In a typical exchange, one woman explains that she came to Ovarit after dragging herself out of a trans-rights-oriented Tumblr community and falling down a YouTube rabbit hole; another replies that her story is extremely similar, right down to her discomfort with her previous social circle's expectation that she be supportive of men in lipstick. (Many of these stories are told with "a sense of excitement, guilt, fear; it's disturbing but thrilling," Lavery, the UC Berkeley professor, told me. "All the usual stuff that people who get involved in extremist groups find.") The users joke and bicker, like all political groups, and then they come back together, bonded by their shared experience of being unwelcome most anywhere else.

So far, the only major difference between Ovarit and r/GenderCritical is that here, nobody challenges the members. There are no outsider trolls butting into the conversation to tell them that they're wrong. "On Reddit, some women were uncomfortable being totally candid," Fain told me. "But here, they can be themselves." "It was really hard to be on Reddit as a woman," she said. Now, on Ovarit, "it's a big breath of fresh air."

Read the original post:

The Secret Internet of TERFs - The Atlantic

Show Your Work: D-Wave Opens the Door To Performance Comparisons Between Quantum Computing Architectures – GlobeNewswire

BURNABY, British Columbia, Dec. 08, 2020 (GLOBE NEWSWIRE) -- D-Wave Systems Inc., the leader in quantum computing systems, software, and services, today launched a first-of-its-kind cross-system software tool providing interoperability between quantum annealing and gate-model quantum computers. The open-source plugin allows developers to easily map quadratic optimization inputs in IBM's Qiskit format onto D-Wave's quadratic unconstrained binary optimization (QUBO) format and solve the same input on any quantum system supported in Qiskit. The code is available for free as a stand-alone package on GitHub and marks a major industry milestone: the ability to use, test, solve and compare real applications with both gate-model and annealing quantum computers. For the first time, developers and forward-thinking businesses can have a real assessment of the benefits of different systems on their applications.

Interoperability is a critical step in the maturation of transformative technologies. Until now, there hasn't been a convenient way to send the same problems to solvers on both gate-model and D-Wave systems, or to obtain head-to-head comparisons of results from the two different quantum computing systems. Before today, using a different quantum computing vendor's hardware and software required significant investment to familiarize developers with code, solvers, and SDKs.

D-Wave's industry-first open-source package removes those barriers. Qiskit users can now submit Ising Hamiltonians to the D-Wave quantum computer, in addition to any gate-model system Qiskit supports. Now, cross-paradigm transparency and comparison will give quantum developers the flexibility to try different systems, while providing businesses with key insights into performance so they can identify, build, and scale quantum applications.
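As a rough illustration of what this cross-paradigm workflow looks like in code, the sketch below expresses one small QUBO in both IBM's Qiskit format and D-Wave's binary quadratic model format. The release does not show the plugin's own API, so the mapping here is written out by hand, and the package and class names follow the late-2020 releases of qiskit-aqua and dimod; treat it as a sketch under those assumptions rather than the plugin itself.

```python
# Sketch only: one QUBO expressed in both vendors' formats. The plugin
# automates this mapping; here the same coefficient dictionaries are fed
# to Qiskit's QuadraticProgram and D-Wave's BinaryQuadraticModel by hand.
import dimod
from qiskit.optimization import QuadraticProgram  # qiskit-aqua 0.8-era path

linear = {"x0": -1.0, "x1": -1.0}      # objective: minimize -x0 - x1 + 2*x0*x1
quadratic = {("x0", "x1"): 2.0}

# The problem in IBM's Qiskit format...
qp = QuadraticProgram("toy_qubo")
qp.binary_var("x0")
qp.binary_var("x1")
qp.minimize(linear=linear, quadratic=quadratic)

# ...and the same problem in D-Wave's QUBO/BQM format.
bqm = dimod.BinaryQuadraticModel(linear, quadratic, 0.0, dimod.BINARY)

# Solve the BQM with a local brute-force solver. Substituting
# EmbeddingComposite(DWaveSampler()) from dwave.system would run it on a
# quantum annealer, while the Qiskit side can target any gate-model
# backend Qiskit supports -- the head-to-head comparison described above.
print(dimod.ExactSolver().sample(bqm).first)   # best energy -1 at x0=1,x1=0 or x0=0,x1=1
```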

The company also called for users to publish their work.

"In order for the quantum computing ecosystem to fully mature, the developer and business communities alike need access to diverse quantum systems and the ability to compare cross-architectural performance," said Alan Baratz, CEO, D-Wave. "The next few years will bring a proliferation of quantum applications, and companies must be able to make informed decisions about their quantum computing investments and initiatives to stay competitive. We've moved beyond measures that explore 'does the system work?' Instead, enterprises want to benchmark which systems add the most value to their businesses. We're opening the door to this, and we encourage users of the tool to share their work and publish their results."

The news is in line with D-Wave's ongoing mission to provide practical quantum computing via access to the most powerful quantum hardware, software, and tools. In 2018, D-Wave brought the Leap quantum cloud service and open-source Ocean SDK to market. In February 2020, Leap expanded to include new hybrid solver services to solve real-world, business-sized problems. At the end of September, D-Wave made available the Advantage quantum system, with more than 5,000 qubits, 15-way qubit connectivity, and expanded hybrid solver services that can run problems with up to one million variables. The combination of the computing power of Advantage and the scale to address real-world problems with the hybrid solver services in Leap enables businesses to run performant, real-time, hybrid quantum applications for the first time. And with the new cross-system software tool, users can now benchmark their applications across annealing and gate-model systems, to further understand and benefit from performance comparisons.

To download and install the cross-paradigm integration plugin for free, click here.

As part of its commitment to enabling businesses to build in-production quantum applications, the company also introduced D-Wave Launch, a jump-start program for businesses who want to get started building hybrid quantum applications today but may need additional support.

About D-Wave Systems Inc.

D-Wave is the leader in the development and delivery of quantum computing systems, software and services and is the world's first commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing for the world. We do this by delivering customer value with practical quantum applications for problems as diverse as logistics, artificial intelligence, materials sciences, drug discovery, cybersecurity, fault detection, and financial modeling. D-Wave's systems are being used by some of the world's most advanced organizations, including NEC, Volkswagen, DENSO, Lockheed Martin, USC, and Los Alamos National Laboratory. With headquarters near Vancouver, Canada, D-Wave's US operations are based in Palo Alto, CA and Bellevue, WA. D-Wave has a blue-chip investor base including PSP Investments, Goldman Sachs, BDC Capital, NEC Corp., and In-Q-Tel. For more information, visit: http://www.dwavesys.com.

Contact: D-Wave Systems Inc., dwave@launchsquad.com

See original here:

Show Your Work: D-Wave Opens the Door To Performance Comparisons Between Quantum Computing Architectures - GlobeNewswire

Why Cybersecurity And The Quantum Threat Should Be A Priority In Your Company’s Agenda – Benzinga

01 Communique is one of the sponsors for the upcoming Benzinga Global Small Cap Conference set to take place on December 8-9, 2020.

We are in the early days of a new breed of computers with unprecedented power: quantum computers. Their unique capabilities will create opportunities for innovation in every industry. Yet, with this comes The Quantum Threat.

Current cybersecurity technologies can only protect against conventional computers and forms of hacking. Soon, cyber-attacks will be conducted through quantum computers. 01 Communique Laboratory Inc (OTCQB: OONEF) (TSXV: ONE) has been monitoring the evolution of quantum computers since the technology's infancy.

So, to stay ahead of this new threat, here is what you need to know and how you can prepare.

Quantum computers are ultra-high-speed computers with a million times the processing speed of a conventional supercomputer. Conventional computers use long strings of "bits," which encode either a zero or a one. A quantum computer uses quantum bits, or qubits, harnessing and exploiting the laws of quantum mechanics to process information.

The technology is in its early stages; however, the threat is here and is extremely real. Quantum computing will leave the security of your data, communications, and even blockchains fatally exposed. Right now, this threat sits high on every organization's agenda, and those who look beyond today's challenges are proactively planning for the imminent danger its arrival will bring.

For Andrew Cheung, CEO of 01 Communique, the race against the future quantum threat began over ten years ago.

"We feel like Wayne Gretzky skating to meet the puck by anticipating Q-Day and work with our partners to embrace it when everyone is desperately looking for a quantum-safe solution," said Cheung.

Quantum computing can shorten the time needed to break an encryption problem from an estimated hundred years to mere seconds.

01 Communique has identified these risks and has created solutions to protect against them. Most of the current encryption used by governments today is based on prime-number factorization. "Becoming quantum-safe 2 years too early or 2 years too late is everything," said Cheung.
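The factoring threat is easy to demonstrate in miniature. The toy sketch below (illustrative only, not 01 Communique's technology) recovers the prime factors of a tiny RSA-style modulus by brute force; a classical attacker needs time that grows exponentially with the key size, but Shor's algorithm on a sufficiently large quantum computer factors in polynomial time, which is exactly why factorization-based encryption is quantum-vulnerable.

```python
# Illustrative only: why factorization-based encryption breaks once
# factoring is cheap. Knowing p and q of n = p*q lets an attacker derive
# the private key; Shor's algorithm makes this step fast on a quantum computer.
def trial_factor(n: int) -> int:
    """Return a nontrivial factor of n by brute force (exponential in bit length)."""
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    raise ValueError("n is prime")

n = 2021                 # toy modulus; real RSA moduli are 2048+ bits
p = trial_factor(n)
q = n // p
print(p, q)              # 43 47: the secret primes, hence the private key
```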

The key to creating quantum-safe encryption lies in mathematics. 01 Communique's advanced post-quantum cryptography technologies will guard against cyberattacks from conventional computers as well as future attacks from quantum computers, so data can be safe not only now but also in the quantum future. The company's cryptographic technology, IronCAP, operates on conventional computer systems to protect platforms today and in the world of quantum computers.

"Quantum advancements in the cybercommunity have led to the birth of IronCAP. We are providing tomorrow's cybersecurity, today," said Cheung.

The company's cybersecurity business unit focuses on post-quantum cybersecurity through the development of its IronCAP technology. IronCAP's patent-pending cryptographic system is an advanced Goppa-code-based post-quantum cryptographic technology that can be implemented on conventional computer systems and can also safeguard against attacks in the quantum future. It is designed to protect users and enterprises from illegitimate and malicious attempts to gain access to data, faster and more securely than current standard cybersecurity.

With this threat also comes The Quantum Race between the open and the closed world. Quantum-safe protection is being considered now by large organizations that foresee potential threats. Companies like IBM (NYSE: IBM), Alphabet Inc (NASDAQ: GOOGL), Amazon.com, Inc. (NASDAQ: AMZN), and Honeywell International Inc. (NYSE: HON) are at the forefront, with quantum computers already available.

01 Communique's current strategic partnerships include CGI Inc (TSE: GIB.A), PwC, Hitachi, and ixFintech. The company expects to add at least eight more over the next 12 to 18 months.

On April 23, 2020, 01 Communique made IronCAP X, a new cybersecurity product for end-to-end, quantum-safe email/file encryption, commercially available. IronCAP X delivers each encrypted message end-to-end so that only the intended recipients can decrypt and read it. Consumers' individual messages are protected, eliminating hackers' incentive to attack email providers' servers.

During the company's third quarter, 01 Communique increased its revenue, resulting in close-to-breakeven financial results, and added capital that allows it to allot a further $938,000 to advancing the growth of its business and the development of products based on IronCAP technology. 01 Communique is well funded and debt-free, offering both the encryption engine and a vertical solution, with security validated through a month-long hackathon.

For more information, visit the 01 Communique website at http://www.ironcap.ca and http://www.01com.com.

© 2020 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

See original here:
Why Cybersecurity And The Quantum Threat Should Be A Priority In Your Company's Agenda - Benzinga

This measurement tool could help settle the quantum supremacy debate once and for all – TechRadar

Assessing the performance and superiority of quantum computers compared to traditional computers can be quite difficult, which is why the IT services firm Atos has introduced Q-score.

The company's Q-score measures how effective a quantum system is at handling the kinds of real-life problems that cannot be solved by today's traditional computers as opposed to simply measuring the theoretical performance of a quantum computer.

Google, IBM, Honeywell, and other organizations currently developing quantum computers all have one goal in mind: to achieve quantum supremacy. This concept was originally put forth by Caltech professor John Preskill, and in order to reach it, a company would need to demonstrate that a quantum computer could do something that today's classical computers cannot.

Atos CEO Elie Girard explained why the company developed Q-score in a press release, saying:

"Faced with the emergence of a myriad of processor technologies and programming approaches, organizations looking to invest in quantum computing need a reliable metric to help them choose the most efficient path for them. Being hardware-agnostic, Q-score is an objective, simple and fair metric which they can rely on. Since the launch of Atos Quantum in 2016, the first quantum computing industry program in Europe, our aim has remained the same: advance the development of industry and research applications, and pave the way to quantum superiority."

The number of qubits (quantum bits) found in a quantum computer is the most common figure of merit used today to assess the performance of quantum systems. However, qubits are volatile and vary greatly in quality (speed, stability, connectivity, etc.) from one quantum technology to another, which makes them an imperfect benchmarking tool.

By focusing on the ability of a quantum computer to solve well-known combinatorial optimization problems, Atos' Q-score will provide research centers, universities and businesses with explicit, reliable, objective and comparable results when solving real-world optimization problems.

The company's Q-score relies on a standard combinatorial optimization problem known as the Max-Cut Problem to provide a frame of reference for comparing performance scores while maintaining uniformity. A quantum system's Q-score is then calculated based on the number of variables within a problem that the quantum technology can optimize. For example, if a system can optimize 23 variables, it would receive a Q-score of 23.
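To make that scoring rule concrete, here is a minimal sketch of the Max-Cut problem itself (the graph is a made-up toy, and this brute-force search is not Atos' software kit): each binary variable assigns one node to a side of the cut, and the goal is to maximize the number of edges crossing it. Brute force takes 2^N evaluations for N variables, which is why the largest N a machine can optimize makes a natural benchmark.

```python
# Brute-force Max-Cut on a toy graph: each of the 2^N assignments puts
# every node on side 0 or side 1, and the score is the number of edges
# whose endpoints land on opposite sides.
from itertools import product

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0)]   # toy 5-node graph
n = 5   # number of binary variables; Q-score counts how large this can get

def cut_size(assign):
    return sum(1 for u, v in edges if assign[u] != assign[v])

best = max(product((0, 1), repeat=n), key=cut_size)
print(best, cut_size(best))   # an optimal partition cuts 5 of the 6 edges
```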

Atos will publish an annual list of the most powerful quantum processors in the world based on Q-score; the first report, arriving in 2021, will include self-assessments provided by manufacturers. The company will also release, in the first quarter of next year, a free software kit that enables Q-score to be run on any processor.

Continue reading here:
This measurement tool could help settle the quantum supremacy debate once and for all - TechRadar

Mapping quantum structures with light to unlock their capabilities – University of Michigan News

A new tool that uses light to map out the electronic structures of crystals could reveal the capabilities of emerging quantum materials and pave the way for advanced energy technologies and quantum computers, according to researchers at the University of Michigan, University of Regensburg and University of Marburg.

A paper on the work is published in Science.

Applications include LED lights, solar cells and artificial photosynthesis.

"Quantum materials could have an impact way beyond quantum computing," said Mackillo Kira, professor of electrical engineering and computer science at the University of Michigan, who led the theory side of the new study. "If you optimize quantum properties right, you can get 100% efficiency for light absorption."

Silicon-based solar cells are already becoming the cheapest form of electricity, although their sunlight-to-electricity conversion efficiency is rather low, about 30%. Emerging 2D semiconductors, which consist of a single layer of crystal, could do much better, potentially using up to 100% of the sunlight. They could also elevate quantum computing to room temperature from the near-absolute-zero machines demonstrated so far.

"New quantum materials are now being discovered at a faster pace than ever," said Rupert Huber, professor of physics at the University of Regensburg in Germany, who led the experimental work. "By simply stacking such layers one on top of the other under variable twist angles, and with a wide selection of materials, scientists can now create artificial solids with truly unprecedented properties."

The ability to map these properties down to the atoms could help streamline the process of designing materials with the right quantum structures. But these ultrathin materials are much smaller and messier than earlier crystals, and the old analysis methods don't work. Now, 2D materials can be measured with the new laser-based method at room temperature and pressure.

The measurable operations include processes that are key to solar cells, lasers and optically driven quantum computing. Essentially, electrons pop between a ground state, in which they cannot travel, and states in the semiconductor's conduction band, in which they are free to move through space. They do this by absorbing and emitting light.

The electrons absorb laser light and set up momentum combs (the hills) spanning the energy valleys within the material (the red line). When the electrons have an energy allowed by the quantum mechanical structure of the material, and also touch the edge of the valley, they emit light. This is why some teeth of the combs are bright and some are dark. By measuring the emitted light and precisely locating its source, the research mapped out the energy valleys in a 2D crystal of tungsten diselenide. Image credit: Markus Borsch, Quantum Science Theory Lab, University of Michigan.

The quantum mapping method uses a 100 femtosecond (100 quadrillionths of a second) pulse of red laser light to pop electrons out of the ground state and into the conduction band. Next the electrons are hit with a second pulse of infrared light. This pushes them so that they oscillate up and down an energy valley in the conduction band, a little like skateboarders in a halfpipe.

The team uses the dual wave/particle nature of electrons to create a standing wave pattern that looks like a comb. They discovered that when the peak of this electron comb overlaps with the material's band structure (its quantum structure), electrons emit light intensely. That powerful light emission, along with the narrow width of the comb lines, helped create a picture so sharp that researchers call it "super-resolution."

By combining that precise location information with the frequency of the light, the team was able to map out the band structure of the 2D semiconductor tungsten diselenide. Not only that, but they could also get a read on each electron's orbital angular momentum through the way the front of the light wave twisted in space. Manipulating an electron's orbital angular momentum, known also as a pseudospin, is a promising avenue for storing and processing quantum information.

In tungsten diselenide, the orbital angular momentum identifies which of two different valleys an electron occupies. The messages that the electrons send out can show researchers not only which valley the electron was in but also what the landscape of that valley looks like and how far apart the valleys are, which are the key elements needed to design new semiconductor-based quantum devices.

For instance, when the team used the laser to push electrons up the side of one valley until they fell into the other, the electrons emitted light at that drop point, too. That light gives clues about the depths of the valleys and the height of the ridge between them. With this kind of information, researchers can figure out how the material would fare for a variety of purposes.

The paper is titled "Super-resolution lightwave tomography of electronic bands in quantum materials." This research was funded by the Army Research Office, German Research Foundation and U-M College of Engineering Blue Sky Research Program.

The rest is here:
Mapping quantum structures with light to unlock their capabilities - University of Michigan News

What is machine learning? Here’s what you need to know – Business Insider – Business Insider

Machine learning is a fast-growing and successful branch of artificial intelligence. In essence, machine learning is the process of allowing a computer system to teach itself how to perform complex tasks by analyzing large sets of data, rather than being explicitly programmed with a particular algorithm or solution.

In this way, machine learning enables a computer to learn how to perform a task on its own and to continue to optimize its approach over time, without direct human input.

In other words, it's the computer that is creating the algorithm, not the programmers, and often these algorithms are sufficiently complicated that programmers can't explain how the computer is solving the problem. Humans can't trace the computer's logic from beginning to end; they can only determine if it's finding the right solution to the assigned problem, which is output as a "prediction."

There are several different approaches to training expert systems that rely on machine learning, specifically "deep" learning that functions through the processing of computational nodes. Here are the most common forms:

Supervised learning is a model in which computers are given data that has already been structured by humans. For example, computers can learn from databases and spreadsheets in which the data has already been organized, such as financial data or geographic observations recorded by satellites.

Unsupervised learning uses databases that are mostly or entirely unstructured. This is common in situations where the data is collected in a way that humans can't easily organize or structure it. A common example of unsupervised learning is spam detection, in which a computer is given access to enormous quantities of emails and learns on its own to distinguish between wanted and unwanted mail.

Reinforcement learning is when humans monitor the output of the computer system and help guide it toward the optimal solution through trial and error. One way to visualize reinforcement learning is to view the algorithm as being "rewarded" for achieving the best outcome, which helps it determine how to interpret its data more accurately.
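To make the supervised case concrete, here is a minimal sketch using Python and scikit-learn (assumed installed); the dataset and model are generic stand-ins rather than anything from a specific product.

```python
# Supervised learning in miniature: fit a model on human-labeled examples,
# then score its predictions on examples it has never seen.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # structured data with human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)              # the computer derives the rules itself
print(model.score(X_test, y_test))       # accuracy on held-out examples
```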

The field of machine learning is very active right now, with many common applications in business, academia, and industry. Here are a few representative examples:

Recommendation engines use machine learning to learn from previous choices people have made. For example, machine learning is commonly used in software like video streaming services to suggest movies or TV shows that users might want to watch based on previous viewing choices, as well as "you might also like" recommendations on retail sites.

Banks and insurance companies rely on machine learning to detect and prevent fraud through subtle signals of strange behavior and unexpected transactions. Traditional methods for flagging suspicious activity are usually very rigid and rules-based, which can miss new and unexpected patterns, while also overwhelming investigators with false positives. Machine learning algorithms can be trained with real-world fraud data, allowing the system to classify suspicious fraud cases far more accurately.
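As a hedged illustration of that pattern, the sketch below trains a classifier on synthetic transaction data; real systems learn from labeled historical fraud, as described above, and the class_weight="balanced" option shown here is one standard way to keep the rare fraud class from being drowned out by legitimate transactions.

```python
# Fraud-detection pattern on synthetic data: the fraud class is rare, so the
# classifier is told to re-weight it rather than optimize raw accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))        # stand-in transaction features
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(size=5000) > 3.5).astype(int)  # ~8% "fraud"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # note recall on the rare class
```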

Inventory optimization, a part of the retail workflow, is increasingly performed by systems trained with machine learning. Machine learning systems can analyze vast quantities of sales and inventory data to find patterns that elude human inventory planners. These computer systems can produce more accurate probability forecasts for customer demand.

Machine automation increasingly relies on machine learning. For example, self-driving car technology is deeply indebted to machine learning algorithms for the ability to detect objects on the road, classify those objects, and make accurate predictions about their potential movement and behavior.

Read more:
What is machine learning? Here's what you need to know - Business Insider - Business Insider

What are the roles of artificial intelligence and machine learning in GNSS positioning? – Inside GNSS

For decades, artificial intelligence and machine learning have advanced at a rapid pace. Today, there are many ways artificial intelligence and machine learning are used behind the scenes to impact our everyday lives, such as social media, shopping recommendations, email spam detection, speech recognition, self-driving cars, UAVs, and so on.

Artificial intelligence simulates human intelligence: machines are programmed to think like humans and to mimic our actions in pursuit of a specific goal. In our own field, machine learning has also changed the way we solve navigation problems and will take on a significant role in advancing PNT technologies in the future.

LI-TA HSU, HONG KONG POLYTECHNIC UNIVERSITY

Q: Can machine learning replace conventional GNSS positioning techniques?

Actually, it makes no sense to use ML when the exact physical/mathematical models of GNSS positioning are known, and using machine learning (ML) techniques over any appreciable area to collect extensive data and train a network to estimate receiver locations would be an impractically large undertaking. We human beings designed the satellite navigation systems based on the laws of physics we have discovered. For example, we use Kepler's laws to model the positions of satellites in orbit. We use the spread-spectrum technique to model the satellite signal, allowing us to acquire very weak signals transmitted from medium-Earth orbit. We understand the Doppler effect, and we design tracking loops to track the signal and decode the navigation message. Finally, we use trilateration to model the positioning and least squares to estimate the location of the receiver. Through the efforts of GNSS scientists and engineers over the past several decades, GNSS can now achieve centimeter-level positioning. The problem is: if everything is so perfect, why don't we have perfect GNSS positioning?

The answer for me as an ML specialist is that the assumptions made are not always valid in all contexts and applications! In trilateration, we assume the satellite signal is always received via a direct line-of-sight (LOS) path. However, different layers of the atmosphere can diffract the signal. Luckily, remote-sensing scientists have studied the troposphere and ionosphere and come up with sophisticated models to mitigate the ranging error caused by transmission delay. But the multipath effects and non-line-of-sight (NLOS) receptions caused by buildings and obstacles on the ground are much harder to deal with, due to their high nonlinearity and complexity.
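To ground the ideal LOS model, here is a toy sketch (in Python with NumPy, purely illustrative) of the trilateration-plus-least-squares step described above; the satellite geometry is synthetic and the receiver clock-bias state is omitted for clarity. Multipath and NLOS matter precisely because they bias the measured ranges that this estimator trusts.

```python
# Toy trilateration: iterative (Gauss-Newton) least squares recovers the
# receiver position from satellite positions and measured ranges. Real
# receivers also estimate a clock bias as a fourth state; omitted here.
import numpy as np

sats = np.array([[15e6, 10e6, 20e6],        # synthetic satellite positions (m)
                 [-10e6, 18e6, 21e6],
                 [5e6, -15e6, 22e6],
                 [-18e6, -5e6, 19e6]])
truth = np.array([1.2e6, -0.8e6, 0.0])      # "true" receiver position
ranges = np.linalg.norm(sats - truth, axis=1)   # noiseless LOS measurements

x = np.zeros(3)                             # initial guess at Earth's center
for _ in range(10):
    pred = np.linalg.norm(sats - x, axis=1)
    H = (x - sats) / pred[:, None]          # Jacobian: d(range)/d(position)
    dx, *_ = np.linalg.lstsq(H, ranges - pred, rcond=None)
    x += dx                                 # Gauss-Newton update
print(x)   # converges to `truth`; NLOS reflections would bias `ranges`
```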

Q: What are the challenges of GNSS and how can machine learning help with it?

GNSS performs very differently under different contexts. Context means what and where: for example, a pedestrian walking in an urban canyon, or a pedestrian sitting in a car driving on a highway. The notorious multipath and NLOS effects play major roles in the performance of a GNSS receiver under different contexts. If we follow the same logic as the ionospheric research to deal with the multipath effect, we need to study 3D building models, since buildings are the main cause of the reflections. Extracted from our previous research, the right panel of Figure 1 is simulated based on an LOD1 building model and a single-reflection ray-tracing algorithm. It reveals that the positioning error caused by multipath and NLOS is highly site-dependent. In other words, the nonlinearity and complexity of multipath and NLOS are very high.

Generally speaking, ML derives a model based on data. What exactly does ML do best?

Phenomena we simply do not know how to model by explicit laws of physics/math, for example, contexts and semantics.

Phenomena with high complexity, time variance and nonlinearity.

Looking at the challenges of GNSS multipath and the potential of ML, it becomes straightforward to apply artificial intelligence to mitigate multipath and NLOS. One mainstream idea is to use ML to train models that classify LOS, multipath, and NLOS measurements. This idea is illustrated in Figure 2. Three steps are required: data labeling, classifier training, and classifier evaluation. In fact, there are challenges in each step.

Are we confident in our labeling?

In our work, we use 3D city models and ray-tracing simulation to label the measurements we receive from the GNSS receiver. The labels may not be 100% correct, since the 3D models are not comprehensive enough to represent the real world. Trees and dynamic objects (vehicles and pedestrians) are not included. In addition, multiply-reflected signals are very hard to trace, and the 3D models themselves could contain errors.

What are the classes and features?

For the classes, popular selections are the presence (binary) of multipath or NLOS and their associated pseudorange errors. The features are selected from the variables that are affected by multipath, including the carrier-to-noise ratio, pseudorange residual, DOP, etc. If we can access a level deeper, in the correlator, the shapes of the correlator outputs in code and carrier are also excellent features. Our study compares different levels (correlator, RINEX, and NMEA) of features for the GNSS classifier and reveals that the rawer the feature, the better the classification accuracy that can be obtained. Finally, exploratory data analysis methods, such as principal component analysis, can help select the features that are most representative of the class.
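A hedged sketch of the classifier-training step, using scikit-learn: the feature names follow the text (carrier-to-noise ratio, pseudorange residual, plus satellite elevation as an extra example), but the data and the labeling rule are synthetic placeholders; in the pipeline described above, the labels come from 3D-city-model ray tracing, not from a formula.

```python
# Placeholder LOS/multipath/NLOS classifier: synthetic features and labels
# stand in for ray-tracing-labeled GNSS measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 3000
cn0 = rng.uniform(20, 50, n)          # carrier-to-noise ratio (dB-Hz)
residual = rng.normal(0, 5, n)        # pseudorange residual (m)
elevation = rng.uniform(5, 90, n)     # satellite elevation angle (deg)
X = np.column_stack([cn0, residual, elevation])

# Labels: 0 = LOS, 1 = multipath, 2 = NLOS. This rule is a stand-in for
# the 3D-model ray-tracing labels used in the work described above.
y = np.where(cn0 > 38, 0, np.where(np.abs(residual) < 6, 1, 2))

clf = GradientBoostingClassifier()
print(cross_val_score(clf, X, y, cv=5).mean())   # mean classification accuracy
```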

Are we confident that the data we used to train the classifier are representative enough for the general application cases?

Overfitting of the data is always a challenge for ML. Multipath and NLOS effects are very different in different cities. For example, the architectures of Europe and Asia are very different, producing different multipath effects. Classifiers trained using data from Hong Kong do not necessarily perform well in London. The categorization of cities or urban areas in terms of their effects on GNSS multipath and NLOS is still an open question.

Q: What are the challenges of integrated navigation systems and how can machine learning can help with them?

Seamless positioning has always been the ultimate goal. However, each sensor performs differently in different areas. Table 1 gives a rough picture. Inertial sensors seem to perform stably in most areas, but MEMS INS suffers from drift and is highly affected by random noise caused by temperature variations. Naturally, integrated navigation is a solution. Sensor integration, in fact, should be considered in both the long term and the short term.

Long-term Sensor Selection

In the long term, the sensors available for positioning are generally more than enough. The question to ask is how to determine the best subset of sensors to integrate. Consider an example of seamless positioning for a city dweller travelling from home to the office:

Walking on a street to the subway station (GNSS+IMU)

Walking in a subway station (Wi-Fi/BLE+IMU)

Traveling on a subway (IMU)

Walking in an urban area to the office (VPS+ GNSS+ Wi-Fi/BLE+IMU)

This example clearly shows that seamless positioning should integrate different sensors. The selection of the sensors can be done heuristically or by maximizing the observability of sensors. If the sensors are selected heuristically, we must have the ability to know what context the system is operating under. This is one of the best angles for ML to cut in. In fact, the classification of scenarios or contexts is exactly what ML does best. A recently published journal paper demonstrates how to detect different contexts using smartphone sensors for context-adaptive navigation (Gao and Groves 2020). Sensors in smartphones are used in models trained by supervised ML to determine not only the environment but also the behavior (such as transportation modes, including being static, walking, and riding in a car or on a subway).

According to their results, the state-of-the-art detection algorithm can achieve over 95% accuracy for pedestrians in indoor, intermediate, and outdoor scenarios. This finding encourages the use of ML to intelligently select the right navigation systems for an integrated navigation system in different areas. The same methodology can easily be extended to vehicular applications with proper modification of the selection of features, classes, and machine learning algorithms.

Short-term Sensor Weighting

Technically speaking, an optimal integrated solution can be obtained if the uncertainty of each sensor can be optimally described. Presumably, a sensor's uncertainty remains unchanged in a given environment. As a result, most sensors' uncertainties are carefully calibrated before use in integration systems.

However, the problem is that the environment can change rapidly within a short period of time, for example, when a car drives through an urban area with several viaducts, or along an open-sky road that passes under a canopy of foliage. These scenarios greatly affect the performance of GNSS, yet the affected periods are too short to justify excluding GNSS from the subset of sensors used. The best defense against these unexpected, transient effects is de-weighting the affected sensors in the system.

Due to the complexity of these effects, adaptive tuning of the uncertainty based on ML is becoming popular. Our team demonstrated this potential with an experiment on a loosely coupled GNSS/INS integration. The experiment took place in an urban canyon with a commercial GNSS receiver and a MEMS INS. Different ML algorithms were used to classify the GNSS positioning errors into four classes: healthy, slightly shifted, inaccurate, and dangerous. These are represented as 1 to 4 at the bottom of Figure 4. The top and bottom of the figure show the error of the commercial GNSS solution and the classes predicted by the different ML algorithms. The figure clearly shows that ML can do a very good job of predicting the class of the GNSS solution, enabling the integrated system to allocate proper weighting to GNSS. Table 2 shows the improvement made by the ML-aided integration system.
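As a minimal sketch of how such predicted classes could feed a loosely coupled update, the code below scales the GNSS measurement covariance by the predicted health class before forming the standard Kalman gain. The four classes mirror the experiment above, but the scale factors and structure are illustrative assumptions, not the authors' tuned system.

```python
# ML-aided de-weighting in a loosely coupled GNSS/INS update: the predicted
# GNSS health class (1=healthy ... 4=dangerous) inflates the measurement
# covariance R, so unhealthy fixes pull the state estimate less.
import numpy as np

def gnss_update(x, P, z, health_class):
    """Kalman measurement update of a 3-state position estimate with a GNSS fix."""
    H = np.eye(3)                                   # GNSS observes position directly
    R_base = np.eye(3) * 25.0                       # nominal 5 m standard deviation
    scale = {1: 1.0, 2: 4.0, 3: 25.0, 4: 1e4}[health_class]  # illustrative factors
    R = R_base * scale                              # de-weight suspect fixes
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros(3), np.eye(3) * 100.0               # prior from INS propagation
x, P = gnss_update(x, P, z=np.array([3.0, -2.0, 1.0]), health_class=2)
print(x)   # the fix is partially trusted; class 4 would nearly ignore it
```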

This is just an example to show, preliminarily, the potential of ML for estimating and predicting sensor uncertainty. The methodology can also be applied to other sensor integrations, such as Wi-Fi/BLE/IMU integration. The challenge is that the trained classifier may be too specific to a certain area due to over-fitting of the data. This remains an open research question in the field.

Q: Machine Learning or Deep Learning for Navigation Systems?

Based on research in object recognition in computer science, deep learning (DL) is currently the mainstream method, because it generally outperforms ML when two conditions are fulfilled: data and computation. The trained model of DL is completely data-driven, while ML trains models to fit assumed (known) mathematical models. A rule of thumb for selecting ML or DL is the availability of the data in hand. If extensive and conclusive data are available, DL achieves excellent performance due to its superiority in data fitting. In other words, DL can automatically discover the features that affect the classes. However, a model trained by ML is much more comprehensible than one trained by DL; the DL model is like a black box. In addition, the nodes and layers of convolution in DL are used to extract features, and the numbers of layers and nodes are still very hard to determine, so trial-and-error approaches are widely adopted. These are the major challenges in DL.

If a DL-trained neural network could be perfectly designed for the integrated navigation system, it should address both the long-term and short-term challenges. Figure 5 shows this idea. Several hidden layers would be designed to predict the environments (or contexts), and the others to predict the sensor uncertainty. The idea is straightforward, whereas the challenges remain:

Are we confident that the data we used to train the classifier are representative enough of the general application cases?

What are the classes?

What are the features?

How many layers and the number of nodes should be used?

Q: How does machine learning affect the field of navigation?

ML will accelerate the development of seamless positioning. With the presence of ML in the navigation field, a perfect INS is no longer the only solution. These AI technologies facilitate the selection of the appropriate sensors or raw measurements (with appropriate trust) against complex navigation challenges. The transient selection of sensors (well known as plug-and-play) will affect the integration algorithm. Integration R&D engineers in navigation have long worked on the Kalman filter and its variants. However, the limited flexibility of the Kalman filter makes it hard to accommodate the plug-and-play of sensors. Graph optimization, widely used in the robotics field, could be a very strong candidate for integrating sensors for navigation purposes.

Beyond GNSS and the integrated navigation systems mentioned above, the visual positioning system (VPS) recently developed by Google could replace visual corner-point detection with semantic information detected by ML. Looking at how we navigated before GNSS, we compared visual landmarks with our memory (database) to infer where we were and where we were heading. ML can segment and classify images taken by a camera into different classes, including building, foliage, road, curb, etc., and compare the distribution of the semantic information with that stored in a database on a cloud server. If they match, the position and orientation tag associated with the database entry can be regarded as the user location.

AI technologies are coming, and they will influence navigation research and development. In my opinion, the best we can do is mobilize AI to tackle the challenges for which we currently lack solutions. It is highly probable that technology advances and research focus will depend greatly on ML's development and achievements in the field of navigation.

References

(1) Groves PD, Challenges of Integrated Navigation, ION GNSS+ 2018, Miami, Florida, pp. 3237-3264.

(2) Gao H, Groves PD. (2020) Improving environment detection by behavior association for context-adaptive navigation. NAVIGATION, 67:43-60. https://doi.org/10.1002/navi.349

(3) Sun R., Hsu L.T., Xue D., Zhang G., Washington Y.O., (2019) GPS Signal Reception Classification Using Adaptive Neuro-Fuzzy Inference System, Journal of Navigation, 72(3): 685-701.

(4) Hsu L.T. GNSS Multipath Detection Using a Machine Learning Approach, IEEE ITSC 2017, Yokohama, Japan.

(5) Yozevitch R., and Moshe BB. (2015) A robust shadow matching algorithm for GNSS positioning. NAVIGATION, 62.2: 95-109.

(6) Chen P.Y., Chen H., Tsai M.H., Kuo H.K., Tsai Y.M., Chiou T.Y., Jau P.H. Performance of Machine Learning Models in Determining the GNSS Position Usage for a Loosely Coupled GNSS/IMU System, ION GNSS+ 2020, virtually, September 21-25, 2020.

(7) Suzuki T., Nakano, Y., Amano, Y. NLOS Multipath Detection by Using Machine Learning in Urban Environments, ION GNSS+ 2017, Portland, Oregon, pp. 3958-3967.

(8) Xu B., Jia Q., Luo Y., Hsu L.T. (2019) Intelligent GPS L1 LOS/Multipath/NLOS Classifiers Based on Correlator-, RINEX-and NMEA-Level Measurements, Remote Sensing 11(16):1851.

(9) Chiu H.P., Zhou X., Carlone L., Dellaert F., Samarasekera S., and Kumar R., Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs, IEEE ICRA 2014, Hong Kong, China.

(10) Zhang G., Hsu L.T. (2018) Intelligent GNSS/INS Integrated Navigation System for a Commercial UAV Flight Control System, Aerospace Science and Technology, 80:368-380.

(11) Kumar R., Samarasekera S., Chiu H.P., Trinh N., Dellaert F., Williams S., Kaess M., Leonard J., Plug-and-Play Navigation Algorithms Using Factor Graphs, Joint Navigation Conference (JNC), 2012.

Excerpt from:
What are the roles of artificial intelligence and machine learning in GNSS positioning? - Inside GNSS

PathAI Present Machine Learning Models that Predict the Homologous Recombination Deficiency Status of Breast Cancer Biopsies at the 2020 SABCS – PR…

BOSTON (PRWEB) December 09, 2020

PathAI, a global provider of AI-powered technology applied to pathology research, today announced the results of a proof-of-concept investigation into ML model prediction of homologous recombination deficiency (HRD) status directly from H&E-stained biopsy slides. DNA damage repair pathways, such as homologous recombination, have essential roles in healthy cells, and mutations in these pathways are closely associated with an increased risk for cancer, as well as cancer progression. HRD results from mutations in BRCA1/2, as well as other genes that encode the homologous recombination components responsible for error-free repair of double-strand breaks in DNA. HRD tumors are sensitive to poly-ADP ribose polymerase (PARP) inhibitors and platinum-based chemotherapy, making determination of a patient's tumor HRD status clinically important. Genomic sequencing is currently the gold standard for classifying a tumor as HRD or homologous recombination proficient, but this method has a high error rate, leaving a great unmet need for robust and reliable HRD scoring tools.

"Identifying the underlying molecular drivers of cancer has tremendous significance not only for our fundamental understanding of the disease biology, but because these image-based assays may also play an important role in making patient treatment decisions in the future, like choosing the most effective therapeutic," said PathAI co-founder and Chief Executive Officer Andy Beck, MD, PhD. "Our ability to find these signatures in widely available H&E images suggests that our models could have a great impact, and we look forward to investigating this further and validating these results in future studies."

PathAI used two different approaches to predict the HRD status of a tumor from the H&E-stained tissue biopsy. Models were trained using breast cancer tumor biopsy images from TCGA and HRD scores of these same biopsies generated by Knijnenburg and colleagues (published in Cell Reports, 2018, 23:239-254). The Human Interpretable Features (HIF)-based model was trained using thousands of expert pathologist annotations of cell- and tissue-level features of the TCGA images to predict HRD status from HIF-based correlations, whereas the end-to-end model learned to predict HRD status directly from the biopsy image.

Both models predicted HRD with high accuracy, with the HIF-based model achieving an AUROC of 0.87, and the end-to-end model an AUROC of 0.80. The HIF-based model also revealed that HRD tumors have a greater degree of necrosis, and more lymphocytes within the tumor itself, than homologous recombination proficient tumors. These results show the enormous potential for digital pathology to identify clinically significant genomic phenotypes that could not be detected using traditional pathology methods. PathAI will continue to develop and validate these important models for future clinical application.
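For readers unfamiliar with the metric, the sketch below shows how an AUROC is computed with scikit-learn on synthetic stand-in data; the two features echo the tissue-level signals reported above (necrosis and intratumoral lymphocytes), but the data and model are simulated illustrations, not PathAI's.

```python
# AUROC in miniature: train a classifier on two synthetic slide-level
# features and measure how well its scores separate HRD from HR-proficient.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 800
necrosis = rng.beta(2, 8, n)          # stand-in: fraction of necrotic tissue
lymphocytes = rng.beta(2, 6, n)       # stand-in: intratumoral lymphocyte density
hrd = (1.5 * necrosis + 2.0 * lymphocytes
       + rng.normal(0, 0.3, n) > 0.8).astype(int)   # synthetic HRD labels

X = np.column_stack([necrosis, lymphocytes])
X_tr, X_te, y_tr, y_te = train_test_split(X, hrd, stratify=hrd, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))   # AUROC in [0.5, 1.0]
```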

About PathAI

PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit https://www.pathai.com.

Go here to read the rest:
PathAI Present Machine Learning Models that Predict the Homologous Recombination Deficiency Status of Breast Cancer Biopsies at the 2020 SABCS - PR...

Apple’s SVP of Machine Learning & AI John Giannandrea has been assigned to Oversee Apple’s Secretive ‘Project Titan’ – Patently Apple

Patently Apple has been covering the latest Project Titan patents for years, including a granted patent report posted this morning covering another side of LiDAR that was never covered before. While some in the industry have doubted that Apple will ever do anything with this project, Apple has now reportedly moved its self-driving car unit under the leadership of top artificial intelligence executive John Giannandrea, who will oversee the company's continued work on an autonomous system that could eventually be used in its own car.

Bloomberg's Mark Gurman is reporting today that Project Titan is run day-to-day by Doug Field. His team of hundreds of engineers has moved to Giannandrea's artificial intelligence and machine-learning group, according to people familiar with the change.

Previously, Field reported to Bob Mansfield, Apple's former senior vice president of hardware engineering. Mansfield has now fully retired from Apple, leading to Giannandrea taking over. Mansfield oversaw a shift from the development of a car to just the underlying autonomous system.

In 2017, Patently Apple posted a report titled "Apple's CEO Confirms Project Titan is the 'Mother of all AI Projects' Focused on Self-Driving Vehicles." For more read the full Bloomberg report.

As with all major Apple projects, be it a head-mounted display device, smartglasses, or folding devices, Apple keeps its secrets and prototypes under wraps until it has holistically worked out its roadmap.

That's why following Apple's patents is the best way to keep on top of the technology that Apple's engineers are actually working on in some capacity within the various ongoing projects. Review our Project Titan patent archive to see what Apple has been working on.

See the original post:
Apple's SVP of Machine Learning & AI John Giannandrea has been assigned to Oversee Apple's Secretive 'Project Titan' - Patently Apple