The Connection Between Astrology And Your Tesla AutoDrive – Forbes

Preamble: From time to time, I will introduce columns that explore seemingly outlandish concepts. The purpose is partly humor, but also to provoke some thought. Enjoy.

Zodiac signs inside of horoscope circle.

Historically, astrology has been a major component of the cultural life in many major civilizations. Significant events such as marriage, moving into a new home, or even travel were planned with astrology in mind. Even in modern times, astrological internet sites enjoy great success and the gurus of the art publish in major newspapers.

Of course, with the advent of scientific methods and formal education, astrology has rapidly lost favor in intellectual society. After all, what could possibly be the causal relationship between the movement of planets and whether someone will get a job promotion? As some have pointed out, even if there were a relationship, the configuration of the stars changes over time, so how could the predictions of the past possibly remain valid?

Pure poppycock. Right? Perhaps. Let's take a deeper look.

Let's consider the central technology at the apex of current intellectual achievement: machine learning. Machine learning is the engine underlying important technologies such as autonomous vehicles, including Tesla's AutoDrive. What is machine learning at its core? One looks at massive amounts of data and trains a computational engine (an ML engine). This ML engine is then used to make future predictions. Sometimes the training is done in a constrained manner, where one looks at particular items; other times, the training is left unconstrained. Machine learning and the associated field of artificial intelligence (AI) are at the forefront of computer science research. Indeed, as we have discussed in past articles, AI is considered to be the next big economic mega-driver in a vast number of markets. After looking at machine learning, an interesting thought comes to mind.
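The train-then-predict loop described above can be sketched in a few lines. This is a toy illustration with invented data, not anything resembling Tesla's actual pipeline: it "trains" a one-variable engine by least squares on past observations, then uses it to predict an unseen input.

```python
# Toy "training": fit y = a*x + b to historical data by least squares.
xs = [1.0, 2.0, 3.0, 4.0]   # past inputs (hypothetical data)
ys = [2.1, 3.9, 6.2, 7.8]   # past observed outcomes

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope and intercept.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    """Use the trained 'engine' on an input it has never seen."""
    return a * x + b

print(round(predict(5.0), 2))   # a prediction, with no causal story attached
```

Note that, exactly as the column argues, nothing in this loop knows *why* x relates to y; it only exploits the correlation in the training data.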

Was astrology really just machine learning done by humans?

Could the thought leaders from great civilizations have looked at large amounts of human behavioral data and used something very reasonable (planetary movements) to train the astrology engine? After all, what really is the difference between machine learning and astrology?

Marketing Chart Comparing Astrology and Machine Learning

Both astrology and machine learning have a concept of training. In astrology, the astrological signs are used as points of interest, and seemingly arbitrary connections are made to individual human circumstances. Even without an understanding of causality, the correlations can be somewhat true. In machine learning, data correlations are discovered, and there is no requirement of causation. This thought process is central to the machine learning paradigm and gives it much of its power. In fact, as the chart above shows, there are uncomfortably many parallels between astrology and machine learning.

What does this mean? Should we take machine learning a little less seriously? Certainly, some caution is warranted, but it appears clear that machine learning can provide utility.

So, what about astrology? Perhaps we should take it a bit more seriously.

If you enjoyed this article, you may also enjoy A Better Transportation Option Than A Tesla.

Read the original post:
The Connection Between Astrology And Your Tesla AutoDrive - Forbes

Think your smartwatch is good for warning of a heart attack? Turns out it’s surprisingly easy to fool its AI – The Register

Neural networks that analyse electrocardiograms can be easily fooled, mistaking a normal heartbeat reading for an irregular one, or vice versa, researchers warn in a paper published in Nature Medicine.

ECG sensors are becoming more widespread, embedded in wearable devices like smartwatches, while machine learning software is being increasingly developed to automatically monitor and process data to tell users about their heartbeats. The US Food and Drug Administration approved 23 algorithms for medical use in 2018 alone.

However, the technology isn't foolproof. Like all deep learning models, ECG ones are susceptible to adversarial attacks: miscreants can force algorithms to misclassify the data by manipulating it with noise.

A group of researchers led by New York University demonstrated this by tampering with the inputs to a deep convolutional neural network (CNN). First, they obtained a dataset containing 8,528 ECG recordings, each labelled as one of four classes: normal; atrial fibrillation, the most common type of irregular heartbeat; other; or noise.

The majority of the dataset, some 5,076 samples, were labelled normal; 758 fell into the atrial fibrillation category; 2,415 were classified as other; and 279 as noise. The researchers used 90 per cent of the dataset to train the CNN and the remaining 10 per cent to test the system.
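The 90/10 split described above is a standard move in machine learning; a minimal sketch (with a hypothetical stand-in record list, not the actual ECG data) might look like this:

```python
import random

# Hypothetical stand-in for the 8,528 labelled ECG recordings.
records = [{"id": i, "label": "normal"} for i in range(8528)]

random.seed(0)              # reproducible shuffle
random.shuffle(records)     # avoid ordering bias before splitting

split = int(len(records) * 0.9)   # 90 per cent for training
train_set, test_set = records[:split], records[split:]

print(len(train_set), len(test_set))   # 7675 853
```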

"Deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye," the researchers explained in the paper. (Here's the free preprint version of the paper on arXiv.)

To create these adversarial examples, the researchers added a small amount of noise to samples in the test set. The uniform peaks and troughs in an ECG reading may appear innocuous and normal to the human eye, but adding a small amount of interference was enough to trick the CNN into classifying them as atrial fibrillation, an irregular heartbeat linked to heart palpitations and an increased risk of stroke.
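As a toy illustration of the idea (not the paper's attack, which perturbs the input to a deep CNN), here is how a perturbation far too small to notice by eye can flip the output of a naive threshold classifier:

```python
# Hypothetical "normal" reading: a perfectly regular train of peaks.
signal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]

def classify(reading, threshold=1.05):
    """Naive rule: flag as irregular when peak-to-peak spread is too large."""
    spread = max(reading) - min(reading)
    return "irregular" if spread > threshold else "normal"

# Adversarial copy: one sample nudged by a visually negligible amount.
adversarial = list(signal)
adversarial[1] += 0.1

print(classify(signal), classify(adversarial))   # normal irregular
```

The real attack is more sophisticated (it uses the model's gradients to choose the noise), but the failure mode is the same: a decision boundary crossed by a perturbation a human would never notice.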

Here are two adversarial examples. The first shows an irregular atrial fibrillation (AF) reading misclassified as normal; the second shows a normal reading misclassified as irregular. Image Credit: Tian et al. and Nature Medicine.

When the researchers fed the adversarial examples to the CNN, 74 per cent of the readings that had originally been correctly classified were assigned to the wrong label. What was originally a normal reading then seemed irregular, and vice versa.

Luckily, humans are much more difficult to trick. Two clinicians were given pairs of readings, an original, unperturbed sample and its corresponding adversarial example, and asked whether either looked like it belonged to a different class. They thought only 1.4 per cent of the readings should have been labelled differently.

The heartbeat patterns in the original and adversarial samples looked similar to the human eye, and, therefore, it'd be fairly easy to tell if a normal heartbeat had been misclassified as irregular. In fact, both experts were able to tell the original reading from the adversarial one about 62 per cent of the time.

"The ability to create adversarial examples is an important issue, with future implications including robustness to the environmental noise of medical devices that rely on ECG interpretation - for example, pacemakers and defibrillators - the skewing of data to alter insurance claims and the introduction of intentional bias into clinical trials," the paper said.

It's unclear how realistic these adversarial attacks truly are in the real world, however. In these experiments, the researchers had full access to the model, making it easy to attack, but it's much more difficult for these types of attacks to work on, say, someone's Apple Watch.

The Register has contacted the researchers for comment. What the research does show, however, is that relying solely on machines may be unreliable, and that specialists really ought to double-check results when neural networks are used in clinical settings.

"In conclusion, with this work, we do not intend to cast a shadow on the utility of deep learning for ECG analysis, which undoubtedly will be useful to handle the volumes of physiological signals requiring processing in the near future," the researchers wrote.

"This work should, instead, serve as an additional reminder that machine learning systems deployed in the wild should be designed with safety and reliability in mind, with a particular focus on training data curation and provable guarantees on performance."



Tip: Machine learning solutions for journalists | Tip of the day – Journalism.co.uk

Much has been said about what artificial intelligence and machine learning can do for journalism: from understanding human ethics to predicting when readers are about to cancel their subscriptions.

Want to get hands-on with machine learning? Quartz investigative editor John Keefe provides 15 video lessons taken from the 'Hands-on Machine Learning Solutions for Journalists' online class he led through the Knight Center for Journalism in the Americas. It covers the techniques that the Quartz investigative team and AI studio commonly use in their journalism.

"Machine learning is particularly good at finding patterns and that can be useful to you when you're trying to search through text documents or lots of images," Keefe explained in the introduction video.

Want to learn more about using artificial intelligence in your newsroom? Join us on 4 June 2020 at our digital journalism conference Newsrewired at MediaCityUK, which will feature a workshop on implementing artificial intelligence into everyday journalistic work. Visit newsrewired.com for the full agenda and tickets.

If you like our news and feature articles, you can sign up to receive our free daily (Mon-Fri) email newsletter (mobile friendly).


Chilmark Research: The Promise of AI & ML in Healthcare Report – HIT Consultant

What You Need to Know:

A new Chilmark Research report reveals that artificial intelligence and machine learning (AI/ML) technologies are capturing the imagination of investors and healthcare organizations, and are poised to expand healthcare frontiers.

The latest report evaluates over 120 commercial AI/ML solutions in healthcare, explores future opportunities, and assesses obstacles to adoption at scale.

Interest and investment in healthcare AI/ML tools is booming, with approximately $4B in capital funding pouring into this healthcare sector in 2019. Such investment is spurring a vast array of AI/ML tools for providers, patients, and payers, accelerating the possibilities for new solutions to improve diagnostic accuracy, improve feedback mechanisms, and reduce clinical and administrative errors, according to Chilmark Research's latest report.

The Promise of AI & ML in Healthcare Report: Background

The report, The Promise of AI & ML in Healthcare, is the most comprehensive report published on this rapidly evolving market, with nearly 120 vendors profiled. The report explores opportunities, trends, and the rapidly evolving landscape for vendors, tracing the evolution from early AI/ML use in medical imaging to today's rich array of vendor solutions in medical imaging, business operations, clinical decision support, research and drug development, patient-facing applications, and more. The report also reviews types and applications of AI/ML, explores the substantial challenges of health data collection and use, and considers issues of bias in algorithms, ethical and governance considerations, cybersecurity, and broader implications for business.

Health IT vendors, new start-up ventures, providers, payers, and pharma firms now offer (or are developing) a wide range of solutions for an equally wide range of industry challenges. Our extensive research for this report found that nearly 120 companies now offer AI-based healthcare solutions in four main categories: hospital operations, clinical support, research and drug development, and patient/consumer engagement.

Report Key Themes

This report features an overview of these major areas of AI/ML use in healthcare. Solutions for hospital operations include tools for revenue cycle management, applications to detect fraud and ensure payment integrity, administrative and supply chain applications to improve hospital operations, and algorithms to boost patient safety. Population health management is an area ripe for AI/ML innovation, with predictive analytics solutions devoted to risk stratification, care management, and patient engagement.

A significant development is underway in AI/ML solutions for clinical decision support, including NLP- and voice-enabled clinical documentation applications, sophisticated AI-based medical imaging and pathology tools, and electronic health records management tools to mitigate provider burnout. AI/ML-enabled tools are optimizing research and drug development by improving clinical trials and patient monitoring, modeling drug simulations, and enabling precision medicine advancement. A wealth of consumer-facing AI/ML applications, such as chatbots, wearables, and symptom checkers, are available and in development.

Provider organizations will find this report offers deep insight into current and forthcoming solutions that can help support business operations, population health management, and clinical decision support. Current and prospective vendors of AI/ML solutions and their investors will find this report's overview of the current market valuable in mapping their own product strategy. Researchers and drug developers will benefit from the discussion of current AI/ML applications and future possibilities in precision medicine, clinical trials, drug discovery, and basic research. Providers and patient advocates will gain valuable insight into patient-facing tools currently available and in development.

All stakeholders in healthcare technology (providers, payers, pharmaceutical stakeholders, consultants, investors, patient advocates, and government representatives) will benefit from a thorough overview of current offerings as well as thoughtful discussions of bias in data collection and underlying algorithms, cybersecurity, governance, and ethical concerns.

For more information about the report, please visit https://www.chilmarkresearch.com/chilmark_report/the-promise-of-ai-and-ml-in-healthcare-opportunities-challenges-and-vendor-landscape/


Differentiating Boys with ADHD from Those with Typical Development Bas | NDT – Dove Medical Press

Yunkai Sun,1,2,* Lei Zhao,1,2,* Zhihui Lan,1,2 Xi-Ze Jia,1,2 Shao-Wei Xue1,2

1Center for Cognition and Brain Disorders, Institute of Psychological Sciences and the Affiliated Hospital, Hangzhou Normal University, Hangzhou 311121, People's Republic of China; 2Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou 311121, People's Republic of China

*These authors contributed equally to this work

Correspondence: Shao-Wei Xue, Center for Cognition and Brain Disorders, Hangzhou Normal University, No. 2318, Yuhangtang Road, Hangzhou, Zhejiang 311121, People's Republic of China. Tel/Fax: +86-571-28867717. Email: xuedrm@126.com

Purpose: In recent years, machine learning techniques have received increasing attention as a promising approach to differentiating patients from healthy subjects. Therefore, some resting-state functional magnetic resonance neuroimaging (R-fMRI) studies have used interregional functional connections as discriminative features. The aim of this study was to investigate ADHD-related spatially distributed discriminative features derived from whole-brain resting-state functional connectivity patterns using machine learning.

Patients and Methods: We measured the interregional functional connections of the R-fMRI data from 40 ADHD patients and 28 matched typically developing controls. Machine learning was used to discriminate ADHD patients from controls. Classification performance was assessed by permutation tests.

Results: The results from the model with the highest classification accuracy showed that 85.3% of participants were correctly identified using leave-one-out cross-validation (LOOV) with a support vector machine (SVM). The majority of the most discriminative functional connections were located within or between the cerebellum, default mode network (DMN) and frontoparietal regions. Approximately half of the most discriminative connections were associated with the cerebellum. The cerebellum, right superior orbitofrontal cortex, left olfactory cortex, left gyrus rectus, right superior temporal pole, right calcarine gyrus and bilateral inferior occipital cortex showed the highest discriminative power in classification. Regarding the brain-behaviour relationships, some functional connections between the cerebellum and DMN regions were significantly correlated with behavioural symptoms in ADHD (P < 0.05).

Conclusion: This study indicated that whole-brain resting-state functional connections might provide potential neuroimaging-based information for clinically assisting the diagnosis of ADHD.

Keywords: attention deficit hyperactivity disorder, ADHD, resting-state fMRI, R-fMRI, machine learning approach, support vector machine, SVM, leave-one-out cross-validation
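The leave-one-out procedure from the abstract can be sketched as follows. This toy version swaps the SVM for a simple nearest-centroid classifier and uses made-up one-dimensional "connectivity" features, so only the cross-validation loop itself reflects the study's method:

```python
# (feature, label) pairs: invented 1-D stand-ins for connectivity features.
data = [(0.20, "control"), (0.30, "control"), (0.25, "control"),
        (0.80, "adhd"), (0.90, "adhd"), (0.85, "adhd")]

def nearest_centroid(train, x):
    """Predict the class whose training-set mean is closest to x."""
    centroids = {}
    for label in {lbl for _, lbl in train}:
        vals = [v for v, lbl in train if lbl == label]
        centroids[label] = sum(vals) / len(vals)
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - x))

correct = 0
for i, (x, label) in enumerate(data):
    train = data[:i] + data[i + 1:]   # hold out exactly one participant
    if nearest_centroid(train, x) == label:
        correct += 1

accuracy = correct / len(data)
print(accuracy)   # 1.0 on this cleanly separated toy data
```

With only 68 participants, leave-one-out makes the most of scarce data, which is presumably why the authors chose it over a fixed train/test split.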

This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.


Secret document says WikiLeaks cable leaks disrupted tracking of nation-state hackers – TechCrunch

A previously secret document from 2010 warned that classified diplomatic cables published by WikiLeaks would likely result in observable changes in the tactics and techniques used by foreign spies, potentially making it easier to avoid detection by U.S. agencies.

The document, recently declassified through a Freedom of Information request by the nonprofit National Security Archive and shared with TechCrunch, reveals a rare glimpse inside U.S. Cyber Command, the military's main cyber-warfare unit, which feared that the leaked diplomatic cables of communications between U.S. foreign embassies would uncover and hamper its ongoing cyber operations.

Michael Martelle, a research fellow for the National Security Archive's Cyber Vault Project, said the subsequent publication of the cables by WikiLeaks gave the adversaries a period of heightened advantage.

The publication of the document comes almost exactly a decade after U.S. Army intelligence analyst Chelsea Manning downloaded and forwarded 750,000 classified cables to leak-publishing site WikiLeaks. Manning was subsequently sentenced to 35 years in prison for what was then the largest leak of U.S. classified material in its history. Her sentence was commuted by then-President Barack Obama in 2017.

Cyber Command wrote its findings in a so-called situational awareness report dated December 2010, just days after The New York Times and several other news outlets published the full cache of diplomatic cables, albeit with redactions to protect sources. The highly redacted assessment warned that the military cyber unit expected foreign intelligence services engaged in cyber-espionage against the U.S. to use the information published by WikiLeaks to their own advantage.

(Image: National Security Archive)

According to the assessment, the leaked cables clearly stated that U.S. government entities at the time had knowledge of specific tactics and techniques used by foreign adversaries, including the malware, toolsets, IP addresses, and domains used in intrusion activity.

It went on to warn that those same adversaries were expected to modify their current infrastructure and intrusion techniques to evade U.S. cyber-defenses.

(Image: National Security Archive)

Although the redactions in the declassified document make it unclear exactly which adversaries Cyber Command was referring to, Martelle said that only one specific adversary, China, was mentioned in the entire cache of unredacted documents, which WikiLeaks published a year later, much to the chagrin of the news outlets.

Just one month before the first cables were published, Google had publicly accused Beijing of launching targeted cyberattacks against its network. Several other companies, including antivirus maker Symantec and defense contractor Northrop Grumman, were also hit by the attacks, in an offensive cyber campaign that became known as Operation Aurora.

Google subsequently withdrew from China following the furor.

Cyber Command's assessment said that all Dept. of Defense divisions and U.S. intelligence agencies remain vigilant to anomalies amid fears that its adversaries will leverage this new information to further their cyber initiatives.

When reached, a spokesperson for Cyber Command did not comment. Google also did not comment. An email to WikiLeaks went unreturned. WikiLeaks founder Julian Assange is currently detained and awaiting extradition to the U.S. for publishing the classified cables.


Jury fails to decide whether former CIA engineer leaked secrets in WikiLeaks case | TheHill – The Hill

A Manhattan federal judge declared a mistrial Monday after a jury was unable to decide whether to convict computer engineer Joshua Schulte on charges of leaking CIA materials to WikiLeaks.

While the jury was hung on eight counts, including illegal gathering and transmission of national defense information, jurors convicted him on charges of contempt of court and making false statements to the FBI, The New York Times reported.

The jury reportedly deliberated for six days, with one juror dismissed after researching the case against the judge's orders. She was never replaced, leaving 11 people to reach the final verdict.

Jurors also raised concerns in a note about a second juror's attitude, saying she was not participating in discussions, the Times reported.

Prosecutors said Schulte, who resigned in November 2016, was motivated by resentment over his belief that the agency was disregarding his workplace complaints.

The government may retry Schulte, who also faces a separate federal trial over thousands of images and videos of child pornography allegedly discovered on electronic devices during a search of his home.

Schulte's attorneys argued that the vulnerabilities in the CIA's computer network were widely known, and that it could have been breached by other sources, pointing in particular to a CIA employee identified as Michael who was close friends with Schulte and left the office with him on the night of the alleged theft.

The government placed Michael on administrative leave for refusing to cooperate with the investigation, but it did not notify Schulte's defense of the action until six months later, before Michael served as a witness for the government, according to the Times.

"It shows their doubt about the case against Mr. Schulte," Schulte's lawyer, Sabrina Shroff, said in her closing argument.


US, Assange and WikiLeaks – Daily Times

The fragility of the US, the world's most powerful nation and said to be a model democracy, couldn't be more stark when one follows its hounding of Julian Assange, the founder of WikiLeaks, who had the temerity to expose the ugly and brutal side of the workings of US power, for instance, the killing of civilians in Iraq by aerial bombing for no real purpose.

In other words, it was in the public interest that such information should be made known, which WikiLeaks agreed to publish on its platform, as well as sharing most of it with respected newspapers such as The Guardian and The New York Times.

While Assange is being hounded for the material published on WikiLeaks, the newspapers that published that material are spared so far because, for some odd reason, they are regarded as practising responsible journalism, having redacted some of the material that might put the lives of some Western agents in danger. There is nothing so far to suggest that WikiLeaks jeopardised the lives of any agents.

In other words, newspapers like The Guardian and The New York Times were practising journalism, while Assange and his WikiLeaks were not in that business and hence not engaged in public interest journalism. Assange, therefore, was engaged in espionage when publishing the secret information, and hence accountable for the said offence of espionage.

Of course, the trial in the US on espionage charges will happen as and when the judicial process of extradition in the UK is completed, and Assange is found to be liable for proceedings in the US. It is important to note that the extradition treaty between the US and UK excludes political offences.

But the whole process seems to assume that Assange's leaking of the US documents was a criminal offence for which he is liable to face consequences in the US. In other words, it is essentially a political process rolled out as a criminal case, in which Assange is already viewed as having committed alleged acts of espionage against the US. To put it more bluntly, the UK judicial process seems tailored to hand over Assange to the US over a period, where he is said to be facing imprisonment of up to 175 years. It is simply vendetta dressed as justice, with the US proclaiming loudly that no matter what your citizenship status and/or the place of the alleged crime, the US will hunt you down. Even though President Barack Obama commuted Manning's sentence for supplying the documents to WikiLeaks, Assange must face the music for daring to reveal the US's ugly and brutal side.


Even before Assange faces justice in the US after the extradition process in the UK is completed, proceedings in the magistrates court pre-judge him as a criminal. Assange, according to reports, has been strip-searched and repeatedly handcuffed like some violent criminal, prevented from any communication with his legal team, thrown into solitary confinement, and the like. His treatment has been so abysmal that more than 60 British doctors have protested at his torture.

Assange is an Australian citizen, but his own government, being part of the US-led Five Eyes intelligence-sharing compact, which includes the US, Britain, Australia, Canada and New Zealand, is largely letting Assange face his destiny with the US justice system when he is delivered from his British nightmare into a US dungeon.

It was reported that a Trump associate/confidant had let it be known that if Assange would declare that his leaking of a trove of Hillary Clinton-related emails, towards the close of the presidential election, was not part of Russian interference to favour Trump, he might be pardoned. The Trump camp has denied any such deal.

One, however, wonders why Assange is such a criminal when, at the height of the election campaign, Trump, as a presidential candidate, openly encouraged WikiLeaks to come out with Hillary leaks, saying that he loved WikiLeaks. However sordid the whole WikiLeaks saga has been from the beginning, the sad thing is that the US's model democracy is hounding an individual for exposing the truth behind an image that was illusory.

The writer is a senior journalist and academic based in Sydney, Australia


Top 10 Unexpected Future Applications Of Quantum Computers …

Quantum computing is a major trend in computer science. It's jaw-dropping to think that it all started from observing the weird properties of light! There have been several pioneers in quantum computing, the main one being Richard Feynman, who explained that quantum computers are feasible and that they are the future of computing.

Quantum computers have existed for longer than you might think. The first quantum computation was carried out in 1997, using NMR on chloroform molecules.[1] Nowadays, we've been trying to slap the quantum buzzword on just about anything. Even then, there are still a few applications, in the endless list of quantum technologies, that are really mind-boggling.

Cancer is one of the leading causes of death around the world. In fact, according to a recent survey from the World Health Organization (WHO), respiratory cancers alone claimed 1.7 million lives in 2016. However, if cancer is recognized at an early stage, the chances of recovery through treatment are much higher. There are many ways cancer can be treated. One is to remove it by surgery; another is through radiotherapy.

Beam optimization is critical in radiotherapy, as it is important to make sure that the radiation damages as few healthy cells and tissues near the cancer region as possible. There have been many optimization methods for radiotherapy in the past that use classical computers. In 2015, researchers at the Roswell Park Cancer Institute came up with a new technique that uses quantum annealing computers, like the ones manufactured by D-Wave, to optimize radiotherapy three to four times faster than a regular computer.[2]

Many of us are familiar with waking up early and setting off for work, only to find a traffic jam waiting on the way. And then comes the terrifying feeling that you're going to be late for work. Google has been working on fixing this problem by monitoring traffic and suggesting alternative routes to its users. However, Volkswagen is taking it to another level with their research.

In a 2017 experiment, Volkswagen tried to tackle the issue of traffic, not through monitoring but rather by optimizing traffic flow itself. They used the Quadratic Unconstrained Binary Optimization (QUBO) technique with quantum annealing computers to find the optimal route for a select number of cars, given the possible routes in consideration.[3]

So far, they have tested this with 10,000 taxis in Beijing to show how their method can optimize traffic flow significantly faster than a classical computer. However, many people are skeptical of Volkswagen's claims, since they used a D-Wave quantum annealing computer to do the processing. Many scientists state that the quantum annealers D-Wave manufactures do not offer a speedup as significant as Volkswagen claims.
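A QUBO instance is just a matrix Q, and the task is to find the binary vector x that minimizes the "energy" sum of Q[i][j]*x[i]*x[j]. An annealer samples low-energy solutions; for a tiny invented instance (two "cars" and a penalty for sharing a congested route, numbers chosen purely for illustration) a classical brute force shows what the annealer searches for:

```python
from itertools import product

# Invented 2-variable QUBO: diagonal terms reward assigning each car a
# route; the +3 off-diagonal coupling penalizes both taking the same one.
Q = [[-1,  3],
     [ 0, -2]]

def energy(x, Q):
    """QUBO objective: sum over i, j of Q[i][j] * x[i] * x[j]."""
    return sum(Q[i][j] * x[i] * x[j]
               for i in range(len(x)) for j in range(len(x)))

# Brute force over all binary assignments (feasible only for tiny n).
best = min(product([0, 1], repeat=2), key=lambda x: energy(x, Q))
print(best, energy(best, Q))   # (0, 1) -2
```

The catch, and the reason annealers are interesting at all, is that the search space doubles with every added variable, so brute force stops being an option long before you reach 10,000 taxis.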

We have all been in a spot where the mobile data reception is excessively bad, and we'd rather just use that slow WiFi hotspot in the nearby coffee shop. Well, it seems that a company called Booz Allen Hamilton might just have found the solution to the horrible network coverage problem, with the help of quantum computers, of course!

In a 2017 publication, they suggested that optimal satellite coverage is pretty tough to figure out. This is because there are a lot of possible alignment combinations, and it is really hard to check all these combinations with classical computers.

The solution? They suggest that using the QUBO technique, as previously mentioned, with the help of D-Wave's quantum annealing computers, can help find the optimal satellite coverage positions required.[4] This would not mean that the satellites would be able to cover all the bad reception spots, but the likelihood of being able to find a spot with better reception can be increased significantly.

Molecular simulation has long been a crucial field in biology and chemistry, as it helps us understand the structure of molecules and how they interact with each other. It also helps us discover new molecules.

Although classical computers nowadays may be able to simulate these molecular dynamics, there is a limitation on the complexity of molecules in a given simulation. Quantum computers are able to effectively break this barrier. So far, they've only been used to simulate small molecules, like beryllium hydride (BeH2), for example. It might not seem like much, but the fact that it was simulated by a seven-qubit chip shows that if we had more qubits at our disposal, we might be able to run extremely complex molecular simulations.[5] This is because the state space a quantum computer can represent grows exponentially as the number of qubits increases.
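The exponential scaling is easy to see from the classical side: simulating an n-qubit state means storing 2^n complex amplitudes, so every added qubit doubles the memory a classical simulator needs. A quick back-of-the-envelope sketch:

```python
# An n-qubit state is described by 2**n complex amplitudes, so the classical
# memory needed to simulate it doubles with every qubit added.
for n in (7, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30  # ~16 bytes per complex amplitude
    print(f"{n} qubits: {amplitudes:.2e} amplitudes, ~{gib:.2e} GiB")
```

Seven qubits (128 amplitudes) fit anywhere; fifty qubits already demand millions of gibibytes, which is why classical simulation of chemistry hits a wall so quickly.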

Other hardware, like D-Wave's quantum annealing computers, has also been used by researchers to come up with simulation methods that might be just as good as, if not faster than, current methods.

Some of us might have heard of the scare about quantum computers being able to break cryptosystems such as RSA or DSA. This seems to be true for some cryptosystems, as they rely on the difficulty of factoring: the key is generated from secret prime factors. An algorithm called Shor's algorithm can be used by quantum computers to find the prime factors used to generate the key, and it can do so far more efficiently than any known classical method.
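The heart of Shor's algorithm is a reduction from factoring to order-finding: pick a random a, find the smallest r with a^r ≡ 1 (mod N), and with good probability gcd(a^(r/2) ± 1, N) reveals a factor of N. The sketch below finds the order by classical trial, which is the exponentially expensive step that the quantum part of Shor's algorithm replaces:

```python
from math import gcd
from random import randrange

def factor_via_order(N):
    """Factor N by reducing to order-finding, as Shor's algorithm does.
    The order is found here by slow classical trial; a quantum computer
    finds it exponentially faster, which is the entire speedup."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d  # lucky draw: a already shares a factor with N
        # Find the order r: the smallest r with a**r % N == 1.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:
                f = gcd(y - 1, N)
                if 1 < f < N:
                    return f  # a nontrivial factor of N

print(factor_via_order(15))  # prints 3 or 5
```

For a toy N like 15 this runs instantly, but the order-finding loop grows exponentially with the bit length of N, which is why RSA keys are safe from classical computers and not from a large quantum one.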

But what about cryptosystems that do not rely on prime numbers to generate keys? There is another algorithm, called Grover's algorithm, which might be used to brute-force a key faster than a classical computer could. However, this is not as big a speedup as Shor's algorithm offers (quadratic vs. exponential). This means we would need significantly faster quantum computers than the ones that currently exist to even attempt to break these cryptosystems.
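The practical effect of a quadratic speedup is that key sizes are effectively halved: brute-forcing an n-bit key takes roughly 2^n classical trials but only about 2^(n/2) Grover iterations. A quick comparison:

```python
from math import isqrt

# Grover's search needs ~sqrt(N) queries to find one item among N, so an
# n-bit key offers only ~n/2 bits of security against a quantum brute force.
for n in (64, 128, 256):
    classical = 2 ** n
    grover = isqrt(classical)  # exactly 2 ** (n // 2) for even n
    print(f"{n}-bit key: {classical:.1e} classical trials "
          f"vs {grover:.1e} Grover queries")
```

A 128-bit key thus retains only 64-bit security against Grover, which is why the usual advice is simply to double symmetric key lengths rather than abandon those ciphers.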

Even so, there are some cryptosystems that are believed to resist attack even by quantum computers. These are categorized within the field of post-quantum cryptography. Overall, though, it would seem that at least RSA, which is often used in digital signatures, would become obsolete.[6]

Artificial intelligence is a rapidly growing field in computer science. Scientists have been trying to make AI more humanlike by means of machine learning and neural networks. That may seem terrifying enough, but now add quantum computers to the concoction, and it is taken to a whole new level.

Neural networks run on matrix-based data sets, and the processing done in neural networks is computed by means of matrix algebra. Quantum computing, meanwhile, fundamentally works in the same terms: matrices are used to define and transform the quantum states of qubits.[7] Any computational process applied to a neural network would therefore resemble applying transformational quantum gates to qubits, which makes quantum computers seem like a natural fit for the neural networks used in AI.
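The parallel is concrete: a single-qubit state is a length-2 complex vector and a gate is a 2x2 unitary matrix, so "running" a quantum circuit is repeated matrix multiplication, the same primitive that drives neural-network layers. A minimal sketch with NumPy:

```python
import numpy as np

# A qubit state is a length-2 complex vector; a gate is a 2x2 unitary matrix.
ket0 = np.array([1.0, 0.0], dtype=complex)                   # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = H @ ket0            # equal superposition of |0> and |1>
probs = np.abs(state) ** 2  # measurement probabilities
print(probs)                # [0.5 0.5]
```

Chaining gates is just chaining matrix products, exactly as a forward pass through a network chains layer multiplications (minus the nonlinearities, since quantum evolution is linear).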

Not only that, but quantum computers can also help to significantly speed up machine learning compared to a classical computer. This is why Google has been investing in quantum computer research to improve Google AI by means of quantum hardware.

This is very different from post-quantum cryptography, as it is not meant to prevent quantum computers from breaking cryptosystems, though it does that, anyway. This type of cryptography uses the means of quantum mechanics itself. But how is it more versatile than other forms of cryptography?

Quantum cryptography mainly focuses on the key-distribution part of a cryptosystem. Here, a pair of entangled qubits is used: one is sent to the receiver, while the sender keeps the other. When one qubit of an entangled pair in superposition is measured, the outcome of the other is determined as well. Send a stream of these qubits, and you have a key usable for encryption.[8]

The best part is that eavesdropping becomes detectable: the qubits cannot be copied, and any attempt to measure them disturbs their state, so there are methods to determine whether a qubit has been tampered with before reaching the intended recipient. This makes it a robust method for cryptography, which is why scientists are still researching this field.
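A toy classical simulation of the sifting step illustrates the protocol's shape (this is an illustrative sketch, not a faithful quantum simulation): each entangled pair yields the same bit on both ends, and the parties keep only the rounds where their randomly chosen measurement bases happen to match.

```python
import random

def sift_key(n_pairs, seed=0):
    """Toy simulation of the basis-sifting step of quantum key distribution.
    Each entangled pair yields perfectly correlated bits when both parties
    measure in the same basis; mismatched-basis rounds are discarded."""
    rng = random.Random(seed)
    key = []
    for _ in range(n_pairs):
        bit = rng.randrange(2)          # outcome shared by the entangled pair
        alice_basis = rng.randrange(2)  # 0 = rectilinear, 1 = diagonal
        bob_basis = rng.randrange(2)
        if alice_basis == bob_basis:    # bases agree ~half the time
            key.append(bit)             # both sides keep the same bit
    return key

key = sift_key(100)
print(len(key), key[:8])  # roughly 50 bits survive sifting
```

In the real protocol the parties would additionally sacrifice a random subset of the sifted bits to check for the disturbance an eavesdropper's measurements would introduce.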

We've all had that time when we checked the weather forecast, and it said it was going to be a wonderful, sunny day. Then, only moments later, it starts to pour, and you didn't bring your umbrella. Well, it seems quantum computers might have a solution for that.

In 2017, a Russian researcher published a paper about the possibility of using quantum computers to predict the weather more accurately than classical computers. There are a few limitations with current computers in predicting all the intricate changes in weather.[9] This is because large amounts of data are involved, but quantum computers seem to offer a big speedup compared to classical means because of Dynamic Quantum Clustering (DQC) methodology, which is claimed to generate useful datasets that classical techniques cannot.

Even so, it must be noted that not even quantum computers can predict the weather with absolute accuracy, but at least it will be less likely that we will regret not bringing an umbrella on suspicious sunny days!

We all hate it when we search for an article, only to find it littered with advertisements, most of which don't even seem relevant! Luckily, Recruit Communications has found a solution for one of those two problems: the relevance of ads.

In their research, they explained how quantum annealing can help companies that want to advertise reach a wider range of people without spending too much. Quantum annealing can be used to match relevant advertisements to customers so that they're more likely to click on them.[10]

With all the speedup quantum computers offer in the computing field, one thing gamers might be curious about is whether they can be used to build a sweet gaming rig that runs games at blazing-high frame rates. The answer would be, "Sort of."

At this point, the field of quantum computing is still in its infancy, and current hardware hasn't yet reached quantum supremacy, the point at which quantum hardware can outperform the best classical computers (though the definition is still vague). This is partly because quantum algorithms work very differently from classical ones. Even so, quantum gaming still seems to be possible.

There have been a few games developed to utilize quantum computers. One of them, called Quantum Battleships, is based on the Battleship board game.[11] Furthermore, Microsoft has been working on a programming language called Q#, which uses both classical and quantum hardware to compute. It is also very similar to C#, which means it is quite possible to develop games in Q# that take advantage of quantum hardware. Maybe we'll have Call of Duty Q one day!

I am a small music producer from the UK with a newly acquired side hobby for writing articles!


Harnessing the power of light: A European history of photonics – EURACTIV

Europe has a long and rich history of harnessing the power of light to extend the technical and practical capacities of the human species. The modern-day utilisation of light for such means takes its form in the technology of photonics, and today, Europe's clout in the arena is formidable.

Currently, the continent ranks second only to China in the global photonics market, and projections estimate that the sector could attain a compound annual growth rate of 8.6% leading up to 2022.

While today photonics technologies are used in high-tech applications such as quantum computing, Internet of Things devices, wearables, self-driving cars, and healthcare technologies, the origin of Europe's relationship with light technologies stretches back millennia.

In order to unlock the technological possibilities of tomorrow, our ancestors first had to wrestle with the mystifying theoretical foundations of the material property known as light, and it fell to one of Europe's most dominant civilisations, the ancient Greeks, to first pursue this path.

One of the earliest influential documentations of material theories of light appeared in the mathematician Euclid's treatise on vision, whose earliest surviving manuscript dates from the 10th century.

Euclid postulated on the geometrical properties of light, leading him to conceptualise the law of reflection. Euclid, along with the Greek mathematician Ptolemy, subscribed to what is known as emission theory: the notion that visual perception occurs as a result of the eyes themselves emitting rays of light.

Inspired by Euclid's and Ptolemy's work, the Arab mathematician Ibn al-Haytham hypothesised that objects themselves radiate light.

The next most relevant development in photonics back in Europe came by way of Isaac Newton's work in the 17th century. Based on his renowned prism experiment, he concluded that light is a mixture of various colours with different refractivity, which eventually formed the basis for his Light Particle Theory as outlined in the 1704 title Opticks.

One of the main opponents of Newton's theory was the Dutch mathematician Christiaan Huygens, who, inspired by René Descartes' 1637 treatise Dioptrics, believed that light took the form of waves.

Planck and Einstein make the leap

But it finally fell upon Max Planck and then Albert Einstein to make the greatest scientific leaps in photonics research, and reveal the true nature of light.

Planck's contribution to the world of quantum physics was a momentous leap in the pursuit of photonics technologies. In 1900, Planck managed to find an association between the amount of energy a photon is able to carry and the frequency of the wave by which it travels, giving rise to the now-famous Planck's constant.
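That association is the Planck relation E = hf: a photon's energy equals its frequency multiplied by Planck's constant h. A quick illustrative calculation (the green-light frequency here is a rounded textbook value):

```python
# Planck's relation: a photon's energy is its frequency times Planck's
# constant, E = h * f.
h = 6.626e-34  # Planck's constant, in joule-seconds
f = 5.4e14     # frequency of green light, ~540 THz
E = h * f
print(f"{E:.2e} J")  # prints 3.58e-19 J
```

The minuteness of that number is the point: light delivers energy in discrete packets far too small for everyday experience to notice, which is why the quantum nature of light stayed hidden for so long.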

In 1905, Einstein published a paper refuting the commonly accepted proposition that a light beam is a wave travelling through space, contending instead that it is an amalgam of discrete wave packets, later dubbed photons, that each contain a quantity of energy. Through the photoelectric effect, the phenomenon of photons striking electrons, Einstein showed that light is made up of neither waves nor particles alone, but in fact both.

Einstein had settled the age-old debate on the material properties of light, and in so doing was awarded the 1921 Nobel Prize in Physics.

Einstein is the father of modern photonics technologies, and without his findings, many of the applications used across Europes optical industries would probably never have come into being.

In terms of European innovation, Einstein's work became fundamental to many later technological developments, including the Hungarian-British scientist Dennis Gabor's 1948 invention of holography, and more modern applications, such as research at the University of Regensburg in Germany into how laser-light pulses can be used in quantum computing.

Revolutionary potential

More broadly, from computer screens to lasers in healthcare devices and solar panels, from cameras in smartphones to optical fibre technologies, the revolutionary potential of photonics has been recognised by the European Commission as a Key Enabling Technology of the 21st century.

In this vein, a 2018 report by the European Investment Bank recognised the potential of photonics technologies to enrich and extend the capabilities of other next-generation applications, which, without Europes history in scientific research, would never have been possible.

"Deep tech applications such as artificial intelligence, big data, additive manufacturing, robotics, the Internet of Things (IoT), and autonomous driving will require faster, more reliable, more energy-efficient and more powerful photonics and semiconductor components," the report states.

"The success of Europe in this next wave of innovation will ultimately depend on photonics and semiconductor components."

With Europes valiant scientific excursions into the theory of light and photoelectric research being well-established, there are also those who have touted photonics as an area in which the wider political goals of the European Union can be pursued.

While the Von der Leyen Commission has been quick to employ the term "sovereignty" across the digital and data fields, there are those who believe that, amid the current global economic climate, Europe must place an emphasis on an industry that underpins the development of so many other technologies.

A recent paper entitled "Exploration of Photonics Markets", published by the industry lobby Photonics21, found that China's annual spending in photonics will hit €1 billion in 2020.

There are concerns that Europes well-established research in light technologies could fall by the wayside while larger global players commit to substantial investments.

A December 2018 letter penned by leading scientists in the field brought these concerns to the fore, highlighting the importance of photonics technologies playing a central role in the Digital and Industry section of the next Horizon budget 2021-2027.

Carlos Lee, director-general of the European Photonics Industry Consortium (EPIC), recently told EURACTIV that photonics technologies should be heralded as a European success story.

And, looking at the figures, it's hard to disagree. Estimates published by EPIC show that the photonics sector, made up predominantly of SMEs, features around 5,000 companies that have created more than 300,000 skilled jobs, with an annual turnover of €60 billion.

These fast-growing figures are a testament to Europes intellectual, scientific and philosophical history in theorising the properties of light, and how such a source can be harnessed to transform our technological landscape.

Only time will tell whether the continent will be able to distinguish itself further in this domain by ensuring that photonics remains at the forefront of the technological developments of tomorrow.

[Edited by Zoran Radosavljevic]
