D-Wave Announces Promotion of Dr. Alan Baratz to CEO – GlobeNewswire

BURNABY, British Columbia, Dec. 09, 2019 (GLOBE NEWSWIRE) -- D-Wave Systems Inc., the leader in quantum computing systems, software, and services, today announced that Dr. Alan Baratz will assume the role of chief executive officer (CEO), effective January 1, 2020. Baratz joined D-Wave in 2017 and currently serves as the chief product officer and executive vice president of research and development for D-Wave. He takes over from the retiring CEO, Vern Brownell.

Baratz's promotion to CEO follows the launch of Leap, D-Wave's quantum cloud service, in October 2018, and comes in advance of the mid-2020 launch of the company's next-generation quantum system, Advantage.

Baratz has driven the development, delivery, and support of all of D-Wave's products, technologies, and applications in recent years. He has over 25 years of experience in product development and bringing new products to market at leading technology companies and software startups. As the first president of JavaSoft at Sun Microsystems, Baratz oversaw the growth and adoption of the Java platform from its infancy to a robust platform supporting mission-critical applications in nearly 80 percent of Fortune 1000 companies. He has also held executive positions at Symphony, Avaya, Cisco, and IBM. He served as CEO and president of Versata, Zaplet, and NeoPath Networks, and as a managing director at Warburg Pincus LLC. Baratz holds a doctorate in computer science from the Massachusetts Institute of Technology.

"I joined D-Wave to bring quantum computing technology to the enterprise. Now more than ever, I am convinced that making practical quantum computing available to forward-thinking businesses and emerging quantum developers through the cloud is central to jumpstarting the broad development of in-production quantum applications," said Baratz, chief product officer and head of research and development. "As I assume the CEO role, I'll focus on expanding the early beachheads for quantum computing that exist in manufacturing, mobility, new materials creation, and financial services into real value for our customers. I am honored to take over the leadership of the company and work together with the D-Wave team as we begin to deliver real business results with our quantum computers."

The company also announced that CEO Vern Brownell has decided to retire at the end of the year in order to spend more time at his home in Boston with his family. Baratz will become CEO at that time. During Brownell's tenure, D-Wave developed four generations of commercial quantum computers, raised over $170 million in venture funding, and secured its first customers, including Lockheed Martin, Google and NASA, and Los Alamos National Laboratory. Brownell will continue to serve as an advisor to the board.

"There are very few moments in your life when you have the opportunity to build an entirely new market. My 10 years at D-Wave have been rich with breakthroughs, like selling the first commercial quantum computer. I am humbled to have been a part of building the quantum ecosystem," said Brownell, retiring D-Wave CEO. "Alan has shown tremendous leadership in our technology and product development efforts, and I am working with him to transition leadership of the entire business. This is an exciting time for quantum computing and an exciting time for D-Wave. I can't imagine a better leader than Alan at the helm for the next phase of bringing practical quantum computing to enterprises around the world."

"With cloud access and the development of more than 200 early applications, quantum computing is experiencing explosive growth. We are excited to recognize Alan's work in bringing Leap to market and building the next-generation Advantage system. And as D-Wave expands their Quantum-as-a-Service offerings, Alan's expertise with growing developer communities and delivering SaaS solutions to enterprises will be critical for D-Wave's success in the market," said Paul Lee, D-Wave board chair. "I want to thank Vern for his 10 years of contributions to D-Wave. He was central in our ability to be the first to commercialize quantum computers and has made important contributions not only to D-Wave, but also in building the quantum ecosystem."

About D-Wave Systems Inc.

D-Wave is the leader in the development and delivery of quantum computing systems, software, and services and is the world's first commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing for the world. We do this by delivering customer value with practical quantum applications for problems as diverse as logistics, artificial intelligence, materials sciences, drug discovery, cybersecurity, fault detection, and financial modeling. D-Wave's systems are being used by some of the world's most advanced organizations, including Volkswagen, DENSO, Lockheed Martin, USRA, USC, Los Alamos National Laboratory, and Oak Ridge National Laboratory. With headquarters near Vancouver, Canada, D-Wave's US operations are based in Palo Alto, CA and Bellevue, WA. D-Wave has a blue-chip investor base including PSP Investments, Goldman Sachs, BDC Capital, DFJ, In-Q-Tel, PenderFund Capital, 180 Degree Capital Corp., and Kensington Capital Partners Limited. For more information, visit: http://www.dwavesys.com.

Contact: D-Wave Systems Inc., dwave@launchsquad.com


There's No Such Thing As The Machine Learning Platform – Forbes

In the past few years, you might have noticed the increasing pace at which vendors are rolling out platforms that serve the AI ecosystem, namely addressing data science and machine learning (ML) needs. The Data Science Platform and Machine Learning Platform are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others who manage AI projects and initiatives. If you're a major technology vendor and you don't have some sort of big play in the AI space, then you risk rapidly becoming irrelevant. But what exactly are these platforms, and why is there such an intense market share grab going on?

At the core of this push is the realization that ML and data science projects are nothing like typical application or hardware development projects. Whereas hardware and software development have traditionally focused on the functionality of systems or applications, data science and ML projects are really about managing data, continuously evolving the learning gleaned from data, and iterating data models based on that constant evolution. Typical development processes and platforms simply don't work from a data-centric perspective.

It should be no surprise, then, that technology vendors of all sizes are focused on developing platforms that data scientists and ML project managers will depend on to develop, run, operate, and manage their ongoing data models for the enterprise. To these vendors, the ML platform of the future is like the operating system, cloud environment, or mobile development platform of the past and present. If you can dominate market share for data science / ML platforms, you will reap rewards for decades to come. As a result, everyone with a dog in this fight is scrambling to own a piece of this market.

However, what does a Machine Learning platform look like? How is it the same as or different from a Data Science platform? What are the core requirements for ML platforms, and how do they differ from those of more general data science platforms? Who are the users of these platforms, and what do they really want? Let's dive deeper.

What is the Data Science Platform?

Data scientists are tasked with wrangling useful information from a sea of data and translating business and operational informational needs into the language of data and math. Data scientists need to be masters of statistics, probability, mathematics, and algorithms that help to glean useful insights from huge piles of information. A data scientist creates data hypotheses, runs tests and analyses of the data, and then translates the results so that someone else in the organization can easily view and understand them. So it follows that a pure data science platform would meet the needs of helping craft data models, determining the best fit of information to a hypothesis, testing that hypothesis, facilitating collaboration amongst teams of data scientists, and helping to manage and evolve the data model as information continues to change.

Furthermore, data scientists don't focus their work in code-centric Integrated Development Environments (IDEs), but rather in notebooks. First popularized by academically-oriented math-centric platforms like Mathematica and Matlab, but now prominent in the Python, R, and SAS communities, notebooks are used to document data research and simplify reproducibility of results by allowing the notebook to run on different source data. The best notebooks are shared, collaborative environments where groups of data scientists can work together and iterate models over constantly evolving data sets. While notebooks don't make great environments for developing code, they make great environments to collaborate, explore, and visualize data. Indeed, the best notebooks are used by data scientists to quickly explore large data sets, assuming sufficient access to clean data.

However, data scientists can't perform their jobs effectively without access to large volumes of clean data. Extracting, cleaning, and moving data is not really the role of a data scientist, but rather that of a data engineer. Data engineers are challenged with taking data from a wide range of systems, in structured and unstructured formats, data which is usually not clean, with missing fields, mismatched data types, and other data-related issues. In this sense, a data engineer is an engineer who designs, builds, and arranges data. Good data science platforms also enable data scientists to easily leverage compute power as their needs grow. Instead of copying data sets to a local computer to work on them, platforms allow data scientists to easily access compute power and data sets with minimal hassle. A data science platform is challenged with the need to provide these data engineering capabilities as well. As such, a practical data science platform will have elements of data science capabilities and the necessary data engineering functionality.

What is the Machine Learning Platform?

We just spent several paragraphs talking about data science platforms without even once mentioning AI or ML. Of course, the overlap is the use of data science techniques and machine learning algorithms applied to large sets of data for the development of machine learning models. The tools that data scientists use on a daily basis have significant overlap with the tools used by ML-focused scientists and engineers. However, these tools aren't the same, because the needs of ML scientists and engineers are not the same as those of more general data scientists and engineers.

Rather than just focusing on notebooks and the ecosystem to manage and work collaboratively with others on those notebooks, those tasked with managing ML projects need access to the range of ML-specific algorithms, libraries, and infrastructure to train those algorithms over large and evolving datasets. An ideal ML platform helps ML engineers, data scientists, and engineers discover which machine learning approaches work best, tune hyperparameters, deploy compute-intensive ML training across on-premise or cloud-based CPU, GPU, and/or TPU clusters, and provide an ecosystem for managing and monitoring both unsupervised and supervised modes of training.

Clearly a collaborative, interactive, visual system for developing and managing ML models in a data science platform is necessary, but it's not sufficient for an ML platform. As hinted above, one of the more challenging parts of making ML systems work is the setting and tuning of hyperparameters. The whole concept of a machine learning model is that it requires various parameters to be learned from the data: what machine learning actually learns are parameters of the data, and new data is then fit against that learned model. Hyperparameters, by contrast, are configurable values, set prior to training an ML model, that can't be learned from the data. These hyperparameters control factors such as model complexity, speed of learning, and more. Different ML algorithms require different hyperparameters, and some don't need any at all. ML platforms help with the discovery, setting, and management of hyperparameters, among other things, including algorithm selection and comparison, that non-ML-specific data science platforms don't provide.
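To make the hyperparameter idea concrete, here is a minimal sketch of the kind of search an ML platform automates, using scikit-learn's GridSearchCV; the dataset and parameter grid are illustrative choices of mine, not anything from the article:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hyperparameters: chosen before training, never learned from the data.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}

# Try every combination with 5-fold cross-validation and keep the best.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("best hyperparameters:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```

An ML platform does essentially this at scale: distributing the trials across clusters, tracking every run, and comparing algorithms as well as hyperparameters.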

The different needs of big data, ML engineering, model management, and operationalization

At the end of the day, ML project managers simply want tools to make their jobs more efficient and effective. But not all ML projects are the same. Some are focused on conversational systems, while others are focused on recognition or predictive analytics. Yet others are focused on reinforcement learning or autonomous systems. Furthermore, these models can be deployed (or operationalized) in various different ways. Some models might reside in the cloud or on on-premise servers, while others are deployed to edge devices or run in offline batch modes. These differences in ML application, deployment, and needs between data scientists, engineers, and ML developers make the concept of a single ML platform not particularly feasible. It would be a jack of all trades and master of none.

As such, we see four different platforms emerging. One focused on the needs of data scientists and model builders, another focused on big data management and data engineering, yet another focused on model scaffolding and building systems to interact with models, and a fourth focused on managing the model lifecycle: ML Ops. The winners will focus on building out capabilities for each of these parts.

The Four Environments of AI (Source: Cognilytica)

The winners in the data science platform race will be the ones that simplify ML model creation, training, and iteration. They will make it quick and easy for companies to move from dumb, unintelligent systems to ones that leverage the power of ML to solve problems that previously could not be addressed by machines. Data science platforms that don't enable ML capabilities will be relegated to non-ML data science tasks. Likewise, those big data platforms that inherently enable data engineering capabilities will be winners. Similarly, application development tools will need to treat machine learning models as first-class participants in their lifecycle, just like any other form of technology asset. Finally, the space of ML operations (ML Ops) is just now emerging and will no doubt be big news in the next few years.

When a vendor tells you they have an AI or ML platform, the right response is to ask, "Which one?" As you can see, there isn't just one ML platform, but rather different ones that serve very different needs. Make sure you don't get caught up in the marketing hype of some of these vendors: compare what they say they have with what they actually have.


Israelis develop ‘self-healing’ cars powered by machine learning and AI – The Jerusalem Post

Even before autonomous vehicles become a regular sight on our streets, modern cars are quickly coming to resemble sophisticated computers on wheels.

Increasingly connected vehicles come with as many as 150 million lines of code, far exceeding the 145,000 lines of code required to land Apollo 11 on the Moon in 1969. Self-driving cars could require up to one billion lines of code.

For manufacturers, passengers and repair shops alike, vehicles running on software rather than just machines represent an unprecedented world of highly complex mobility. Checking the engine, tires and brakes to find a fault will certainly no longer suffice.

Seeking to build trust in the new generation of automotive innovation, Tel Aviv-based start-up Aurora Labs has developed software for what it calls the "self-healing car": a proactive and remote system to detect and fix potential vehicle malfunctions, and update and validate in-car software without any downtime.

(From left) Aurora Labs co-founder & CEO Zohar Fox; co-founder & COO Ori Lederman; and EVP Marketing Roger Ordman (Credit: Aurora Labs)

"The automotive industry is facing its biggest revolution to date," Aurora Labs co-founder and chief operating officer Ori Lederman told The Jerusalem Post. "The most critical aspect of all that sophistication and software coming into the car is whether you can trust it, even before you hand over complete autonomy to the car. It poses a lot of challenges to car-makers."

New challenges, Lederman added, include whether software problems can be detected after selling the vehicle, whether problems can be solved safely and securely, and whether defects can be solved without interrupting car use. In 2018, some eight million vehicles were recalled in the United States due to software-based defects alone.

"The human body can detect when something is not quite right before you pass out," said executive vice president of marketing Roger Ordman. "The auto-immune system indicates something is wrong and what can be done to fix it: raise your temperature or white blood count. Sometimes the body can do a self-fix, and sometimes that's not enough and needs an external intervention.

"Our technology has the same kind of approach: detecting if something has started to go wrong before it causes a catastrophic failure, indicating exactly where that problem is, doing something to fix it, and keeping it running smoothly."

The company's Line-Of-Code Behavior technology, powered by machine learning and artificial intelligence, creates a deep understanding of the software installed on over 100 vehicle Engine Control Units (ECUs), and the relationships between them. In addition to detecting software faults, the technology can enable remote, over-the-air software updates without any downtime.

Similar to silent updates automatically implemented by smartphone applications, Ordman added, car manufacturers will be able to update and continuously improve software running on connected vehicles. Of course, manufacturers will be required to meet stringent regulations, developed by bodies including the UNECE, concerning cybersecurity and over-the-air updates.

"When we joined forces and started developing the idea, we knew our technology was applicable to any connected, smart device or Internet of Things device," said Lederman. "The first vertical we wanted to start with is the one that needs us the most, and the biggest market. The need for detecting, managing, recovering and being transparent about software is by far the largest need in the automotive industry as they move from mechanical parts to virtual systems run by lines of code."

Rather than requiring mass recalls, Aurora Labs' self-healing software will be able to apply short-term fixes to ensure continued functionality and predictability, and subsequently implement comprehensive upgrades to the vehicle's systems.

The company, which has raised $11.5 million in fund-raising rounds since it was founded in 2016 by Lederman and CEO Zohar Fox, is currently working to implement its technology with some of the world's leading automotive industry players, including major car-makers in Germany, the United States, Korea and Japan.

The fast-growing start-up also has offices in Michigan and the North Macedonian capital of Skopje, and owns a subsidiary near Munich.

"Customers ought to start being aware of how sophisticated their cars are," said Lederman. "When they buy a new car, they should want to ask the dealership whether it has the ability to detect, fix and recover, so they don't need to go to the dealership. It's something they would want to have."

Just as the safety performance of cars in Europe is ranked according to the five-star NCAP standard, Ordman believes there should be an additional star for software safety and security.

"There should be as many self-healing systems in place as possible to enable that, when inevitably something does go wrong, there are systems in place to detect and fix them and maintain uptime," said Ordman.

"Does the software running in the vehicle have the right cybersecurity in place? Does it have the right recovery technologies in place? Can it continuously and safely improve over time?

"With these functionalities, you're not just dealing with five stars of the physical but adding another star for the software safety and security. It is about giving trust to the consumer: I'm getting a car that will safeguard me and my family as I move forward."


The challenge in Deep Learning is to sustain the current pace of innovation, explains Ivan Vasilev, machine learning engineer – Packt Hub

If we talk about recent breakthroughs in the software community, machine learning and deep learning are major contenders: the usage, adoption, and experimentation of deep learning has increased exponentially. Especially in the areas of computer vision, speech, and natural language processing and understanding, deep learning has made unprecedented progress. GANs, variational autoencoders, and deep reinforcement learning are also creating impressive AI results.

To know more about the progress of deep learning, we interviewed Ivan Vasilev, a machine learning engineer and researcher based in Bulgaria. Ivan is also the author of the book Advanced Deep Learning with Python, in which he teaches advanced deep learning topics like attention mechanisms, meta-learning, graph neural networks, memory-augmented neural networks, and more, using the Python ecosystem. In this interview, he shares his experiences working on the book, compares TensorFlow and PyTorch, and talks about computer vision, NLP, and GANs.

Computer vision and natural language processing are two popular areas where a number of developments are ongoing. In his book, Advanced Deep Learning with Python, Ivan delves deep into these two broad application areas. "One of the reasons I emphasized computer vision and NLP," he clarifies, "is that these fields have a broad range of real-world commercial applications, which makes them interesting for a large number of people."

The other reason for focusing on computer vision, he says, "is because of the natural (or human-driven, if you wish) progress of deep learning. One of the first modern breakthroughs was in 2012, when a solution based on a convolutional network won the ImageNet competition of that year with a large margin compared to any previous algorithms. Thanks in part to this impressive result, the interest in the field was renewed and brought many other advances, including solving complex tasks like object detection and new generative models like generative adversarial networks. In parallel, the NLP domain saw its own wave of innovation with things like word vector embeddings and the attention mechanism."

Two popular machine learning frameworks are currently at par: TensorFlow and PyTorch (both had new releases in the past month, TensorFlow 2.0 and PyTorch 1.3). There is an ongoing debate that pitches TensorFlow and PyTorch as rival technologies and communities. Ivan does not think there is a clear winner between the two libraries, which is why he has included both in the book.

He explains, "On the one hand, it seems that the API of PyTorch is more streamlined and the library is more popular with the academic community. On the other hand, TensorFlow seems to have better cloud support and enterprise features. In any case, developers will only benefit from the competition. For example, PyTorch has demonstrated the importance of eager execution, and TensorFlow 2.0 now has much better support for eager execution, to the point that it is enabled by default. In the past, TensorFlow had internal competing APIs, whereas now Keras is promoted as its main high-level API. On the other hand, PyTorch 1.3 has introduced experimental support for iOS and Android devices and quantization (computation operations with reduced precision for increased efficiency)."
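As a minimal illustration of the eager-execution point (my own sketch, not from the interview), TensorFlow 2.x runs operations immediately by default:

```python
import tensorflow as tf  # TensorFlow 2.x

# Eager execution is on by default in TF 2.x: operations run immediately
# and return concrete values, much as they do in PyTorch.
print(tf.executing_eagerly())  # True

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y.numpy())  # the result is available at once; no Session, no graph
```

In TensorFlow 1.x the same computation would have required building a graph and running it inside a Session, which is the usability gap PyTorch exposed.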

Ivan discusses his venture into the field of financial machine learning, being the author of an ML-oriented, event-based algorithmic trading library. However, financial machine learning (and stock price prediction in particular) is usually not in the focus of mainstream deep learning research. "One reason," Ivan states, "is that the field isn't as appealing as, say, computer vision or NLP. At first glance, it might even appear gimmicky to predict stock prices."

He adds, "Another reason is that quality training data isn't freely available and can be quite expensive to obtain. Even if you have such data, pre-processing it in an ML-friendly way is not a straightforward process, because the noise-to-signal ratio is a lot higher compared to images or text. Additionally, the data itself could have huge volume."

However, he counters, "using ML in finance could have benefits, besides the obvious (getting rich by trading stocks). The participation of ML algorithms in the stock trading process can make the markets more efficient. This efficiency will make it harder for market imbalances to stay unnoticed for long periods of time. Such imbalances will be corrected early, thus preventing painful market corrections, which could otherwise lead to economic recessions."

Ivan has also given special emphasis to generative adversarial networks in his book. Although extremely useful, GANs have in recent times been used to generate high-dimensional fake data that looks very convincing. Many researchers and developers have raised concerns about the negative repercussions of GANs and wondered whether it is even possible to prevent and counter their misuse.

Ivan acknowledges that GANs may have unintended outcomes, but that shouldn't be the sole reason to discard them. He says, "Besides great entertainment value, GANs have some very useful applications and could help us better understand the inner workings of neural networks. But as you mentioned, they can be used for nefarious purposes as well. Still, we shouldn't discard GANs (or any algorithm with similar purpose) because of this. If only because the bad actors won't discard them. I think the solution to this problem lies beyond the realm of deep learning. We should strive to educate the public on the possible adverse effects of these algorithms, but also on their benefits. In this way we can raise the awareness of machine learning and spark an honest debate about its role in our society."

Awareness and ethics go in parallel. Ethics is one of the most important topics to emerge in machine learning and artificial intelligence over the last year. Ivan agrees that ethics and algorithmic bias in machine learning are of extreme importance. He says, "We can view the potential harmful effects of machine learning as either intentional or unintentional. For example, the bad actors I mentioned when we discussed GANs fall into the intentional category. We can limit their influence by striving to keep the cutting edge of ML research publicly available, thus denying them any unfair advantage of potentially better algorithms. Fortunately, this is largely the case now and hopefully will remain that way in the future."

"I don't think algorithmic bias is necessarily intentional," he says. "Instead, I believe that it is the result of the underlying injustices in our society, which creep into ML through either skewed training datasets or unconscious bias of the researchers. Although the bias might not be intentional, we still have a responsibility to put in a conscious effort to eliminate it."

"The field of ML exploded (in a good sense) a few years ago," says Ivan, "thanks to a combination of algorithmic and computer hardware advances. Since then, researchers have introduced new, smarter, and more elegant deep learning algorithms. But history has shown that AI can generate such great hype that even the impressive achievements of the last few years could fall short of the expectations of the general public."

"So, in a broader sense, the challenge in front of ML is to sustain the current pace of innovation. In particular, current deep learning algorithms fall short in some key intelligence areas, where humans excel. For example, neural networks have a hard time learning multiple unrelated tasks. They also tend to perform better when working with unstructured data (like images), compared to structured data (like graphs)."

"Another issue is that neural networks sometimes struggle to remember long-distance dependencies in sequential data. Solving these problems might require new fundamental breakthroughs, and it's hard to give an estimate for such one-time events. But even at the current level, ML can fundamentally change our society (hopefully for the better). For instance, in the next 5 to 10 years, we could see the widespread introduction of fully autonomous vehicles, which have the potential to transform our lives."

This is just a snapshot of some of the important focus areas in the deep learning ecosystem. You can check out more of Ivan's work in his book Advanced Deep Learning with Python. In this book you will investigate and train CNN models with GPU-accelerated libraries like TensorFlow and PyTorch. You will also apply deep neural networks to state-of-the-art domains like computer vision problems, NLP, GANs, and more.

Ivan Vasilev started working on the first open source Java deep learning library with GPU support in 2013. The library was acquired by a German company, where he continued its development. He has also worked as a machine learning engineer and researcher in the area of medical image classification and segmentation with deep neural networks. Since 2017 he has focused on financial machine learning. He is working on a Python-based platform that provides the infrastructure to rapidly experiment with different ML algorithms for algorithmic trading. You can find him on LinkedIn and GitHub.


The Afghanistan papers: The criminality and disaster of a war based upon lies – World Socialist Web Site

10 December 2019

The publication Monday by the Washington Post of interviews with senior US officials and military commanders on the nearly two-decade-old US war in Afghanistan has provided a damning indictment of both the criminality and the abject failure of an imperialist intervention conducted on the basis of lies.

The Post obtained the raw interviews after a three-year Freedom of Information Act court battle. While initially they were not secret, the Obama administration moved to classify the documents after the newspaper sought to obtain them.

The interviews were conducted between 2014 and 2018 in a "Lessons Learned" project initiated by the office of the Special Inspector General for Afghanistan Reconstruction (SIGAR). The project was designed to review the failures of the Afghanistan intervention with the aim of preventing their repetition the next time US imperialism seeks to carry out an illegal invasion and occupation of an oppressed country.

SIGAR's director, John Sopko, freely admitted to the Post that the interviews provided irrefutable evidence that "the American people have constantly been lied to" about the war in Afghanistan.

What emerges from the interviews, conducted with more than 400 US military officers, special forces operatives, officials from the US Agency for International Development (USAID) and senior advisers to both US commanders in Afghanistan and the White House, is an overriding sense of failure tinged with bitterness and cynicism. Those who participated had no expectation that their words would be made public.

Douglas Lute, a retired Army lieutenant general who served as the Afghanistan war czar under the administrations of both George W. Bush and Barack Obama, told his government interviewers in 2015: "If the American people knew the magnitude of this dysfunction... 2,400 [American] lives lost. Who will say this war was in vain?"

Stephen Hadley, the White House national security adviser under Bush, was even more explicit in his admission of US imperialism's debacle in Afghanistan, and elsewhere. He told his SIGAR interviewers that Washington had no "post-stabilization model that works," adding that this had been proven not only in Afghanistan, but in Iraq as well. "Every time we have one of these things, it is a pickup game. I don't have any confidence that if we did it again, we would do any better."

Ryan Crocker, who served as Washington's senior man in Kabul under both Bush and Obama, told SIGAR: "Our biggest single project, sadly and inadvertently, of course, may have been the development of mass corruption. Once it gets to the level I saw, when I was out there, it's somewhere between unbelievably hard and outright impossible to fix it."

This corruption was fed by vast expenditures by the US government on Afghanistan's supposed reconstruction: $133 billion, more than Washington spent, adjusted for inflation, on the entire Marshall Plan for the reconstruction of Western Europe after the Second World War. As the interviews make clear, this money went largely into the pockets of corrupt Afghan politicians and contractors and to fund projects that were neither needed nor wanted by the Afghan people.

The US National Endowment for Democracy's former senior program officer for Afghanistan told his interviewers that Afghans with whom he had worked were in favor of a socialist or communist approach "because that's how they remembered things the last time the system worked," i.e., before the 1980s CIA-backed Islamist insurgency that toppled a Soviet-backed government and unleashed a protracted civil war that claimed the lives of over a million people. He also blamed the failure of US reconstruction efforts on a dogmatic adherence to free-market principles.

An Army colonel who advised three top US commanders in Afghanistan told the interviewers that, by 2006, the US-backed puppet government in Kabul had "self-organized into a kleptocracy."

US military personnel engaged in what has supposedly been a core mission of training Afghan security forces to be able to fight on their own to defend the corrupt US-backed regime in Kabul were scathing in their assessments.

A special forces officer told interviewers that the Afghan police whom his troops had trained were "awful: the bottom of the barrel in the country that is already at the bottom of the barrel," estimating that one third of the recruits were drug addicts or Taliban. Another US adviser said that the Afghans he worked with "reeked of jet fuel" because they were constantly smuggling it out of the base to sell on the black market.

Faced with the continuing failure of its attempts to quell the insurgency in Afghanistan and create a viable US-backed regime and army, US officials lied. Every president and his top military commanders, from Bush to Obama to Trump, insisted that progress was being made and the US was winning the war, or, as Trump put it during his lightning Thanksgiving trip in and out of Afghanistan, was "victorious on the battlefield."

The liars in the White House and the Pentagon demanded supporting lies from those on the ground in Afghanistan. "Surveys, for instance, were totally unreliable, but reinforced that everything we were doing was right and we became a self-licking ice cream cone," an Army counterinsurgency adviser to the Afghanistan commanders told SIGAR.

A National Security Council official explained that every reversal was spun into a sign of progress: "For example, attacks are getting worse? 'That's because there are more targets for them to fire at, so more attacks are a false indicator of instability.' Then, three months later, attacks are still getting worse? 'It's because the Taliban are getting desperate, so it's actually an indicator that we're winning.'" The purpose of these lies was to justify the continued deployment of US troops and the continued carnage in Afghanistan.

Today, the carnage is only escalating. According to the United Nations, last year 3,804 Afghan civilians were killed in the war, the highest number since the UN began counting casualties over a decade ago. US airstrikes have also been rising to an all-time high, killing 579 civilians in the first 10 months of this year, a third more than in 2018.

The lies exposed by the SIGAR interviews have been echoed by a pliant corporate media that has paid scant attention to the longest war in US history. The most extensive exposure of US war crimes in Afghanistan came in 2010, based on some 91,000 secret documents provided by the courageous US Army whistleblower Chelsea Manning to WikiLeaks. Julian Assange, the founder of WikiLeaks, is now being held in Britain's maximum security Belmarsh Prison facing extradition to the United States on Espionage Act charges that carry a penalty of life imprisonment or worse for the crime of exposing these war crimes. Manning is herself imprisoned in a US federal detention center in Virginia for refusing to testify against Assange.

On October 9, 2001, two days after Washington launched its now 18-year-long war on Afghanistan, and amid a furor of war propaganda from the US government and the corporate media, the World Socialist Web Site posted a statement titled "Why we oppose the war in Afghanistan." It exposed the lie that this was a war for justice and the security of the American people against terrorism, and insisted that "the present action by the United States is an imperialist war" in which Washington aimed to establish a new political framework within which it would exert hegemonic control over not only Afghanistan, but over the broader region of Central Asia, home to the second largest deposit of proven reserves of petroleum and natural gas in the world.

The WSWS stated at the time: "The United States stands at a turning point. The government admits it has embarked on a war of indefinite scale and duration. What is taking place is the militarization of American society under conditions of a deepening social crisis.

"The war will profoundly affect the conditions of the American and international working class. Imperialism threatens mankind at the beginning of the twenty-first century with a repetition, on a more horrific scale, of the tragedies of the twentieth. More than ever, imperialism and its depredations raise the necessity for the international unity of the working class and the struggle for socialism."

These warnings have been borne out entirely by the criminal and tragic events of the last 18 years, even as the Washington Post now finds itself compelled to admit the bankruptcy of the entire sordid intervention in Afghanistan that it previously supported.

The US debacle in Afghanistan is only the antechamber of a far more dangerous eruption of US militarism, as Washington shifts its global strategy from the "war on terrorism" to preparation for war against its great power rivals, in the first instance, nuclear-armed China and Russia.

Opposition to war and the defense of democratic rights, posed most sharply in the fight for the freedom of Julian Assange and Chelsea Manning, must be guided by a global strategy that consciously links this fight to the growing eruption of social struggles of the international working class against capitalist exploitation and political oppression.

Bill Van Auken



Could quantum computing be the key to cracking congestion? – SmartCitiesWorld

The technology has helped to improve congestion by 73 per cent in scenario-testing

Ford and Microsoft are using quantum-inspired computing technology to reduce traffic congestion. Through a joint research pilot, scientists have used the technology to simulate thousands of vehicles and their impact on congestion in the US city of Seattle.

Ford said it is still early in the project but encouraging progress has been made and it is further expanding its partnership with the tech giant.

The companies teamed up in 2018 to develop new quantum approaches, running on classical computers already available, to help reduce Seattle's traffic congestion.

Writing in a blog post on Medium.com, Dr Ken Washington, chief technology officer at Ford Motor Company, explained that during rush hour, numerous drivers request the shortest possible routes at the same time, but current navigation services handle these requests "in a vacuum": they do not take into consideration the number of similar incoming requests, including areas where other drivers are all planning to share the same route segments, when delivering results.

What is required is a more balanced routing system that could manage all the various route requests from drivers and provide optimised route suggestions, reducing the number of vehicles on a particular road.
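To see why balanced routing helps, consider a toy model (my own illustration; Ford and Microsoft's actual formulation is a quantum-inspired optimization over thousands of vehicles, not this brute-force search). Travel time on each road grows with the number of cars on it; "selfish" routing sends every driver down the nominally shortest road, while a balanced router splits the fleet to minimize total travel time:

```python
# Toy "selfish" vs "balanced" routing. All numbers are invented for
# illustration only.

def travel_time(base, per_car, cars):
    # Travel time rises linearly with the number of cars on the road.
    return base + per_car * cars

N = 1000  # vehicles requesting routes at the same time

# Route A: nominally faster but congests quickly; Route B: slower but wide.
# Selfish routing: every driver, queried "in a vacuum", picks Route A.
selfish_total = N * travel_time(10, 0.05, N)

# Balanced routing: split the fleet to minimize total travel time.
balanced_total = min(
    n * travel_time(10, 0.05, n) + (N - n) * travel_time(20, 0.01, N - n)
    for n in range(N + 1)
)

print(f"selfish:  {selfish_total:,.0f} car-minutes")   # 60,000
print(f"balanced: {balanced_total:,.0f} car-minutes")  # ~26,250
```

Even in this two-road toy, coordinating the fleet cuts total travel time by more than half; the real problem, with thousands of vehicles and ten route choices each, is the combinatorial blow-up the quantum-inspired solver is aimed at.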


Traditional computers don't have the computational power to do this but, as Washington explained, in a quantum computer information is processed by a quantum bit (or qubit), which can simultaneously exist "in two different states" before it gets measured.

"This ultimately enables a quantum computer to process information with a faster speed," he wrote. "Attempts to simulate some specific features of a quantum computer on non-quantum hardware have led to quantum-inspired technology: powerful algorithms that mimic certain quantum behaviours and run on specialised conventional hardware. That enables organisations to start realising some benefits before fully scaled quantum hardware becomes available."

Working with Microsoft, Ford tested several different possibilities, including a scenario involving as many as 5,000 vehicles, each with 10 different route choices available to them, simultaneously requesting routes across Metro Seattle. It reports that in 20 seconds, balanced routing suggestions were delivered to the vehicles, resulting in a 73 per cent improvement in total congestion when compared to "selfish" routing.

The average commute time, meanwhile, was also cut by eight per cent, representing an annual reduction of more than 55,000 hours across this simulated fleet.

Based on these results, Ford is expanding its partnership with Microsoft to further improve the algorithm and understand its effectiveness in more real-world scenarios.

"For example, will this method still deliver similar results when some streets are known to be closed, if route options aren't equal for all drivers, or if some drivers decide not to follow suggested routes?" wrote Washington. "These and more are all variables we'll need to test for to ensure balanced routing can truly deliver tangible improvements for cities."



ProBeat: AWS and Azure are generating uneasy excitement in quantum computing – VentureBeat

Quantum is having a moment. In October, Google claimed to have achieved a quantum supremacy milestone. In November, Microsoft announced Azure Quantum, a cloud service that lets you tap into quantum hardware providers Honeywell, IonQ, or QCI. Last week, AWS announced Amazon Braket, a cloud service that lets you tap into quantum hardware providers D-Wave, IonQ, and Rigetti. At the Q2B 2019 quantum computing conference this week, I got a sense of how the nascent industry is feeling.

Binary digits (bits) are the basic units of information in classical computing, while quantum bits (qubits) make up quantum computing. Bits are always in a state of 0 or 1, while qubits can be in a state of 0, 1, or a superposition of the two. Quantum computing leverages qubits to perform computations that would be much more difficult for a classical computer. Potential applications are so vast and wide (from basic optimization problems to machine learning to all sorts of modeling) that interested industries span finance, chemistry, aerospace, cryptography, and more. But it's still so early that the industry is nowhere close to reaching consensus on what the transistor for qubits should look like.
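For the code-inclined, the bit/qubit distinction can be sketched in a few lines of numpy; this is the standard textbook state-vector formalism, not any particular vendor's API:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit's state is a pair of complex
# amplitudes (a, b) with |a|^2 + |b|^2 = 1; measurement yields 0 with
# probability |a|^2 and 1 with probability |b|^2.
zero = np.array([1, 0], dtype=complex)   # the state |0>
one = np.array([0, 1], dtype=complex)    # the state |1>

plus = (zero + one) / np.sqrt(2)         # an equal superposition of both

print(np.abs(plus) ** 2)  # [0.5 0.5]: a 50/50 chance of reading 0 or 1
```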

Currently, your cloud quantum computing options are limited to single hardware providers, such as those from D-Wave and IBM. Amazon and Microsoft want to change that.

Enterprises and researchers interested in testing and experimenting with quantum are excited because they will be able to use different quantum processors via the same service, at least in theory. They're uneasy, however, because the quantum processors are so fundamentally different that it's not clear how easy it will be to switch between them. D-Wave uses quantum annealing, Honeywell and IonQ use ion trap devices, and Rigetti and QCI use superconducting chips. Even the technologies that are the same have completely different architectures.

Entrepreneurs and enthusiasts are hopeful that Amazon and Microsoft will make it easier to interface with the various quantum hardware technologies. They're uneasy, however, because Amazon and Microsoft have not shared pricing and technical details. Plus, some of the quantum providers offer their own cloud services, so it will be difficult to suss out when it makes more sense to work with them directly.

The hardware providers themselves are excited because they get exposure to massive customer bases. Amazon and Microsoft are the world's biggest and second-biggest cloud providers, respectively. They're uneasy, however, because the tech giants are really just middlemen, which of course poses its own problems of costs and reliance.

At least right now, it looks like this will be the new normal. Even hardware providers that haven't announced they are partnering with Amazon and/or Microsoft, like Xanadu, are in talks to do just that.

Overall at the event, excitement trumped uneasiness. If you're participating in a domain as nascent as quantum, you must be optimistic. The news this quarter all happened very quickly, but there is still a long road ahead. After all, these cloud services have only been announced. They still have to become available, gain exposure, pick up traction, become practical, prove useful, and so on.

The devil is in the details. How much are these cloud services for quantum going to cost? Amazon and Microsoft haven't said. When exactly will they be available in preview or in beta? Amazon and Microsoft haven't said. How will switching between different quantum processors work in practice? Amazon and Microsoft haven't said.

One thing is clear. Everyone at the event was talking about the impact of the two biggest cloud providers offering quantum hardware from different companies. The clear winners? Amazon and Microsoft.

ProBeat is a column in which Emil rants about whatever crosses him that week.


Quantum expert Robert Sutor explains the basics of Quantum Computing – Packt Hub

What if we could do chemistry inside a computer instead of in a test tube or beaker in the laboratory? What if running a new experiment was as simple as running an app and having it completed in a few seconds?

For this to really work, we would want it to happen with complete fidelity. The atoms and molecules as modeled in the computer should behave exactly like they do in the test tube. The chemical reactions that happen in the physical world would have precise computational analogs. We would need a completely accurate simulation.

If we could do this at scale, we might be able to compute the molecules we want and need.

These might be for new materials for shampoos or even alloys for cars and airplanes. Perhaps we could more efficiently discover medicines that are customized to your exact physiology. Maybe we could get a better insight into how proteins fold, thereby understanding their function, and possibly creating custom enzymes to positively change our body chemistry.

Is this plausible? We have massive supercomputers that can run all kinds of simulations. Can we model molecules in the above ways today?

This article is an excerpt from the book Dancing with Qubits written by Robert Sutor. Robert helps you understand how quantum computing works and delves into the math behind it with this quantum computing textbook.

Let's start with C8H10N4O2, otherwise known as 1,3,7-trimethylxanthine.

This is a very fancy name for a molecule that millions of people around the world enjoy every day: caffeine. An 8-ounce cup of coffee contains approximately 95 mg of caffeine, and this translates to roughly 2.95 × 10^20 molecules. Written out, this is

295,000,000,000,000,000,000 molecules.
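That count follows from ordinary chemistry arithmetic. As a quick check (the molar mass of caffeine and the Avogadro constant are standard reference values, not from the excerpt):

```python
# Quick check of the 2.95e20 figure: 95 mg of caffeine divided by
# caffeine's molar mass, times Avogadro's number.
AVOGADRO = 6.022e23            # molecules per mole
MOLAR_MASS_CAFFEINE = 194.19   # g/mol for C8H10N4O2

grams = 0.095                  # 95 mg
molecules = grams / MOLAR_MASS_CAFFEINE * AVOGADRO
print(f"{molecules:.2e}")      # ~2.95e+20
```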

A 12-ounce can of a popular cola drink has 32 mg of caffeine, the diet version has 42 mg, and energy drinks often have about 77 mg.

These numbers are large because we are counting physical objects in our universe, which we know is very big. Scientists estimate, for example, that there are between 10^49 and 10^50 atoms in our planet alone.

To put these values in context, one thousand = 10^3, one million = 10^6, one billion = 10^9, and so on. A gigabyte of storage is one billion bytes, and a terabyte is 10^12 bytes.

Getting back to the question I posed at the beginning of this section, can we model caffeine exactly on a computer? We don't have to model the huge number of caffeine molecules in a cup of coffee, but can we fully represent a single molecule at a single instant?

Caffeine is a small molecule, containing protons, neutrons, and electrons. If we just look at the energy configuration that determines the structure of the molecule and the bonds that hold it all together, the amount of information needed to describe this is staggering. Specifically, the number of bits, the 0s and 1s, needed is approximately 10^48:

1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.

And this is just one molecule! Yet somehow nature manages to deal quite effectively with all this information. It handles the single caffeine molecule, all those in your coffee, tea, or soft drink, and every other molecule that makes up you and the world around you.

How does it do this? We don't know! Of course, there are theories, and these live at the intersection of physics and philosophy. However, we do not need to understand it fully to try to harness its capabilities.

We have no hope of providing enough traditional storage to hold this much information. Our dream of exact representation appears to be dashed. This is what Richard Feynman meant in his quote: "Nature isn't classical."

However, 160 qubits (quantum bits) could hold 2^160 ≈ 1.46 × 10^48 bits while the qubits were involved in a computation. To be clear, I'm not saying how we would get all the data into those qubits, and I'm also not saying how many more we would need to do something interesting with the information. It does give us hope, however.
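That 2^160 figure is easy to verify in a couple of lines of Python:

```python
# 160 qubits give a state space of 2^160 complex amplitudes.
print(2 ** 160)           # 1461501637330902918203684832716283019655932542976
print(f"{2 ** 160:.3e}")  # ~1.462e+48, matching the ~10^48 bits above
```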

In the classical case, we will never fully represent the caffeine molecule. In the future, with enough very high-quality qubits in a powerful quantum computing system, we may be able to perform chemistry on a computer.

I can write a little app on a classical computer that can simulate a coin flip. This might be for my phone or laptop.

Instead of heads or tails, let's use 1 and 0. The routine, which I call R, starts with one of those values and randomly returns one or the other. That is, 50% of the time it returns 1 and 50% of the time it returns 0. We have no knowledge whatsoever of how R does what it does.

When you see R, think "random." This is called a fair flip. It is not weighted to slightly prefer one result over the other. Whether we can produce a truly random result on a classical computer is another question. Let's assume our app is fair.

If I apply R to 1, half the time I expect 1 and the other half 0. The same is true if I apply R to 0. I'll call these applications R(1) and R(0), respectively.

If I look at the result of R(1) or R(0), there is no way to tell if I started with 1 or 0. This is just like a secret coin flip, where I can't tell whether I began with heads or tails just by looking at how the coin has landed. By "secret coin flip," I mean that someone else has flipped it and I can see the result, but I have no knowledge of the mechanics of the flip itself or the starting state of the coin.

If R(1) and R(0) are randomly 1 and 0, what happens when I apply R twice?

I write this as R(R(1)) and R(R(0)). It's the same answer: a random result with an equal split. The same thing happens no matter how many times we apply R. The result is random, and we can't reverse things to learn the initial value.

Quantum computing gives us an operation, call it H, that behaves like a reversible version of R: applying H twice in a row returns the value you started with. There is a catch, though. You are not allowed to look at the result of what H does if you want to reverse its effect. If you apply H to 0 or 1, peek at the result, and apply H again to that, it is the same as if you had used R. If you observe what is going on in the quantum case at the wrong time, you are right back at strictly classical behavior.

To summarize using the coin language: if you flip a quantum coin and then don't look at it, flipping it again will yield the heads or tails with which you started. If you do look, you get classical randomness.
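Here is a small numpy sketch of that behavior (my own illustration, using the standard Hadamard matrix as the quantum coin flip H):

```python
import numpy as np

rng = np.random.default_rng()

# Classical flip R: the outcome forgets the input entirely.
def R(bit):
    return int(rng.integers(0, 2))

print(R(R(0)))  # random 0 or 1; applying R twice is still random

# Quantum coin H: the Hadamard gate acting on a 2-entry state vector.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])  # the state |0>

def measure(state):
    """Peek at the coin: collapse to 0 or 1 with the Born probabilities."""
    return int(rng.choice(2, p=np.abs(state) ** 2))

# Flip twice WITHOUT peeking: H(H|0>) is |0> again (up to rounding).
print(H @ (H @ zero))  # [1. 0.]

# Flip, peek, then flip again: the peek collapses the state, so the
# second flip is 50/50, exactly the classical behavior of R.
outcome = measure(H @ zero)
state = np.array([1.0, 0.0]) if outcome == 0 else np.array([0.0, 1.0])
print(measure(H @ state))  # random 0 or 1
```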

A second area where quantum is different is in how we can work with simultaneous values. Your phone or laptop uses bytes as individual units of memory or storage. That's where we get phrases like "megabyte," which means one million bytes of information.

A byte is further broken down into eight bits, which we've seen before. Each bit can be a 0 or 1. Doing the math, each byte can represent 2^8 = 256 different numbers composed of eight 0s or 1s, but it can only hold one value at a time. Eight qubits can represent all 256 values at the same time.

This is through superposition, but also through entanglement, the way we can tightly tie together the behavior of two or more qubits. This is what gives us the (literally) exponential growth in the amount of working memory.

Artificial intelligence and one of its subsets, machine learning, are extremely broad collections of data-driven techniques and models. They are used to help find patterns in information, learn from the information, and automatically perform more intelligently. They also give humans help and insight that might have been difficult to get otherwise.

Here is a way to start thinking about how quantum computing might be applicable to large, complicated, computation-intensive systems of processes such as those found in AI and elsewhere. These three cases are, in some sense, the small, medium, and large ways quantum computing might complement classical techniques.

As I write this, quantum computers are not big data machines. This means you cannot take millions of records of information and provide them as input to a quantum calculation. Instead, quantum may be able to help where the number of inputs is modest but the computations blow up as you start examining relationships or dependencies in the data.

In the future, however, quantum computers may be able to input, output, and process much more data. Even if it is just theoretical now, it makes sense to ask if there are quantum algorithms that can be useful in AI someday.

To summarize, we explored how quantum computing works and different applications of artificial intelligence in quantum computing.

Get the quantum computing book Dancing with Qubits by Robert Sutor today, in which he explores the inner workings of quantum computing. The book entails some sophisticated mathematical exposition and is therefore best suited for those with a healthy interest in mathematics, physics, engineering, and computer science.


See the rest here:

Quantum expert Robert Sutor explains the basics of Quantum Computing - Packt Hub

Will quantum computing overwhelm existing security tech in the near future? – Help Net Security

More than half (54%) of cybersecurity professionals have expressed concerns that quantum computing will outpace the development of other security tech, according to research from Neustar.

Keeping a watchful eye on developments, 74% of organizations admitted to paying close attention to the technology's evolution, with 21% already experimenting with their own quantum computing strategies.

A further 35% of experts claimed to be in the process of developing a quantum strategy, while just 16% said they were not yet thinking about it. This shift in focus comes as the vast majority (73%) of cybersecurity professionals expect advances in quantum computing to overcome legacy technologies, such as encryption, within the next five years.

Almost all respondents (93%) believe the next-generation computers will overwhelm existing security technology, with just 7% under the impression that true quantum supremacy will never happen.

Despite expressing concerns that other technologies will be overshadowed, 87% of CISOs, CSOs, CTOs and security directors are excited about the potential positive impact of quantum computing. The remaining 13% were more cautious and under the impression that the technology would create more harm than good.

At the moment, we rely on encryption, which is possible to crack in theory, but impossible to crack in practice, precisely because it would take so long to do so, over timescales of trillions or even quadrillions of years, said Rodney Joffe, Chairman of NISC and Security CTO at Neustar.
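
Joffe's timescales can be sanity-checked with simple back-of-the-envelope arithmetic. The key size and guess rate below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope: how long a classical brute-force key search takes.
# The key size and guess rate are hypothetical, chosen for illustration.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

key_bits = 128                # e.g., an AES-128 key
guesses_per_second = 1e12     # a very generous classical attacker

years = 2 ** key_bits / guesses_per_second / SECONDS_PER_YEAR
print(f"{years:.2e} years")   # about 1.1e19 years to try every key
```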

Without the protective shield of encryption, a quantum computer in the hands of a malicious actor could launch a cyberattack unlike anything we've ever seen.

For both today's major attacks, and also the small-scale, targeted threats that we are seeing more frequently, it is vital that IT professionals begin responding to quantum immediately.

The security community has already launched a research effort into quantum-proof cryptography, but information professionals at every organization holding sensitive data should have quantum on their radar.

Quantum computing's ability to solve our great scientific and technological challenges will also be its ability to disrupt everything we know about computer security. Ultimately, IT experts of every stripe will need to work to rebuild the algorithms, strategies, and systems that form our approach to cybersecurity, added Joffe.

The report also highlighted a steep two-year increase on the International Cyber Benchmarks Index. Calculated based on changes in the cybersecurity landscape, including the impact of cyberattacks and the changing level of threat, November 2019 saw the highest score yet at 28.2. In November 2017, the benchmark sat at just 10.1, demonstrating an 18-point increase over the last couple of years.

During September-October 2019, security professionals ranked system compromise as the greatest threat to their organizations (22%), with DDoS attacks and ransomware following very closely behind (21%).

Originally posted here:

Will quantum computing overwhelm existing security tech in the near future? - Help Net Security

How quantum computing is set to impact the finance industry – IT Brief New Zealand

Attempting to explain quantum computing through a comparison between quantum and classical computing is like comparing the world wide web to a typewriter; there's simply next to no comparison.

That's not to say the typewriter doesn't have its own essential and commercially unique uses. It's just not the same.

However, explaining the enormous impact quantum computing could have, if it is successfully rolled out and becomes globally accessible, is a bit easier.

Archer Materials Limited (ASX:AXE) CEO Dr Mohammad Choucair outlined the impact quantum computing could have on the finance industry.

In an address to shareholders and academics, Dr Choucair outlined that the global financial assets market is estimated to be worth trillions, and I'm sure it comes as no surprise that any capability to optimise one's investment portfolio or capitalise on market volatility would be of great value to banks, governments and everyone in the audience.

Traders currently use algorithms to understand and, to a degree, predict the value movement in these markets. An accessible and operating quantum chip would provide immeasurable improvements to these algorithms, along with the machine learning that underpins them.

Archer is a materials technology-focused company that integrates the materials pulled from the ground with converging materials-based technologies that have the capability to impact global industries, including computing and electric vehicles.

The potential for global consumer and business accessibility to quantum computing is the key differentiator between Archer Materials Ltd. and some of the other players in the market.

The company's 12CQ qubit, invented by Dr Choucair, is potentially capable of storing quantum information at room temperature.

As a result, the 12CQ chip could be thrown onto the motherboard of the everyday laptop, or tablet if you're tech-savvy, and operate in coexistence with a classical CPU.

This doesn't mean the everyday user can now go and live out a real-world, real-time simulation of The Matrix.

But it does mean that the laptop you have in your new European leather tote could potentially perform extremely complex calculations to protect digital financial and communication transactions.

To head up the 12CQ Project, Archer hired Dr Martin Fuechsle, a quantum physicist who is by no means new to the high-performing Australian quantum tech industry.

In fact, Dr Fuechsle invented the world's first single-atom transistor and offers over 10 years' experience in the design, fabrication and integration of quantum devices.

Archer has moved quickly over the last 12 months and landed some significant 12CQ milestones, including the first-stage assembly of the nanoscale qubit processor chip and the accurate positioning of the qubit componentry with nanoscale precision. Both are key success factors for the commercial and technological readiness of the room-temperature chip.

Most recently, Archer announced the successful and scalable assembly of qubit array components of the 12CQ room-temperature qubit processor. Commenting on the success, Dr Choucair announced: This excellent achievement advances our chip technology development towards a minimum viable product and strengthens our commercial readiness by providing credibility to the claim of 12CQ chips being potentially scalable.

To build an array of a few qubits in less than a year means we are well and truly on track in our development roadmap taking us into 2020.

The Archer team has commercial agreements in place with the University of Sydney to access the facilities they need to build chip prototypes at the Research and Prototype Foundry within the world-class, $150 million purpose-built Sydney Nanoscience Hub facility.

Read the original:

How quantum computing is set to impact the finance industry - IT Brief New Zealand