AI could help with the next pandemic, but not with this one – MIT Technology Review

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients, including various governments, hospitals, and businesses, to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.

BlueDot wasn't alone. An automated service called HealthMap at Boston Children's Hospital also caught those first signs. As did a model run by Metabiota, based in San Francisco. That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives.


But how much has AI really helped in tackling the current outbreak? That's a hard question to answer. Companies like BlueDot are typically tight-lipped about exactly who they provide information to and how it is used. And human teams say they spotted the outbreak the same day as the AIs. Other projects, in which AI is being explored as a diagnostic tool or used to help find a vaccine, are still in their very early stages. Even if they are successful, it will take time, possibly months, to get those innovations into the hands of the health-care workers who need them.

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases, that AI is a powerful new weapon against disease, is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions such as drug programs. It's also bad for the field itself: inflated expectations followed by disappointment have led to a crash of interest in AI, and a consequent loss of funding, more than once in the past.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes. Most won't be easy. Some we won't like.

There are three main areas where AI could help: prediction, diagnosis, and treatment.

Prediction

Companies like BlueDot and Metabiota use a range of natural-language processing (NLP) algorithms to monitor news outlets and official health-care reports in different languages around the world, flagging whether they mention high-priority diseases, such as coronavirus, or more endemic ones, such as HIV or tuberculosis. Their predictive tools can also draw on air-travel data to assess the risk that transit hubs might see infected people either arriving or departing.
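To make the flagging step concrete, here is a minimal sketch of the kind of keyword-based triage such monitoring pipelines build on. It is illustrative only: production systems like BlueDot's rely on far more sophisticated multilingual NLP models, and the disease lists and headline below are invented.

```python
# Toy triage step: flag a news item if it mentions a tracked disease.
# Real systems use multilingual NLP models, not bare keyword matching.
HIGH_PRIORITY = {"coronavirus", "ebola", "sars", "mers"}
ENDEMIC = {"hiv", "tuberculosis", "malaria"}

def flag_report(headline):
    """Return a priority label if the headline mentions a tracked disease."""
    words = set(headline.lower().split())
    if words & HIGH_PRIORITY:
        return "high-priority"
    if words & ENDEMIC:
        return "endemic"
    return None

print(flag_report("Unusual pneumonia cluster tied to new coronavirus in Wuhan"))
# -> high-priority
```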

The results are reasonably accurate. For example, Metabiota's latest public report, on February 25, predicted that on March 3 there would be 127,000 cumulative cases worldwide. It overshot by around 30,000, but Mark Gallivan, the firm's director of data science, says this is still well within the margin of error. It also listed the countries most likely to report new cases, including China, Italy, Iran, and the US. Again: not bad.


Others keep an eye on social media too. Stratifyd, a data analytics company based in Charlotte, North Carolina, is developing an AI that scans posts on sites like Facebook and Twitter and cross-references them with descriptions of diseases taken from sources such as the National Institutes of Health, the World Organisation for Animal Health, and the global microbial identifier database, which stores genome sequencing information.

Work by these companies is certainly impressive. And it goes to show how far machine learning has advanced in recent years. A few years ago Google tried to predict outbreaks with its ill-fated Flu Trends service, which was shelved in 2013 after it failed to predict that year's flu spike. What changed? It mostly comes down to the ability of the latest software to listen in on a much wider range of sources.

Unsupervised machine learning is also key. Letting an AI identify its own patterns in the noise, rather than training it on preselected examples, highlights things you might not have thought to look for. "When you do prediction, you're looking for new behavior," says Stratifyd's CEO, Derek Wang.
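As a rough illustration of what identifying "patterns in the noise" can mean in practice, the sketch below runs an off-the-shelf unsupervised anomaly detector over a synthetic stream of daily disease mentions. The data and parameters are invented; this is not how Stratifyd's product works internally.

```python
# Sketch: letting an unsupervised model surface unusual activity instead of
# training it on hand-labeled outbreak examples. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
daily_mentions = rng.poisson(lam=20, size=60).astype(float)  # normal chatter
daily_mentions[-3:] += 80  # a sudden, unlabeled spike in disease mentions

model = IsolationForest(contamination=0.05, random_state=0)
labels = model.fit_predict(daily_mentions.reshape(-1, 1))  # -1 marks anomalies

print(np.where(labels == -1)[0])  # indices of the anomalous days
```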

But what do you do with these predictions? The initial prediction by BlueDot correctly pinpointed a handful of cities in the virus's path. This could have let authorities prepare, alerting hospitals and putting containment measures in place. But as the scale of the epidemic grows, predictions become less specific. Metabiota's warning that certain countries would be affected in the following week might have been correct, but it is hard to know what to do with that information.

What's more, all these approaches will become less accurate as the epidemic progresses, largely because the sort of reliable data that AI needs to feed on has been hard to come by for Covid-19. News sources and official reports offer inconsistent accounts. There has been confusion over symptoms and how the virus passes between people. The media may play things up; authorities may play things down. And predicting where a disease may spread from hundreds of sites in dozens of countries is a far more daunting task than making a call on where a single outbreak might spread in its first few days. "Noise is always the enemy of machine-learning algorithms," says Wang. Indeed, Gallivan acknowledges that Metabiota's daily predictions were easier to make in the first two weeks or so.

One of the biggest obstacles is the lack of diagnostic testing, says Gallivan. "Ideally, we would have a test to detect the novel coronavirus immediately and be testing everyone at least once a day," he says. We also don't really know what behaviors people are adopting (who is working from home, who is self-quarantining, who is or isn't washing hands) or what effect they might be having. If you want to predict what's going to happen next, you need an accurate picture of what's happening right now.

It's not clear what's going on inside hospitals, either. Ahmer Inam at Pactera Edge, a data and AI consultancy, says prediction tools would be a lot better if public health data wasn't locked away within government agencies, as it is in many countries, including the US. This means an AI must lean more heavily on readily available data like online news. "By the time the media picks up on a potentially new medical condition, it is already too late," he says.

But if AI needs much more data from reliable sources to be useful in this area, strategies for getting it can be controversial. Several people I spoke to highlighted this uncomfortable trade-off: to get better predictions from machine learning, we need to share more of our personal data with companies and governments.

Darren Schulte, an MD and CEO of Apixio, which has built an AI to extract information from patients' records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify the individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on the people who need them most. The technology to read patient records and extract life-saving information exists, says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. "I'd like to drop my AI into this big ocean of data," he says. "But our data sits in small lakes, not a big ocean."

Health data should also be shared between countries, says Inam: "Viruses don't operate within the confines of geopolitical boundaries." He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.

Of course, this may be wishful thinking. Different parts of the world have different privacy regulations for medical data. And many of us already balk at making our data accessible to third parties. New data-processing techniques, such as differential privacy and training on synthetic data rather than real data, might offer a way through this debate. But this technology is still being finessed. Finding agreement on international standards will take even more time.

For now, we must make the most of what data we have. Wang's answer is to keep humans around to interpret what machine-learning models spit out and to discard predictions that don't ring true. "If one is overly optimistic or reliant on a fully autonomous predictive model, it will prove problematic," he says. AIs can find hidden signals in the data, but humans must connect the dots.

Early diagnosis

As well as predicting the course of an epidemic, many hope that AI will help identify people who have been infected. AI has a proven track record here. Machine-learning models for examining medical images can catch early signs of disease that human doctors miss, from eye disease to heart conditions to cancer. But these models typically require a lot of data to learn from.

A handful of preprint papers have been posted online in the last few weeks suggesting that machine learning can diagnose Covid-19 from CT scans of lung tissue if trained to spot telltale signs of the disease in the images. Alexander Selvikvåg Lundervold at the Western Norway University of Applied Sciences in Bergen, Norway, who is an expert on machine learning and medical imaging, says we should expect AI to be able to detect signs of Covid-19 in patients eventually. But it is unclear whether imaging is the way to go. For one thing, physical signs of the disease may not show up in scans until some time after infection, making imaging of limited use as an early diagnostic.


What's more, since so little training data is available so far, it's hard to assess the accuracy of the approaches posted online. Most image recognition systems, including those trained on medical images, are adapted from models first trained on ImageNet, a widely used data set encompassing millions of everyday images. "To classify something simple that's close to ImageNet data, such as images of dogs and cats, can be done with very little data," says Lundervold. "Subtle findings in medical images, not so much."

That's not to say it won't happen, and AI tools could potentially be built to detect early stages of disease in future outbreaks. But we should be skeptical about many of the claims of AI doctors diagnosing Covid-19 today. Again, sharing more patient data will help, and so will machine-learning techniques that allow models to be trained even when little data is available. For example, few-shot learning, where an AI can learn patterns from only a handful of results, and transfer learning, where an AI already trained to do one thing can be quickly adapted to do something similar, are promising advances, but still works in progress.
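To illustrate the transfer-learning idea mentioned above, here is a minimal Keras sketch that reuses an ImageNet-trained backbone and retrains only a small classification head. It is a generic pattern, not a validated Covid-19 diagnostic; the input size, binary label, and the commented-out dataset are placeholders.

```python
# Sketch of transfer learning: reuse an ImageNet-trained backbone and train
# only a small head on the few medical images available. Illustrative only.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # disease / no disease
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(small_labeled_ct_dataset, epochs=5)  # hypothetical small dataset
```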

Cure-all

Data is also essential if AI is to help develop treatments for the disease. One technique for identifying possible drug candidates is to use generative design algorithms, which produce a vast number of potential results and then sift through them to highlight those that are worth looking at more closely. This technique can be used to quickly search through millions of biological or molecular structures, for example.
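The generate-then-sift pattern is easy to picture in miniature: propose a huge number of random candidates, score each with a stand-in for a learned model of, say, binding affinity, and keep a shortlist for closer human review. Everything in this sketch is invented for illustration.

```python
# Toy version of the generate-then-sift pattern described above.
import random

def score(candidate):
    # Placeholder for an expensive learned scoring model (e.g. predicted
    # binding affinity); here it just rewards component sums near 10.
    return -abs(sum(candidate) - 10)

# Generate a vast number of random candidate "structures" ...
candidates = [[random.uniform(0, 5) for _ in range(4)] for _ in range(100_000)]
# ... then sift: keep only the top-scoring few for human assessment.
shortlist = sorted(candidates, key=score, reverse=True)[:10]
print(len(shortlist), "candidates kept for closer inspection")
```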

SRI International is collaborating on such an AI tool, which uses deep learning to generate many novel drug candidates that scientists can then assess for efficacy. This is a game-changer for drug discovery, but it can still take many months before a promising candidate becomes a viable treatment.

In theory, AIs could be used to predict the evolution of the coronavirus too. Inam imagines running unsupervised learning algorithms to simulate all possible evolution paths. You could then add potential vaccines to the mix and see if the viruses mutate to develop resistance. "This will allow virologists to be a few steps ahead of the viruses and create vaccines in case any of these doomsday mutations occur," he says.

It's an exciting possibility, but a far-off one. We don't yet have enough information about how the virus mutates to be able to simulate it this time around.

In the meantime, the ultimate barrier may be the people in charge. "What I'd most like to change is the relationship between policymakers and AI," says Wang. AI will not be able to predict disease outbreaks by itself, no matter how much data it gets. "Getting leaders in government, businesses, and health care to trust these tools will fundamentally change how quickly we can react to disease outbreaks," he says. But that trust needs to come from a realistic view of what AI can and cannot do now, and of what might make it better next time.

Making the most of AI will take a lot of data, time, and smart coordination between many different people. All of which are in short supply right now.


The Impact of Python: How It Could Rule the AI World? – insideBIGDATA

Hold your head up high! The rise of artificial intelligence (AI) and machine learning (ML) is poised to bring a new era of civilization, not destroy it.

Yet there's a fear that technology will displace current workers or tasks, and that's partly true. As researchers predict, the speed at which AI is replacing jobs is bound to skyrocket, impacting workers such as factory workers, accountants, radiologists, paralegals, and truckers.

A shuffling and transformation of jobs across the workforce is being witnessed, thanks to the technological epoch.

But hey, we're still far from the Terminator.

What can be the odds?

The fear is understandable: perhaps it is only a matter of time before AI and automation replace the jobs of millions of tech professionals. A 2018 report by the World Economic Forum suggested that around 75 million jobs will be displaced by automation and AI in the next five years. The good news is that even as those jobs are replaced, 133 million newer job roles will be created for AI engineers and AI experts.

Simply said, within the next five years there will be a net gain of roughly 58 million newer job roles in the field of AI.

Instead of worrying about AI and automation stealing your job, you should be considering how you need to reshape your career.

AI and ML in the workplace: How prepared are you for the impact?

AI and machine learning projects are now leading every industry and sector into the future of technological advancements. The question is, what are the best ways for you to bring these experiences into reality? What are the programming languages that can be used for machine learning and AI?

Think ahead: you can start by considering Python for machine learning and AI.

But why Python?

Python is the foundational language for AI. However, AI projects do differ from traditional software projects, so it is necessary to dive deeper into the subject. The crux of building an AI career is learning Python, a programming language that is loved by all because it is both stable and flexible. It is now widely used for machine learning applications and, no surprise, has become one of the best choices across industries.

Here, we will list why Python is the programming language most preferred by AI experts today:

Huge bundle of libraries/frameworks

It is often a tricky task to choose what fits best when running an ML or AI algorithm. It is crucial to have the right set of libraries and a well-structured environment for developers to come up with the best coding solution.

To ease their development time, most developers rely on Python libraries and frameworks. A software library contains pre-written code that developers can look up to solve programming challenges, and this is where Python's extensive set of pre-existing libraries plays a major role, giving developers a wide range of libraries and frameworks to choose from.

With these solutions, it gets easier for developers to build a product faster. Even so, the development team needs to spend time finding the libraries that best suit their project; they can then use an existing library as the basis for further changes.

Holds a strong community and wide popularity

According to a Stack Overflow developer survey (2018), Python was among the most popular programming languages amongst developers. This simply means that for almost every job you seek in the job market, AI will be one of the skill sets that employers look to hire for.

It is also seen that there are more than 140,000 online repositories that host custom-built Python software packages. For instance, Python libraries such as SciPy, NumPy, and Matplotlib can easily be installed in a program that runs on Python.
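For readers new to the ecosystem, those three packages install with a single pip command (`pip install numpy scipy matplotlib`) and work together out of the box; the snippet below is a minimal example of that.

```python
# Minimal example of the three libraries named above working together.
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x))                 # Matplotlib draws the curve
root = optimize.brentq(np.sin, 2, 4)   # SciPy finds sin's root near pi
print(round(root, 5))                  # -> 3.14159
plt.show()
```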

Python was pointed out to be the 8th fastest-growing programming language of 2019, with a growth rate of 151% year on year.

Now, these packages used in machine learning help AI engineers detect patterns in large datasets. Python's popularity is so widely known that even Google uses the language to crawl web pages, the animation studio Pixar uses it to produce movies, and, surprisingly, even Spotify uses Python for song recommendations.

Within the past years, Python has managed to grow its community worldwide. You can find multiple platforms and forums where machine learning solutions are shared. For every problem you've faced, you'll almost always find someone who has been through the same one. Thus, it is easy to find solutions and guidance through this community.

Platform-independent

Platform independence simply means that a programming language or framework lets developers implement things on one machine and use the same code on another machine without changing anything. The best factor about Python is that it is a platform-independent language, supported by platforms such as Windows, macOS, and Linux.

Python code can itself create a standalone program that is executable in most operating systems without even needing a Python interpreter.

Simple and most loved programming language

Python is said to be the simplest and most consistent programming language, offering readable code. While complex algorithms stand behind machine learning, Python's conciseness and readability allow AI professionals to write simple, reliable systems. This lets developers solve complex machine learning problems instead of dealing with the technical issues of the language.

So far, Python is projected to be the language that is easiest for developers to learn. Some say Python is intuitive compared to other programming languages. Others believe it is the number of libraries Python offers that makes it suitable for all developers to use.

In conclusion

Python's power and ease of use have catapulted it into being one of the core languages for machine learning solutions. Moreover, with AI and ML among the biggest innovations since the launch of the microchip, developing a career in this realm will pave the way toward the future.

About the Author

Michael Lyam is a writer, AI researcher, business strategist, and top contributor on Medium. He is passionate about technology and is inspired to find new ways to create captivating content. Michael's areas of expertise are AI, machine learning, data science, and business strategy.



AI Is Coming for Your Most Mind-Numbing Office Tasks – WIRED

In 2018, the New York Foundling, a charity that offers child welfare, adoption, and mental health services, was stuck in cut-and-paste hell.

Clinicians and admin staff were spending hours transferring text between different documents and databases to meet varied legal requirements. Arik Hill, the charity's chief information officer, blames the data entry drudgery for an annual staff turnover of 42 percent at the time. "We are not a very glamorous industry," says Hill. "We are really only just moving on from paper clinical records."

Since then, the New York Foundling has automated much of this grunt work using what are known as software robots: simple programs hand-crafted to perform dull tasks. Often, the programs are built by recording and mimicking a user's keystrokes, such as copying a field of text from one database and pasting it into another, eliminating hours of repetitive-stress-inducing work.

"It was mind-blowing," says Hill, who says turnover has fallen to 17 percent.

To automate the work, the New York Foundling got help from UiPath, a so-called robotic process automation company. That project didn't require any real machine intelligence.

But in January, UiPath began upgrading its army of software bots to use powerful new artificial intelligence algorithms. It thinks this will let them take on more complex and challenging tasks, such as transcription or sorting images, across more offices. Ultimately, the company hopes software robots will gradually learn how to automate repetitive work for themselves.

In other words, if artificial intelligence is going to disrupt white-collar work, then this may be how it begins.

When paired with robotic process automation, AI significantly expands the number and types of tasks that software robots can perform, says Tom Davenport, a professor who studies information technology and management at Babson College.

Consider a company that needs to summarize long-winded, handwritten notes. AI algorithms that perform character recognition and natural language processing could read the cursive and summarize the text, before a software robot inputs the text into, say, a website. The latest version of UiPaths software includes a range of off-the-shelf machine learning tools. It is also now possible for users to add their own machine learning models to a robotic process.
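A hedged sketch of the pipeline just described, assembled from commonly available open-source parts (pytesseract for character recognition and a Hugging Face summarization model); the article does not say which components UiPath actually uses, and the file name is a placeholder.

```python
# Sketch of the handwritten-notes pipeline: OCR -> summarize -> hand off to a
# software robot. Components chosen for illustration, not UiPath's own stack.
from PIL import Image
import pytesseract                  # OCR step (requires Tesseract installed)
from transformers import pipeline   # NLP summarization step

text = pytesseract.image_to_string(Image.open("notes.png"))  # hypothetical file
summarizer = pipeline("summarization")
summary = summarizer(text, max_length=60, min_length=10)[0]["summary_text"]
# A software robot would then paste `summary` into the target form or website.
print(summary)
```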

With all the AI hype, it's notable that so little has found its way into modern offices. But the automation that is there, which simply repeats a person's clicking and typing, is still useful. The technology is mostly used by banks, telcos, insurers, and other companies with legacy systems; market researcher Gartner estimates the industry generated roughly $1.3 billion in revenue in 2019.


Simple software automation is eliminating some particularly repetitive jobs, such as basic data entry, which are often already done overseas. In call centers, fewer people are needed to fill out forms if software can be programmed to open the right documents, find the right fields, and enter text. At the New York Foundling, Hill's software allowed him to redirect eight workers to other tasks.

But Davenport says software robots that use AI could displace more jobs, especially if we head into a recession. "Companies will use it for substantial headcount and cost reductions," he says.

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and the author of several books exploring the impact of technology on the workforce, says robotic process automation will mostly affect middle-skilled office workers, meaning admin work that requires some training.

But it won't happen overnight. He says it took many years for simple software robots, which are essentially descended from screen-scrapers and simple coding tools, to affect office work. "The lesson is just how long it takes for even a relatively simple technology to have an impact on business, because of the hard work it takes to implement it reliably in complex environments," Brynjolfsson notes.


This is how the CDC is trying to forecast the coronavirus's spread – MIT Technology Review

Every year the US Centers for Disease Control and Prevention holds a competition to see who can accurately forecast the flu. Research teams around the country vie with different methods, and the best performers win funding and a partnership with the agency to improve the nations preparation for the next season.

Now the agency is tapping several dozen teams to adapt their techniques to forecast the spread of the coronavirus in an effort to make more informed decisions. Among them is a group at Carnegie Mellon University that, over the last five years, has consistently achieved some of the best results. Last year, the group was designated one of two National Centers of Excellence for Influenza Forecasting and asked to lead the design of a community-wide forecasting process.

Roni Rosenfeld, head of the group and of CMU's machine-learning department, admits he was initially reluctant to take on the coronavirus predictions. To a layperson, it doesn't seem as if forecasting the two diseases should be so different, but doing so for the novel outbreak is significantly harder. Rosenfeld worried about whether his predictions would be accurate, and thus whether they would even be useful. In the end, he was convinced to forge ahead anyway.

"People act on the basis of forecasting models, whether they are on paper or in their heads," he says. "You're better off quantifying these estimations so you can discuss them rationally, as opposed to making them based on intuition."


The lab uses three methods to pinpoint the rise and fall of cases during flu season. The first is what's known as a nowcast: a prediction of the current number of people infected. The lab gathers recent and historical data from the CDC and other partner organizations, including flu-related Google searches, Twitter activity, and web traffic on the CDC, medical sites, and Wikipedia. Those data streams are then fed into machine-learning algorithms to make predictions in real time.
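A nowcast of this flavor can be sketched as a simple regression from proxy signals to case counts. The sketch below uses synthetic data and an off-the-shelf ridge regression; the CMU lab's actual models are considerably more elaborate.

```python
# Minimal nowcast sketch: regress case counts on proxy signals such as search
# volume, tweet counts, and page views. All data here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(52, 3))          # weekly [searches, tweets, page views]
true_w = np.array([3.0, 1.5, 0.5])
y = X @ true_w + rng.normal(scale=0.2, size=52)   # observed case counts

nowcaster = Ridge(alpha=1.0).fit(X[:-1], y[:-1])  # train on past weeks
print(nowcaster.predict(X[-1:]))                  # estimate the current week
```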

The second and third are both proper forecasts: a prediction of what's to come. One is based on machine learning and the other on crowdsourced opinion. Predictions include trends expected up to four weeks ahead, as well as important milestones like when the season will peak and the maximum number of expected cases. Such information helps both the CDC and health-care providers ramp up capacity and prepare in advance.

The machine-learning forecast takes into account the nowcast as well as additional historical data from the CDC. There are 20 years of robust data on flu seasons in the US, providing ample fodder for the algorithms.

In contrast, the crowdsourcing method taps into a group of volunteers. Every week, experts and non-experts (who are found to do just as well with a little participation experience) are asked to log on to an online system and review a chart showing the trajectory of past and current flu seasons. They are then asked to complete the current season's curve, projecting how many more flu cases there will be over time. Though people don't make very good predictions individually, in aggregate they are often just as good as the machine-learning forecast.
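The wisdom-of-the-crowd effect is easy to demonstrate in miniature: individual guesses scatter widely, but a robust aggregate such as the median is often competitive. The numbers below are invented.

```python
# Sketch of aggregating crowdsourced forecasts: individual guesses are noisy,
# but a simple aggregate (here the median) is often a strong forecast.
import statistics

guesses = [1200, 950, 2000, 1100, 1400, 800, 1300]  # volunteers' case forecasts
print(statistics.median(guesses))  # -> 1200, the aggregate forecast
```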


Over the years, Rosenfeld's team has fine-tuned each of its methods to predict the trajectory of the flu with near-perfect accuracy. At the end of each flu season, the CDC retroactively updates the final numbers, giving the CMU lab a chance to see how its projections stack up. The researchers are now adapting all the techniques for Covid-19, but each will pose distinct challenges.

For the machine-learning-based nowcast, many of the data sources will be the same, but the prediction model will be different. The algorithms will need to learn new correlations between the signals in the data and the ground truth. One reason: there's far greater panic around the coronavirus, which causes a completely different pattern of online activity. People will look for coronavirus-related information at much higher rates, even if they feel fine, making it more difficult to tell who may already have symptoms.

In a pandemic situation, there is also very little historical data, which will affect both forecasts. The flu happens on a highly regular cycle each year, while pandemics are erratic and rare. The last pandemic, H1N1 in 2009, also had very different characteristics, primarily affecting younger rather than elderly populations. The Covid-19 outbreak has been precisely the opposite, with older patients facing the highest risk. On top of that, the surveillance systems for tracking cases weren't fully developed back then.

"That's the part that I think is going to be the most challenging," says Rosenfeld, because machine-learning systems, by their nature, learn from examples. He's hopeful that the crowdsourcing method may be more resilient. On the one hand, little is known about how it will fare in pandemic forecasting. On the other hand, people are actually quite good at adjusting to novel circumstances, he says.

Rosenfeld's team is now actively working on ways to make these predictions as good as possible. Flu-testing labs are already beginning to transition to Covid-19 testing and reporting results to the CDC. The CMU lab is also reaching out to other organizations to get as much rich and accurate data as possible (things like anonymized, aggregated statistics from electronic health records and purchasing patterns for anti-fever medication) to find sharper signals to train its algorithms.

To compensate for the lack of historical data from previous pandemics, the team is relying on older data from the current pandemic. It's looking to incorporate data from countries that were hit earlier and will update its machine-learning models as more accurate data is retroactively posted. At the end of every week, the lab will get a report from the CDC with the most up-to-date trajectory of cases in the US, including revisions of numbers from previous weeks. The lab will then revise its models to close the gaps between the original predictions and the rolling statistics.

Rosenfeld worries about the limitations of these forecasts. There is far more uncertainty than he's usually comfortable with: for every prediction the lab provides to the CDC, it will include a range of possibilities. "We're not going to tell you what's going to happen," he says. "What we tell you is what are the things that can happen and how likely is each one of them."
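Expressed in code, a probabilistic forecast of this kind is just a set of outcomes with attached probabilities rather than a single point estimate; the values below are invented for illustration.

```python
# Sketch: reporting a forecast as possible outcomes with probabilities, in the
# spirit Rosenfeld describes, instead of one number. Values are invented.
forecast = {
    "cases_next_week": [
        (50_000, 0.25),   # (outcome, probability)
        (80_000, 0.50),
        (120_000, 0.25),
    ]
}
expected = sum(v * p for v, p in forecast["cases_next_week"])
print(f"expected value: {expected:,.0f}")  # -> 82,500
```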

Even after the pandemic is over, the uncertainty won't go away. "It will be very difficult to tell how good our methods are," he says. "You could be accurate for the wrong reasons. You could be inaccurate for the wrong reasons. Because you have only one season to test it on, you can't really draw any strong, robust conclusions about your methodology."

But in spite of all these challenges, Rosenfeld believes the work will be worthwhile in informing the CDC and improving the agency's preparation. "I can do the best I can now," he says. "It's better than not having anything."


Chelsea Manning Is Ordered Released From Jail – The New York Times

WASHINGTON – A federal judge on Thursday ordered the release of Chelsea Manning, the former Army intelligence analyst who in 2010 leaked archives of military and diplomatic documents to WikiLeaks, and who was jailed last year for refusing to testify before a grand jury that is investigating the organization and its founder, Julian Assange.

The release came one day after Ms. Manning tried to kill herself and was hospitalized, according to her lawyers.

In a brief opinion, the Federal District Court judge overseeing the matter, Anthony J. Trenga, said that on Thursday he had also dismissed the grand jury that Ms. Manning was refusing to testify before, after finding that its business had concluded.

"The court finds that Ms. Manning's appearance before the grand jury is no longer needed, in light of which her detention no longer serves any coercive purpose," Judge Trenga wrote.

However, he said, Ms. Manning would still have to pay $256,000 in fines for her defiance of the subpoena. The judge wrote that enforcement of the accrued, conditional fines would not be punitive but rather necessary to the coercive purpose of the court's civil contempt order.

Ms. Manning was originally jailed a year ago for contempt of court after initially refusing to testify about WikiLeaks and Mr. Assange, but was briefly released when the first grand jury expired. Prosecutors then obtained a new subpoena, and she was locked up again for defying it in May. The moves raise the possibility that prosecutors could start over a third time.

But supporters of Ms. Manning had believed that the grand jury was not set to terminate on March 12, raising the prospect that prosecutors and the judge decided to shut it down early to bring the matter to a close.

"It is my devout hope that she is released to us shortly, and that she is finally given a meaningful opportunity to rest and heal that she so richly deserves," said her lawyer, Moira Meltzer-Cohen.

Joshua Stueve, a spokesman for the office of the U.S. attorney for the Eastern District of Virginia, declined to comment.

The archives that Ms. Manning provided to WikiLeaks in 2010, when she was an Army intelligence analyst posted in Iraq, helped vault the antisecrecy organization and Mr. Assange to global fame. The events took place years before their image and actions evolved with the publication of Democratic emails stolen by Russian hackers during the 2016 election.

Ms. Manning admitted sending the files to WikiLeaks in a court-martial trial. She also confessed to interacting online with someone who was probably Mr. Assange, but she said she had acted on principle and was not working for WikiLeaks.

Testimony showed that she had been deteriorating, mentally and emotionally, during the period when she downloaded the documents and sent them to WikiLeaks. Then known as Pfc. Bradley Manning, she was struggling with gender dysphoria under conditions of extraordinary stress and isolation while deployed to the Iraq war zone.

She was sentenced to 35 years in prison, the longest sentence by far in an American leak case. After her conviction, she changed her name to Chelsea and announced that she wanted to undergo gender transition, but she was housed in a male military prison and twice tried to commit suicide in 2016.

In January 2017, President Barack Obama commuted most of the remainder of her sentence shortly before he left office. But she was swept back up into legal trouble last year when prosecutors investigating Mr. Assange subpoenaed her to testify before a grand jury about their interactions.

Although prosecutors granted immunity for her testimony, Ms. Manning had vowed not to cooperate in the investigation, saying she had ethical objections, and she was placed in civil detention for contempt of court.

Separately last year, the Justice Department unsealed criminal charges against Mr. Assange, who was living in the Ecuadorean Embassy in London. Prosecutors initially charged him with a narrow hacking conspiracy offense, accusing him of agreeing to try to help Ms. Manning crack a password that would have let her log onto a military computer system under a different user account, covering her tracks.


Army Project Touts New Error Correction Method That May be Key Step Toward Quantum Computing – HPCwire

RESEARCH TRIANGLE PARK, N.C., March 12, 2020 An Army project devised a novel approach for quantum error correction that could provide a key step toward practical quantum computers, sensors and distributed quantum information that would enable the military to potentially solve previously intractable problems or deploy sensors with higher magnetic and electric field sensitivities.

The approach, developed by researchers at Massachusetts Institute of Technology with Army funding, could mitigate certain types of the random fluctuations, or noise, that are a longstanding barrier to quantum computing. These random fluctuations can eradicate the data stored in such devices.

The Army-funded research, published in Physical Review Letters, involves identifying the kinds of noise that are the most likely, rather than casting a broad net to try to catch all possible sources of disturbance.

"The team learned that we can reduce the overhead for certain types of error correction on small-scale quantum systems," said Dr. Sara Gamble, program manager for the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "This has the potential to enable increased capabilities in targeted quantum information science applications for the DOD."

The specific quantum system the research team is working with consists of carbon nuclei near a particular kind of defect in a diamond crystal called a nitrogen vacancy center. These defects behave like single, isolated electrons, and their presence enables the control of the nearby carbon nuclei.

But the team found that the overwhelming majority of the noise affecting these nuclei came from one single source: random fluctuations in the nearby defects themselves. This noise source can be accurately modeled, and suppressing its effects could have a major impact, as other sources of noise are relatively insignificant.

The team determined that the noise comes from one central defect, or one central electron that has a tendency to hop around at random. It jitters. That jitter, in turn, is felt by all those nearby nuclei, in a predictable way that can be corrected. The ability to apply this targeted correction in a successful way is the central breakthrough of this research.

The work so far is theoretical, but the team is actively working on a lab demonstration of this principle in action.

If the demonstration works as expected, this research could make up an important component of near and far term future quantum-based technologies of various kinds, including quantum computers and sensors.

ARL is pursuing research in silicon vacancy quantum systems, which share similarities with the nitrogen vacancy center quantum systems considered by the MIT team. While silicon vacancy and nitrogen vacancy centers have different optical properties, and many basic research questions remain open regarding which type(s) of application each may ultimately be best suited for, the error correction approach developed here has the potential to impact both types of systems and, as a result, accelerate progress at the lab.

About U.S. Army CCDC Army Research Laboratory

CCDC Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command. As the Armys corporate research laboratory, ARL discovers, innovates and transitions science and technology to ensure dominant strategic land power. Through collaboration across the commands core technical competencies, CCDC leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more lethal to win the nations wars and come home safely. CCDC is a major subordinate command of the U.S. Army Futures Command.

Source: U.S. Army CCDC Army Research Laboratory Public Affairs


NIST Works on the Industries of the Future in Buildings from the Past – Nextgov

The president's budget request for fiscal 2021 proposed $738 million to fund the National Institute of Standards and Technology, a dramatic reduction from the more than $1 billion in enacted funds allocated for the agency this fiscal year.

The House Science, Space and Technology Committee's Research and Technology Subcommittee on Wednesday held a hearing to home in on NIST's reauthorization, but instead of focusing on relevant budget considerations, lawmakers had other plans.

"We're disappointed by the president's destructive budget request, which proposes over a 30% cut to NIST programs," Subcommittee Chairwoman Rep. Haley Stevens, D-Mich., said at the top of the hearing. "But today, I don't want to dwell on a proposal that we know Congress is going to reject ... today I would like this committee to focus on improving NIST and getting the agency the tools it needs to do better, to do its job."

Per Stevens' suggestion, Under Secretary of Commerce for Standards and Technology and NIST Director Walter Copan reflected on some of the agency's dire needs and offered updates and his views on a range of its ongoing programs and efforts.

NIST's Facilities Are in Bad Shape

President Trump's budget proposal for fiscal 2021 requests only $60 million in funds for facility construction, down from the $118 million enacted for fiscal 2020, and it comes at a time when the agency's workspaces need upgrades.

"Indeed, the condition of NIST facilities is challenging," Copan explained. "Over 55% of NIST's facilities are considered in poor to critical condition per [Commerce Department] standards, and so it does provide some significant challenges for us."

Some of the agency's decades-old facilities and infrastructure are deteriorating, and Copan added that he'd recently heard NIST's deferred maintenance backlog has hit more than $775 million. If lawmakers or the public venture out to visit some of the agency's facilities, "you'll see the good, the bad, and the embarrassingly bad," he said. Those conditions are a testament to the resilience and commitment of NIST's people, who work in sometimes challenging, outdated environments, Copan said.

The director noted that some creative solutions have already been proposed to address the issue, including the development of a federal capital revolving fund. The agency is also looking creatively at combining maintenance with lease options for some of its facilities, in hopes that it can move more rapidly by having its officials cycle out of laboratories to launch rebuilding and renovation processes.

"It's one of my top priorities as the NIST director to have our NIST people work in 21st-century facilities that we can be proud of and that enable the important work of NIST for the nation," Copan said.

Advancing Efforts in Artificial Intelligence and Quantum Computing

The president's budget request placed a sharp focus on industries of the future, which will be powered by many emerging technologies, particularly quantum computing and AI.

During the hearing and in his written testimony, Copan highlighted some of NIST's work in both areas. The agency has helped shape an entire generation of quantum science over the last century, and a significant portion of quantum scientists from around the globe have trained at its facilities. Some of NIST's more recent quantum achievements include supporting the development of a quantum logic clock and helping steer advancements in quantum simulation. Following a recent mandate from the Trump administration, the agency is also in the midst of instituting the Quantum Economic Development Consortium, or QEDC, which aims to advance industry collaboration to expand the nation's leadership in quantum research and development.

"Looking forward, over the coming years NIST will focus a portion of its quantum research portfolio on the grand challenge of quantum networking," Copan's written testimony said. "Serving as the basis for secure and highly efficient quantum information transmission that links together multiple quantum devices and sensors, quantum networks will be a key element in the long-term evolution of quantum technologies."

Though there were cuts across many areas, the president's budget request also proposed a doubling of NIST's funding in artificial intelligence, and Copan said the technology is already broadly applied across all of the agency's laboratories to help improve productivity.

Going forward and with increased funding, he laid out some of the agency's top priorities, noting that there's much work to be done in developing tools to provide insights into artificial intelligence programs, and that there is also important work to be done in standardization, so that the United States can lead the world in the application of [AI] in a trustworthy and ethical manner.

Standardization to Help the U.S. Lead in 5G

Rep. Frank Lucas, R-Okla., asked Copan to weigh in on the moves China is making across the fifth-generation wireless technology landscape, and the moves the U.S. needs to make to lead, not just compete, in that specific area.

"We have entered in the United States, as we know, a hyper-competitive environment with China as a lead in activities related to standardization," Copan responded.

The director said that officials see, in some ways, that the standardization process has been weaponized, and that the free-market economy represented by the United States now needs to lead in more effective internal coordination and incentivize industry to participate in the standards process. Though U.S. officials have already seen those rules of fair play bent or indeed broken by other players, NIST and others need to help improve information sharing across American standards-focused stakeholders, which could, in turn, accelerate adoption of the emerging technology.

"We want the best technologies in the world to win and we want the United States to continue to be the leader in not only delivering those technologies, but securing the intellectual properties behind them and translating those into market value," he said.


Top AI Announcements Of The Week: TensorFlow Quantum And More – Analytics India Magazine

AI is one of the most happening domains in the world right now. It would take a lifetime to skim through all the machine learning research papers released to date. As AI keeps itself in the news through new releases of frameworks, regulations and breakthroughs, we can only hope to get the best of the lot.

So, here we have compiled a list of the top exciting AI announcements from the past week:

Late last year, Google locked horns with IBM in their race for quantum supremacy. Though the news has centered on how good their quantum computers are, not much has been said about implementation. Now Google has brought two of its most powerful frameworks, TensorFlow and Cirq, together and released TensorFlow Quantum, an open-source library for the rapid prototyping of quantum ML models.

The Google AI team, together with the University of Waterloo, X, and Volkswagen, announced the release of TensorFlow Quantum (TFQ).

TFQ is designed to provide developers with the tools necessary to help the quantum computing and machine learning research communities control and model quantum systems.

The team at Google has also released a TFQ white paper with a review of quantum applications, and each example can be run in-browser via Colab from the research repository.

A key feature of TensorFlow Quantum is the ability to simultaneously train and execute many quantum circuits. This is achieved by TensorFlow's ability to parallelise computation across a cluster of computers, and the ability to simulate relatively large quantum circuits on multi-core computers.
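Based on the public TFQ announcement and documentation, a parameterized circuit can be wrapped as a Keras layer and evaluated over a whole batch of circuits at once, which is where that parallelism shows up. Treat this as a hedged sketch of the API rather than canonical usage.

```python
# Hedged sketch: batching parameterized circuits with TensorFlow Quantum.
import cirq, sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))  # trainable rotation

# A batch of (here trivial) input circuits, simulated in parallel.
inputs = tfq.convert_to_tensor([cirq.Circuit() for _ in range(8)])
expectation_layer = tfq.layers.PQC(model_circuit, cirq.Z(qubit))
print(expectation_layer(inputs).shape)  # one expectation value per circuit
```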

As the devastating news of COVID-19 keeps rising at an alarming rate, AI researchers have given us something to smile about. DeepMind, one of the premier AI research labs in the world, announced last week that it is releasing structure predictions of several proteins that can support the ongoing research around COVID-19. It used the latest version of its AlphaFold system to find these structures. AlphaFold is one of the biggest innovations to have come from the labs of DeepMind, and after a couple of years it is exhilarating to see its application to something so critical.

As the pursuit of human-level intelligence in machines intensifies, language modeling will keep surfacing: for one, human language is innately sophisticated, and for another, training language models from scratch is exhausting.

The last couple of years have witnessed a flurry of mega releases from the likes of NVIDIA, Microsoft and especially Google. As BERT topped the charts through many of its variants, Google now announces ELECTRA.

ELECTRA has the benefits of BERT but learns more efficiently. Google also claims that this novel pre-training method outperforms existing techniques given the same compute budget.

The gains are particularly strong for small models; for example, a model trained on one GPU for four days outperformed GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark.

China has been the worst-hit nation of all the COVID-19 victims. However, two of the biggest AI breakthroughs have come from Chinese soil. Last month, Baidu announced how its toolkit brings down prediction time. Last week, another Chinese giant, Alibaba, announced that its new AI system detects the coronavirus from patients' CT scans with 96% accuracy. Alibaba's founder Jack Ma has fueled his team's vaccine development efforts with a $2.15M donation.

Facebook AI has released its in-house feature of converting a two-dimensional photo into a video byte that gives the feel of having a more realistic view of the object in the picture. This system infers the 3D structure of any image, whether it is a new shot just taken on an Android or iOS device with a standard single camera, or a decades-old image recently uploaded to a phone or laptop.

The feature had been available only on high-end phones, through the dual-lens portrait mode. But now it will be available on every mobile device, even those with a single, rear-facing camera. To bring this new visual format to more people, the researchers at Facebook used state-of-the-art ML techniques to produce 3D photos from virtually any standard 2D picture.

One significant implication of this feature can be an improved understanding of 3D scenes that can help robots navigate and interact with the physical world.

As the whole world focused on the race to quantum supremacy between Google and IBM, Honeywell has silently been building what it claims is the most powerful quantum computer yet. And it plans to release it by the middle of 2020.

"Thanks to a breakthrough in technology, we're on track to release a quantum computer with a quantum volume of at least 64, twice that of the next alternative in the industry. There are a number of industries that will be profoundly impacted by the advancement and ultimate application of at-scale quantum computing," said Tony Uttley, President of Honeywell Quantum Solutions, in the official press release.

The outbreak of COVID-19 has created a panic globally, and rightfully so. Many flagship conferences have either been cancelled or moved to a virtual environment.

Nvidia's flagship GPU Technology Conference (GTC), which was supposed to take place in San Francisco in the last week of March, was cancelled due to fears of the COVID-19 coronavirus.

Google Cloud, too, has cancelled its upcoming event, Google Cloud Next '20, which was slated to take place on April 6-8 at the Moscone Center in San Francisco. "Due to the growing concern around the coronavirus (COVID-19), and in alignment with the best practices laid out by the CDC, WHO and other relevant entities, Google Cloud has decided to reimagine Google Cloud Next '20," the company stated on its website.

ICLR 2020, one of the popular conferences for ML researchers, has also announced that it is cancelling its physical conference this year due to growing concerns about COVID-19 and shifting the event to a fully virtual conference.

ICLR authorities also issued a statement saying that all accepted papers at the virtual conference will be presented using a pre-recorded video.



Quantum Computing for Enterprise Market Share, Opportunities, Trends, and Forecasts to 2020-2024: 1QB Information Technologies, Airbus, Anyon Systems, …

Global Quantum Computing for Enterprise Market Professional Survey 2019 by Manufacturers, Regions, Types and Applications, Forecast to 2024. This report offers a detailed view of market opportunity by end user segments, product segments, sales channels, key countries, and import/export dynamics. It details market size & forecast, growth drivers, emerging trends, market opportunities, and investment risks across various segments in the Quantum Computing for Enterprise industry. It provides a comprehensive understanding of Quantum Computing for Enterprise market dynamics in both value and volume terms.

The key players covered in this study: 1QB Information Technologies, Airbus, Anyon Systems, Cambridge Quantum Computing, D-Wave Systems, Google, Microsoft, IBM, Intel, QC Ware, Quantum, Rigetti Computing, Strangeworks, Zapata Computing.


This report focuses on the global Quantum Computing for Enterprise status, future forecast, growth opportunity, key market and key players. The study objectives are to present the Quantum Computing for Enterprise development in North America, Europe, China, Japan, Southeast Asia, India and Central & South America.

Table Of Content

1 Report Overview

2 Global Growth Trends

3 Market Share by Key Players

4 Breakdown Data by Type and Application

5 North America

6 Europe

7 China

8 Japan

9 Southeast Asia

10 India

11 Central & South America

12 International Players Profiles

13 Market Forecast 2019-2025

14 Analysts' Viewpoints/Conclusions

15 Appendix

This report studies the Quantum Computing for Enterprise market status and outlook of Global and major regions, from angles of players, countries, product types and end industries; this report analyzes the top players in global market, and splits the Quantum Computing for Enterprise market by product type and applications/end industries.

Customization of this Report: This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs. For more relevant reports visit http://www.reportsandmarkets.com

What to Expect From This Report on Quantum Computing for Enterprise Market:

Developmental plans for your business based on production costs, product value, and more for the coming years.

A detailed overview of regional distributions of popular products in the Quantum Computing for Enterprise Market.

How do the major companies and mid-level manufacturers make a profit within the Quantum Computing for Enterprise Market?

An estimate of the barriers new players face when entering the Quantum Computing for Enterprise Market.

Comprehensive research on the overall expansion within the Quantum Computing for Enterprise Market for deciding the product launch and asset developments.

To Know More about This Report

Any special requirements about this report, please let us know and we can provide custom report.

About Us:

Market research is the new buzzword in the market, which helps in understanding the market potential of any product in the market. Reports And Markets is not just another company in this domain but is a part of a veteran group called Algoro Research Consultants Pvt. Ltd. It offers premium progressive statistical surveying, market research reports, analysis & forecast data for a wide range of sectors both for the government and private agencies all across the world.

For more detailed information please contact us at:

Sanjay Jain

Manager Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)


We May Be Living in a Simulation, but the Truth Still Matters – The New York Times

Wednesday night, in no particular order in the space of an hour: The N.B.A. suspended its season. Tom Hanks announced that he and his wife have the coronavirus. President Trump, who had spent time hate-tweeting Vanity Fair magazine earlier in the day, banned travel from Europe. And, of course, the former vice-presidential candidate Sarah Palin, wearing a pink, fluffy bear outfit, sang Sir Mix-A-Lot's "Baby Got Back" on "The Masked Singer." Correction: badly sang it.

In perhaps the most accurate assessment of the night, Josh Jordan tweeted: "We are living in a simulation and it has collapsed on itself."

I do not believe in the simulation hypothesis, which he is joking about here. For those not familiar, it posits that what we think of as reality is not actually real. Instead, we are living in a complex simulation that was probably created by a supercomputer, invented by an obviously superior being.

Everything's fake news, if you will, or really just designed as a giant video game to amuse what would have to be the brainiest teenagers who ever lived.

Crazy, right?

But while most people think they actually do exist, wouldn't it be nice to have a blame-free explanation to cope with the freak show that has become our country and the world? (I vote yes, even if some quantum computer just made me type that.)

It would be, which is why the idea of the simulation hypothesis has been a long-running, sort-of joke among some of Silicon Valley's top players, some of whom take it more seriously than you might imagine.

Some background: While the basic idea around the simulation hypothesis really goes back to philosophers like Descartes, we got a look-see at this tech-heavy idea in the 1999 movie The Matrix.

In the film, Keanu Reeves's character, Neo, is jarred out of his anodyne existence to find that he has been living, unaware, in a virtual world in which the energy from his body, and everyone else's, is used as fuel for the giant computer. Neo's body is literally jacked with all kinds of scary-looking plugs, and he finally becomes powerful enough to wave his hands around real fast and break the bad guys into itty-bitty bytes.

The idea that we're all living in a simulation took off big time among tech folks in 2003, when Oxford University's big thinker of the future, Nick Bostrom, wrote a paper on the subject. He focused on the likely amazing computing abilities of advanced civilizations and the fact that it is not too crazy to imagine that the devices they make could simulate human consciousness.

So why not do that to run what Mr. Bostrom called the ancestor simulation game? The ancestors, by the way, are us.

My mind was blown again a few years later on the topic. During an interview that Walt Mossberg and I did in 2016 with the tech entrepreneur Elon Musk, an audience member asked Mr. Musk what he thought of the idea. As it turned out, he had thought a lot about it, saying that he had had so many simulation discussions "it's crazy."

Which was not to say the discussions were crazy. In fact, Mr. Musk quickly made the case that video game development had become so sophisticated that it was indistinguishable from reality.

And, as to that base reality we think we are living in? Not so much, said Mr. Musk. In fact, he insisted this was a good thing, arguing that either we're going to create simulations that are indistinguishable from reality or civilization will cease to exist. Those are the two options.

Oh my.

I would like to tell you that was not the last time I heard that formulation, or one like it, from the tech moguls I have covered. The Zappos founder Tony Hsieh once told me we were in one after we did an interview, as we were exiting the stage. I think he was kidding, but he also went over why it might be so and why it was important to bend your mind to consider the possibility.

After hearing the simulation idea so many times, I started to figure out that it was less about the idea that none of this is real. Instead, these tech inventors used it more to explain, inspire and even to force innovation, rather than to negate reality and its inherently hopeless messiness. In fact, it was freeing.

At least that is my take, giving me something that I could like about them, since there was so much not to like.

To my mind, tech leaders do not use the simulation hypothesis as an excuse to do whatever they want. They're not positing that nothing matters because none of this is happening. Instead, it allows them to hold out the possibility that this game could also change for the better rather than the worse. And, perhaps, we as pawns have some influence on that outcome too and could turn our story into a better one.

Perhaps this optimism was manifesting in the hopeful news that the Cleveland Clinic may have come up with a faster test for the coronavirus. Or that Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases and a key member of the coronavirus task force, exists as a scientific superhero to counter all the bad information that is spewed out to vulnerable citizens like my own mother by outlets like Fox News.

In fact, it felt like a minor miracle when the tireless Dr. Fauci popped up on Sean Hannity's show this week to kindly school him on his irresponsible downplaying and deep-state conspiracy mongering of the health crisis. Pushing back on the specious claim that the coronavirus is just like the flu, a notion also promoted by Mr. Trump, Dr. Fauci said, "It's 10 times more lethal than the seasonal flu," to a temporarily speechless Mr. Hannity. "You got to make sure that people understand that!"

I sure have Dr. Fauci to thank for saying that, which he repeated in congressional testimony too. In all this mess, it felt like a positive turn in the game. But just in case a game it is, I'll also raise a simulated glass to those teenagers somewhere out there pushing all the buttons to make it so. Not so much for Sarah Palin's singing, but I'll take that too.
