Julian Assange was ‘handcuffed 11 times and stripped naked …

Julian Assange was handcuffed 11 times, stripped naked twice and had his case files confiscated after the first day of his extradition hearing, according to his lawyers, who complained of interference in his ability to take part.

Their appeal to the judge overseeing the trial at Woolwich crown court in south-east London was also supported by legal counsel for the US government, who said it was essential the WikiLeaks founder be given a fair trial.

Edward Fitzgerald QC, acting for Assange, said the case files, which the prisoner was reading in court on Monday, were confiscated by guards when he returned to prison later that night and that he was put in five cells.

The judge, Vanessa Baraitser, replied that she did not have the legal power to comment or rule on Assange's conditions but encouraged the defence team to formally raise the matter with the prison.

The details emerged on the second day of Assange's extradition hearing, during which his legal team denied that he had knowingly placed lives at risk by publishing unredacted US government files.

The court was told WikiLeaks had entered into a collaboration with the Guardian, El País, the New York Times and other media outlets to make redactions to 250,000 leaked cables in 2010 and publish them.

Mark Summers QC claimed the unredacted files had been published because a password to this material had appeared in a Guardian book on the affair. "The gates got opened not by Assange or WikiLeaks but by another member of that partnership," he said.

The Guardian denied the claim.

"The Guardian has made clear it is opposed to the extradition of Julian Assange. However, it is entirely wrong to say the Guardian's 2011 WikiLeaks book led to the publication of unredacted US government files," a spokesman said.

"The book contained a password which the authors had been told by Julian Assange was temporary and would expire and be deleted in a matter of hours. The book also contained no details about the whereabouts of the files. No concerns were expressed by Assange or WikiLeaks about security being compromised when the book was published in February 2011. WikiLeaks published the unredacted files in September 2011."

The Guardian's former investigations editor David Leigh, who wrote the book with Luke Harding, said: "It's a complete invention that I had anything to do with Julian Assange's own publication decisions. His cause is not helped by people making things up."

Assange, 48, is wanted in the US to face 18 charges of attempted hacking and breaches of the Espionage Act. They relate to the publication a decade ago of hundreds of thousands of diplomatic cables and files covering areas including US activities in Afghanistan and Iraq.

The Australian, who could face a 175-year prison sentence if found guilty, is accused of working with the former US army intelligence analyst Chelsea Manning to leak classified documents.

As well as rejecting allegations that Assange had put the lives of US sources in danger, much of the hearing was taken up with defence counter-arguments to the US case that he helped Manning to crack a scrambled password stored on US Department of Defense computers in order to continue sending leaked material to WikiLeaks.

"You can accurately describe this chapter of the case as lies, lies and more lies," Summers told the court at the outset of the day.

Manning already had access to the information and did not need to decode the scrambled password, or "hash value". Nor could she have done so, as is alleged, in order to gain someone else's password, because access to the system was recorded on the basis of IP addresses, Summers said.

As for the US contention that Assange had solicited leaks from Manning, a whistleblower who served more than six years of a 35-year military prison sentence before it was commuted by Barack Obama, Summers drew on Manning's insistence that she was moved by her conscience.

James Lewis QC responded for the US government by accusing the defence of consistently misrepresenting the US indictment of Assange, adding: "What he [Summers] is trying to do is consistently put up a straw man and then knock it down."

For example, on the question of cracking the password hash, he emphasised that the US was making a general allegation that doing so would make it more difficult for the authorities to identify the source of the leaks.

Lewis rejected claims made on Monday by the defence that the US had deliberately ratcheted up the charges against Assange after Swedish authorities announced in May 2019 their intention to reopen the investigation into Assange over alleged sexual offences and issue a European arrest warrant.

"The inference that charging Mr Assange with publishing the names of sources was simply ratcheting up the charges is defeated by the objective facts that the [US] grand jury found and indicted him on," he said.

"It just does not follow we will ratchet up the charges in case there might be a competition. We have a clear, unequivocal and legal basis for charging him and that is the end of it."

The hearing continues.


UK govt won’t release Assange amid virus – The Canberra Times



WikiLeaks founder Julian Assange isn't eligible to be temporarily released from jail as part of the UK government's plan to mitigate coronavirus in prisons.

There are now 88 prisoners and 15 staff who have tested positive for COVID-19 in the country and more than a quarter of prison staff are absent or self-isolating due to the pandemic.

Justice Secretary Robert Buckland has announced that selected low-risk offenders, who are within weeks of their release dates, will be GPS-tagged and temporarily freed to ease pressure on the National Health Service.

"This government is committed to ensuring that justice is served to those who break the law," he said in a statement on Saturday.

"But this is an unprecedented situation because if coronavirus takes hold in our prisons, the NHS could be overwhelmed and more lives put at risk."

The Ministry of Justice confirmed to AAP that Julian Assange, who is being held on remand in Belmarsh prison, will not be temporarily released because he's not serving a custodial sentence and is therefore not eligible.

The government is also working to expedite sentencing hearings for those on remand to reduce crowding in jails, but the Australian won't be affected by that measure either.

The WikiLeaks founder is only one week into his four-week US extradition hearing and at this stage it's uncertain whether it will resume as planned at Woolwich Crown Court on May 18.

Assange's next procedural hearing is set for Westminster Magistrates' Court on Tuesday.

He applied for bail last week, with his lawyers citing concerns about the risk of coronavirus, but he was knocked back by District Judge Vanessa Baraitser.

In her ruling, she said the Australian had skipped bail in the past and taken refuge in the Ecuadorian embassy in London for almost seven years, making him a flight risk.

The US government is trying to extradite Assange to face 17 charges of violating the Espionage Act and one of conspiring to commit computer intrusion over the leaking and publishing of thousands of classified US diplomatic and military files.

Some of those files revealed alleged US war crimes in Iraq and Afghanistan.

The US charges carry a total sentence of 175 years' imprisonment.

Australian Associated Press


Self-supervised learning is the future of AI – The Next Web

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's masterplan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.
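To make that data dependency concrete, here is a minimal Python sketch of supervised learning: a tiny classifier that can only learn because every training example arrives with a label. The toy data and model are illustrative assumptions, not anything from LeCun's talk.

```python
# Minimal sketch of supervised learning: the classifier only learns from (input, label) pairs.
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data: 2-D points with class 0 or 1 (stand-ins for labeled images).
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(+1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

print("training accuracy:", np.mean((p > 0.5) == y))
```

Remove the labels `y` and this whole procedure has nothing left to fit, which is the point LeCun is making about supervised learning's appetite for annotated data.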

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, as well as unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as, with some caveats, reviewing the huge amount of content being posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).
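As a concrete illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch in Python. The toy "corridor" environment, the reward scheme and the hyperparameters are invented for illustration; none of this is taken from the systems discussed in the article.

```python
# Minimal sketch of reinforcement learning by trial and error (tabular Q-learning).
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right; reward only at the last state
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Explore randomly sometimes, otherwise exploit the current value estimates.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0            # reward only when the goal is reached
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # learned policy: move right in every state
```

The agent starts with no knowledge of the environment and only discovers the goal by stumbling into it, which is exactly why the approach needs so many sessions on harder problems.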

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, much more than humans can play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another game.

Reinforcement learning really shows its limits when it comes to solving real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? It's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after their birth. While there's debate on how much of these capabilities is hardwired into the brain and how much is learned, what is for sure is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and System 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 covers the kinds of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody has a completely good answer to which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class or model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
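A minimal Python sketch of that "fill in the blanks" setup is below: it turns unlabeled text into (masked input, target) training pairs, which is the kind of self-supervision LeCun describes. The whitespace tokenization and the masking rate are simplified assumptions for illustration.

```python
# Minimal sketch of the self-supervised "fill in the blanks" idea: derive training pairs
# from raw, unlabeled text by hiding some tokens and keeping them as prediction targets.
import random

random.seed(0)

def make_masked_example(sentence, mask_rate=0.3, mask_token="[MASK]"):
    tokens = sentence.split()
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok          # the model would be trained to predict this hidden token
        else:
            masked.append(tok)
    return " ".join(masked), targets

text = "the cat sat on the mat and watched the rain fall outside"
masked_input, targets = make_masked_example(text)
print(masked_input)   # some tokens replaced by [MASK]
print(targets)        # supervision comes from the data itself, no human labels needed
```

The supervision signal is manufactured from the data itself, which is why such systems can train on raw text or video without anyone annotating it.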

The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.

More recently, AI researchers have proven that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers might enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.
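A tiny numpy example makes the blurriness problem visible: if two sharp futures are equally plausible, the pixel-wise average a network falls back on is a faint blend of both rather than either one. The 8x8 "frames" are invented purely for illustration.

```python
# Minimal sketch of why predicting a single "average" future gives blurry images.
import numpy as np

frame = np.zeros((8, 8))
future_left, future_right = frame.copy(), frame.copy()
future_left[:, 1] = 1.0    # in one plausible future the object is on the left
future_right[:, 6] = 1.0   # in another it is on the right

mean_prediction = (future_left + future_right) / 2.0
print(mean_prediction.max())   # 0.5 -- neither future, just a faint ghost of both
```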

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored method of approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z which computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video) and selects the outcome with the best compatibility score. In his speech, LeCun further elaborates on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
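To illustrate the shape of that idea, here is a toy Python sketch that scores candidate predictions Y against an observation X with an energy function, minimizing over a latent variable Z and keeping the lowest-energy (most compatible) candidate. The energy function and the numbers are invented for illustration and are not LeCun's actual model.

```python
# Toy illustration of a latent-variable energy-based model: lower energy = more compatible.
import numpy as np

def energy(x, y, z):
    # z shifts the prediction before comparison; the second term keeps z small.
    return np.sum((x + z - y) ** 2) + 0.1 * np.sum(z ** 2)

x = np.array([0.0, 1.0])                       # current observation (e.g. the current frame)
candidates = [np.array([0.0, 1.0]),            # candidate futures y
              np.array([1.0, 2.0]),
              np.array([5.0, 5.0])]
z_grid = [np.array([dz, dz]) for dz in np.linspace(-2, 2, 41)]

# For each candidate, minimise the energy over the latent variable, then keep the best candidate.
scores = [min(energy(x, y, z) for z in z_grid) for y in candidates]
best = candidates[int(np.argmin(scores))]
print(scores, best)    # the prediction most compatible with x wins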

"I think self-supervised learning is the future. This is what's going to allow our AI systems, deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information outputted by the AI. In reinforcement learning, training the AI system is performed at the scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output improves to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how the uncertainty problem works, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC


University of Cambridge researchers develop machine learning app to collect the sounds of Covid-19 – Cambridge Independent


Researchers at the University of Cambridge have developed an app that will collect the sounds of Covid-19.

The Covid-19 Sounds App will be used to gain data to develop machine learning algorithms that could automatically detect whether a person is suffering from the disease.

It would be based on the sound of their voice, their breathing and coughing.
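As an illustration of the kind of preprocessing such a system might perform before any classification, here is a short Python sketch that converts a raw audio signal into a log-spectrogram, a common input representation for sound-based models. The synthetic "cough" and all parameters are assumptions for illustration; this is not the project's code.

```python
# Minimal sketch: turn a raw audio signal into a spectrogram feature a model could learn from.
import numpy as np
from scipy.signal import spectrogram

fs = 16000                                     # 16 kHz sample rate
t = np.arange(0, 2.0, 1 / fs)
audio = 0.01 * np.random.randn(len(t))         # quiet background noise
audio[8000:12000] += np.random.randn(4000)     # a short noisy burst standing in for a cough

freqs, times, Sxx = spectrogram(audio, fs=fs, nperseg=512)
features = np.log(Sxx + 1e-10)                 # log-spectrogram: a typical classifier input
print(features.shape)                          # (frequency bins, time frames)
```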

"There's still so much we don't know about this virus and the illness it causes, and in a pandemic situation like the one we're currently in, the more reliable information you can get, the better," said Professor Cecilia Mascolo from Cambridge's department of computer science and technology, who led the development of the app.

"Being a respiratory condition, the sounds made by people with the condition, including voice, breathing and cough sounds, are very specific."

"A large, crowdsourced data set will be useful in developing machine learning algorithms that could be used for automatic detection of the condition."

The app collects basic demographic and medical information from users, as well as spoken voice samples, breathing and coughing samples through the phone's microphone.

It will ask users if they have tested positive for the coronavirus, and collect one coarse-grained location sample.

But it will not track users and will only collect location data once when users are actively using it.

Data will be stored on university servers and used solely for research purposes.

Once the initial analysis of the collected data has been completed, it will be released to other researchers and could help shed light on disease progression or the further relationship of the respiratory complication with medical history, for example.

"Having spoken to doctors, one of the most common things they have noticed about patients with the virus is the way they catch their breath when they're speaking, as well as a dry cough, and the intervals of their breathing patterns," said Prof Mascolo.

"There are very few large datasets of respiratory sounds, so to make better algorithms that could be used for early detection, we need as many samples from as many participants as we can get."

"Even if we don't get many positive cases of coronavirus, we could find links with other health conditions."

The study has been approved by the ethics committee of the department of computer science and technology, and is partly funded by the European Research Council through Project EAR.

Professor Pietro Cicuta, from Cambridge's Cavendish Laboratory and a member of the team behind the app's development, said: "I am amazed at the speed that we managed to connect across the university to conceive this project, and how Cecilia's team of developers came together to respond to the urgency of the situation."

The app is available as a web app, and versions for Android and iOS will be available soon.


Threat detection and the evolution of AI-powered security solutions – Help Net Security

Ashvin Kamaraju is a true industry leader. As CTO and VP of Engineering, he drives the technology strategy for Thales Cloud Protection & Licensing, leading a team of researchers and technologists that develops the strategic vision for data protection products and services. In this interview, he discusses automation, artificial intelligence, machine learning and the challenges related to detecting evolving threats.

Discovering an unknown cyber-threat is like trying to find a needle in a haystack. With this enlarged target surface area and a growing number of active hackers, automation, and specifically machine learning, can be important in addressing this issue through its ability to provide CISOs with the insights they need.

Consequently, it enables an opportunity for CISOs to more effectively deploy their human analysts against potential cyber-attacks and data breaches. However, just because an organization has an automation/AI system in place, this doesn't mean it's secure. Countering cyber-threats is a constant game of cat and mouse, and hackers always want to get the maximum reward from the minimum effort, tweaking known attack methods as soon as these are detected by the AI. CTOs therefore need to make sure that the AI system is routinely exercised and fed new data, and that the algorithms are trained to understand the new data.

The first thing to note is AI should not be confused with machine learning. What most people associate with AI is actually machine learning algorithms with no human level intelligence. AI is based on heuristics whereas machine learning requires a lot of data and algorithms that must be trained to learn the data and provide insights that will help to make decisions.

While the insights provided by AI/machine learning algorithms are very valuable, they are dependent on the data used. If the data has anomalies or is not representative of the entire scope of the problem domains, there will be bias in the insights. These must then be reviewed by an expert team in place to add technical and contextual awareness to the data. AI is here to stay, as data sets become more and more complex, but it will only be effective when added with human intelligence.

AI is beneficial to organizations if it can be used effectively, in addition to human intelligence, not in lieu of it. Due to the rapid rise in the amount of data out there, and with the growing number of threats businesses now face, AI and machine learning will play an increasingly important role for those that embrace them.

However, it requires constant investment, not necessarily from a cost perspective, but from a time aspect, as it needs to be kept up-to-date with fresh data to adapt to the changing threat landscape. Organizations need to decide if they have the capabilities to use AI in the right way, or it can soon become an expensive mistake.

Cyber-attacks are getting harder to detect as technology evolves to align more closely with how business operates, creating new issues. The adoption of mobile phones, tablets, and IoT devices as part of digital transformation strategies is increasing the threat landscape by opening companies up to connect with more people outside their organization.

As the attack surface area expands, and thousands more hackers get in on the action, IT experts are being forced to deal with protecting near-infinite amounts of data and multiple entry points where hackers can get in. Where hacking once took dedication and expertise, with zero-day attacks targeting mostly unknown vulnerabilities, anyone can launch a DDoS attack with hacking toolkits and thousands of tutorials freely available online.

So, to defend themselves going into the future, AI can play a key part. With a new, evolved role in cybersecurity, experts and researchers can leverage AI to identify and counteract sophisticated cyber-attacks with minimal human intervention in the first instance. However, AI will always need that human intelligence to provide the context of the data that it is evaluating and has flagged as potentially malicious.

Any new CISO walking into a large enterprise could be forgiven for potentially feeling daunted at the responsibility for protecting that company's assets. Several questions would spring to mind, from where to start to what to protect. Here are six simple steps to get them started:

1. Know the where and the what of your data: Prior to implementing any long-term security strategy, CISOs must first conduct a data sweep. Auditing all data within the perimeter helps identify not only what the company has collected, but where it is holding its most sensitive data. It's impossible to protect data if you don't know where it is.

2. Securing sensitive data is the key: Technology such as encryption will provide a key layer of defense for the data, rendering it useless even if hackers access it. Whether it's stored on their own servers, in a public cloud, or in a hybrid environment, security-minded tools like encryption must be implemented.

3. Protect the data encryption keys: Encrypting data creates an encryption key, a unique tool used to unlock the data, making it accessible only to those who have access to the key. Safe storage of these keys is crucial and needs to be done offsite to ensure they aren't located in the same place as the data, which would put both at risk. (A brief code sketch of this idea appears after this list.)

4. Forget single-factor authentication: The next step is to employ strong multi-factor authentication, ensuring authorized parties can access only the data they need. Two-factor authentication requires an extra layer of information to verify the user's password, such as entering a specific code they receive through their smartphone. Since passwords can be hacked easily, two-factor authentication is necessary for a successful security strategy. Multi-factor authentication takes this a step further by requiring additional context such as a device ID, location or IP address. (A one-time-code sketch also appears after this list.)

5. Up-to-date software: Vendors are constantly patching their software and hardware to prevent cyber criminals from exploiting bugs and other vulnerabilities that emerge. Many companies have relied on software that isn't regularly patched, or simply haven't applied new patches soon enough. Companies must install the most recent patches or risk becoming a victim of hackers.

6. Evaluate and go again: After implementing the above, the process must be repeated for all new data that comes into the system. GDPR-led compliance is a continual process and applies to future data as much as it does to what is just entering the system and what is already there. Making a database unattractive to hackers is central to a good cybersecurity strategy. Done correctly, these processes will make data relevant only to those allowed to access it.
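To make steps 2 and 3 concrete, here is a minimal Python sketch using the cryptography package's Fernet recipe: the data is encrypted before storage and the key is written to a separate location. The file names and the toy record are illustrative assumptions, not a production design.

```python
# Minimal sketch: encrypt sensitive data and keep the key somewhere other than the data store.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # in practice, generated and held in a KMS or HSM
cipher = Fernet(key)

record = b"patient_id=1234;diagnosis=confidential"
token = cipher.encrypt(record)                 # this ciphertext is what actually gets stored

# Store ciphertext and key in different places, so a breach of one is not a breach of both.
open("data_store.bin", "wb").write(token)
open("offsite_key_escrow.bin", "wb").write(key)

print(Fernet(key).decrypt(token) == record)    # only a key holder can recover the data
```

And for step 4, a minimal sketch of a time-based one-time password, the "extra layer" a user reads from their phone, using the pyotp library. The provisioning flow shown is simplified for illustration.

```python
# Minimal sketch of a second authentication factor: a time-based one-time code (TOTP).
import pyotp

secret = pyotp.random_base32()       # provisioned once per user, e.g. via a QR code
totp = pyotp.TOTP(secret)

code_from_phone = totp.now()         # what the authenticator app would display right now
print(totp.verify(code_from_phone))  # True: password plus this code = two independent factors
```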


Two startups find ways to bring AI to the edge – Stacey on IoT

Steve Teig, the CEO of the newly created Perceive Corp. Image courtesy of Perceive.

The market for specialty silicon that enables companies to run artificial intelligence models on battery-sipping and relatively constrained devices is flush with funds and ideas. Two new startups have entered the arena, each proposing a different way to break down the computing-intensive tasks of recognizing wake words, identifying people, and other jobs that are built on neural networks.

Perceive, which launched this week, and Kneron (pronounced "neuron"), which launched in March, are relying on neural networks at the edge to reduce bandwidth, speed up results, and protect privacy. They join a dozen or more startups all trying to bring specialty chips to the edge to make the IoT more efficient and private.

Perceive was spun out of Xperi, a semiconductor company that has built hundreds of AI models to help identify people, objects, wake words, and other popular use cases for edge AI. Two-year-old Perceive has built a 7mm x 7mm chip designed to run neural networks at the edge, but it does so by changing the way the training is done so it can build smaller models that are still accurate.

In general, when a company wants to run neural networks on an edge chip, it must make that model smaller, which can reduce accuracy. Designers also build special sections of the chip that can handle the specific type of math required to run the convolutions used in running a neural network. But Perceive threw all of that out the window, instead turning to information theory to build efficient models.

Information theory is all about finding the signal in a bunch of a noise. When applied to machine learning it is used to ascertain which features are relevant in figuring out if an image is a dog or a cat, or if an individual person is me or my husband. Traditional neural networks are trained by giving a computer tens or hundreds of thousands of images and letting them ascertain which elements are most important when it comes to determining what an object or person is.

Perceive's methodology requires less training data, and CEO Steve Teig says that its end models are smaller, which is what allows them to run efficiently on a lower-power chip. The result of the Perceive training is expressed in PyTorch, a common machine learning framework. The company currently offers a chip as well as a service that will help generate custom models. Perceive has also developed hundreds of its own models based on the work done by Xperi.

According to Teig, Perceive has already signed two substantial customers, neither of which can be named, and is in talks with connected device makers ranging from video doorbell makers to toy companies.

The other chip startup tackling machine learning is Kneron, formed in 2015. It has built a chip that can reconfigure an element on it specifically for the type of machine learning model it needs to run. When an edge chip has to run a machine learning model it needs to do a lot of math, which has led chipmakers to put special coprocessors on the chip that can handle a type of math known as matrix multiplication. (The Perceive method of training models doesn't require matrix multiplication.)
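For context on why matrix multiplication dominates this discussion, here is a tiny numpy sketch: a single dense neural-network layer boils down to one matrix multiply plus a nonlinearity, which is exactly the operation edge coprocessors are built to accelerate. The sizes are arbitrary illustrative values.

```python
# Minimal sketch of why inference hardware cares about matrix multiplication.
import numpy as np

x = np.random.randn(1, 128)          # one input activation vector
W = np.random.randn(128, 64)         # layer weights
b = np.zeros(64)

hidden = np.maximum(0, x @ W + b)    # matmul + ReLU; edge accelerators optimise the "@" step
print(hidden.shape)                  # (1, 64)
```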

This flexibility, and the promise it has to enable devices to run local AI, has led Kneron to raise $73 million. Eventually, Kneron hopes to be able to tackle learning at the edge, with CEO Albert Liu promising that the company might be able to offer simplified learning later this year. (Today, all edge AI chips can only match inputs against an existing AI model, as opposed to taking input from the environment and creating a new model.)

Both Perceive and Kneron are riding high on the promise of delivering more intelligence to products that don't need to stay connected to the internet. As privacy, power management, and local control continue to rise in importance, the two companies are joining a host of startups trying to make their hardware the next big thing in silicon.



Machine Learning in Pharmaceutical Market Business Opportunities and Global Industry Analysis by 2026- Key Players are McKinsey, Boston, IBM Watson -…

The research report on the Machine Learning in Pharmaceutical Market is a deep analysis of the market. This latest report covers the current COVID-19 impact on the market. The coronavirus (COVID-19) pandemic has affected every aspect of life globally, and this has brought along several changes in market conditions. The rapidly changing market scenario and initial and future assessments of the impact are covered in the report. Experts have studied the historical data and compared it with the changing market situation. The report covers all the information required by new entrants as well as existing players to gain deeper insight.

Request a Sample Copy of these Reports@ https://www.qyreports.com/request-sample/?report-id=223440

Furthermore, the statistical survey in the report focuses on product specifications, costs, production capacities, marketing channels, and market players. Upstream raw materials, downstream demand analysis, and a list of end-user industries have been studied systematically, along with the suppliers in this market. The product flow and distribution channel have also been presented in this research report.

Key Players:

McKinsey, Boston, IBM Watson, ALTEN Calsoft Labs, Axtria Ingenious Insights, GRAIL, Inc., Aktana, Owkin, Amgen, BASF, Bayer, Lilly, Novartis, Pfizer, Sunovion, and WuXi.

By Regions:

North America (the US, Canada, and Mexico)
Europe (the UK, Germany, France, and Rest of Europe)
Asia Pacific (China, India, and Rest of Asia Pacific)
Latin America (Brazil and Rest of Latin America)
Middle East & Africa (Saudi Arabia, the UAE, South Africa, and Rest of Middle East & Africa)

Ask for Discount on this Premium Report@ https://www.qyreports.com/ask-for-discount/?report-id=223440

The Machine Learning in Pharmaceutical Market Report Consists of the Following Points:

Enquiry Before Buying@ https://www.qyreports.com/enquiry-before-buying/?report-id=223440

In conclusion, the Machine Learning in Pharmaceutical Market report is a reliable source for accessing research data that is projected to exponentially accelerate your business. The report provides information such as economic scenarios, benefits, limits, trends, market growth rate, and figures. A SWOT analysis is also incorporated in the report, along with investment feasibility and return on investment analysis.

About QYReports:

We at QYReports, a leading market research report publisher, cater to more than 4,000 prestigious clients worldwide, meeting their customized research requirements in terms of market data size and its application. Our list of customers includes renowned Chinese companies, multinational companies, SMEs and private equity firms. Our business study covers more than 30 industries, offering you accurate, in-depth and reliable market insight, industry analysis and structure. QYReports specializes in the forecasts needed for investing in and executing new projects globally and in Chinese markets.

Contact Us:

Name: Jones John

Contact number: +1-510-560-6005

204, Professional Center,

7950 NW 53rd Street, Miami, Florida 33166

sales@qyreports.com

http://www.qyreports.com


Washington state governor green-lights facial-recog law championed by… guess who: Yep, hometown hero Microsoft – The Register

Roundup Here's your quick-fire summary of recent artificial intelligence news.

DeepMind has built a reinforcement-learning bot capable of playing 57 classic Atari 2600 games about as well as the average human.

Why 57, you may ask? The Atari 2600 console was launched in 1977 and has a library of hundreds of games. In 2012, a group of computer scientists came up with The Arcade Learning Environment (ALE), a toolkit consisting of 57 old Atari games to test reinforcement-learning agents.

AI researchers have been using this collection to benchmark the progress of their game-playing bots ever since. The average score reached on all 57 games has steadily increased with the development of more complex machine-learning systems, but most models have struggled to play the most difficult ones, such as Montezuma's Revenge, Pitfall, Solaris, and Skiing.

Reinforcement learning attempts to teach AI bots how to complete a specific task, such as playing a game, without explicitly telling it the rules. The agents thus have to learn through trial and error, and are guided by rewards. Reaching high scores means more delicious rewards, and over time, the computer learns to make good moves to play the game well.

The researchers have improved their system by employing different types of algorithms and tricks. The bot, dubbed Agent57, is better equipped in dealing with the most difficult games because it's been programmed to be able to explore its environment more efficiently even when the rewards are sparse.

A number of steps have to be executed in the games before a reward is given, so it's not immediately obvious how to play Montezuma's Revenge, Pitfall, Solaris, and Skiing, compared to games like Pong that have a more immediate reward feedback system.
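One common way to encourage exploration when rewards are that sparse is to add an intrinsic "novelty" bonus for visiting rarely seen states. The Python sketch below illustrates only that general idea with a simple count-based bonus; Agent57's actual exploration machinery is considerably more sophisticated.

```python
# Toy illustration of an exploration bonus for sparse-reward games: novel states pay extra.
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def reward_with_bonus(state, extrinsic_reward, beta=0.1):
    visit_counts[state] += 1
    bonus = beta / math.sqrt(visit_counts[state])   # shrinks as the state becomes familiar
    return extrinsic_reward + bonus

print(reward_with_bonus("room_1", 0.0))   # first visit: a bonus even though the game gave 0
print(reward_with_bonus("room_1", 0.0))   # repeat visits earn less, nudging the agent onward
```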

The boffins reckon that mastering games in the ALE dataset is a good sign that a system is more generally intelligent and robust, and so might be applied in the real world.

"The ultimate goal is not to develop systems that excel at games, but rather to use games as a stepping stone for developing systems that learn to excel at a broad set of challenges," Deepmind wrote.

You can read more about the numerous nifty techniques that were used to improve Agent57 in more detail here [PDF].

The governor of the US state of Washington, Jay Inslee, has signed into law a piece of legislation that regulates the use of facial-recognition systems.

While the likes of San Francisco and Oakland in California, and Somerville in Massachusetts, have banned law enforcement from using facial-recognition technology, Washington has gone for a softer approach. That's not too much of a surprise, considering the bill [PDF] was sponsored by Microsoft, and the US state is the home of the Windows giant. Microsoft is keen for organizations to use its machine-learning services for things like facial and object recognition.

"This legislation represents a significant breakthrough the first time a state or nation has passed a new law devoted exclusively to putting guardrails in place for the use of facial recognition technology," Redmond's president, Brad Smith, said.

Law enforcement agencies in Washington will be allowed to deploy facial-recognition systems, but will have to be more transparent about using it. First, they have to file a "notice of intent", a report that details the service the cops want to use from a particular vendor, and what it's being used for. The document also has to show what kind of data is collected and generated, what decisions the software makes, and where it will be deployed. The notice has to be given to a "legislative authority" that will be made public.

On the vendor side of things, companies will have to provide an application programming interface (API) to enable an independent party to audit the algorithm's performance. They must also report "any complaints or reports of bias regarding the service".

Smith gushed: "Through some of the new law's most important provisions, Washington state has become the first jurisdiction to enact specific facial recognition rules to protect civil liberties and fundamental human rights. While the public will rightly assess ways to improve upon this approach over time, it's worth recognizing at the outset the thorough approach the Washington state legislature has adopted."

Meanwhile, the American Civil Liberties Union has been fighting for a moratorium on facial recognition, demanding a temporary ban on the technology until Congress passes stricter laws that protect an individual's rights.

The Washington law is due to go into effect next year.

Remember Amazon's little AI music-generating keyboard DeepComposer that was touted at its annual re:Invent developer conference last year?

Well, now you can finally play with it. Don't worry if you don't have an actual physical keyboard, Amazon has released a digital version alongside the software needed to create music via machine learning.

DeepComposer trains generative adversarial networks (GANs) to create new jingles based on a particular style of music. The software is designed to help enthusiasts who don't necessarily have a deep knowledge of machine learning or music to learn about GANs in more detail.

It gives step-by-step instructions on how to build, train, and test GANs without having to write any code. Users create a little melody on the digital keyboard and pick the type of genre, and the GAN fills in the blanks, transforming the simple tune into computer generated music. The physical keyboard is available too, but only for the US.
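For readers curious what the adversarial setup behind a GAN looks like in code, here is a compact PyTorch sketch in which a generator learns to produce samples that a discriminator can no longer tell apart from toy 1-D data. It only illustrates the general GAN training loop; it is not Amazon's DeepComposer code, which works on musical material rather than numbers.

```python
# Minimal sketch of a GAN: a generator and a discriminator trained against each other.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> "real?"
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: Gaussian centred at 3
    fake = G(torch.randn(64, 4))

    # Discriminator learns to tell real from generated samples.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 4)).mean().item())       # should drift toward 3.0 as training succeeds
```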

You can find out more about that here.



The quantum computing market valued $507.1 million in 2019, from where it is projected to grow at a CAGR of 56.0% during 2020-2030 (forecast period),…

NEW YORK, April 6, 2020 /PRNewswire/ -- Quantum Computing Market Research Report: By Offering (Hardware, Software, Service), Deployment Type (On-Premises, Cloud-Based), Application (Optimization, Simulation and Data Problems, Sampling, Machine Learning), Technology (Quantum Dots, Trapped Ions, Quantum Annealing), Industry (BFSI, Aerospace & Defense, Manufacturing, Healthcare, IT & Telecom, Energy & Utilities) Industry Share, Growth, Drivers, Trends and Demand Forecast to 2030

Read the full report: https://www.reportlinker.com/p05879070/?utm_source=PRN

The quantum computing market was valued at $507.1 million in 2019, from where it is projected to grow at a CAGR of 56.0% during 2020-2030 (the forecast period), to ultimately reach $64,988.3 million by 2030. Machine learning (ML) is expected to progress at the highest CAGR during the forecast period among all application categories, owing to the fact that quantum computing is being integrated into ML to improve the latter's use cases.

Government support for the development and deployment of the technology is a prominent trend in the quantum computing market, with companies as well as public bodies realizing the importance of a coordinated funding strategy. For instance, the National Quantum Initiative Act, which became a law in December 2018, included a funding of $1.2 billion from the U.S. House of Representatives for the National Quantum Initiative Program. The aim behind the funding was to facilitate the development of technology applications and quantum information science, over a 10-year period, by setting its priorities and goals.

Moreover, efforts are being made to come up with standards for quantum computing technology. Among the numerous standards being developed by the IEEE Standards Association Quantum Computing Working Group are benchmarks and performance metrics, which would help in analyzing the performance of quantum computers against that of conventional computers. Other noteworthy standards are those related to nomenclature and definitions, in order to create a common language for quantum computers.

In 2019, the quantum computing market was dominated by the quantum annealing category, on the basis of technology. This is because the physical challenges in its development have been overcome, and it is now being deployed in larger systems. That year, the banking, financial services, and insurance (BFSI) division held the largest share in the market, on account of the rapid expansion of this industry. Additionally, banks and other financial institutions are quickly deploying this technology to make their business process streamlined as well as secure their data.

By 2030, Europe and North America are expected to account for more than 78.0% in the quantum computing market, as Canada, the U.S., the U.K., Germany, and Russia are witnessing heavy investments in the field. For instance, the National Security Agency (NSA), National Aeronautics and Space Administration (NASA), and Los Alamos National Laboratory are engaged in quantum computing technology development. Additionally, an increasing number of collaborations and partnerships are being witnessed in these regions, along with the entry of several startups.

The major players operating in the highly competitive quantum computing market are Telstra Corporation Limited, International Business Machines (IBM) Corporation, Silicon Quantum Computing, IonQ Inc., Alphabet Inc., Huawei Investment & Holding Co. Ltd., Microsoft Corporation, Rigetti & Co. Inc., Zapata Computing Inc., D-Wave Systems Inc., and Intel Corporation. Google LLC, the main operating subsidiary of Alphabet Inc. is establishing the Quantum AI Laboratory, in collaboration with the NSA, wherein the quantum computers developed by D-Wave Systems Inc. are being used.

Read the full report: https://www.reportlinker.com/p05879070/?utm_source=PRN

About Reportlinker ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

__________________________ Contact Clare: clare@reportlinker.com US: (339)-368-6001 Intl: +1 339-368-6001

View original content:http://www.prnewswire.com/news-releases/the-quantum-computing-market-valued-507-1-million-in-2019--from-where-it-is-projected-to-grow-at-a-cagr-of-56-0-during-20202030-forecast-period-to-ultimately-reach-64-988-3-million-by-2030--301036177.html

SOURCE Reportlinker


DeepMind's AI models transition of glass from a liquid to a solid – VentureBeat

In a paper published in the journal Nature Physics, DeepMind researchers describe an AI system that can predict the movement of glass molecules as they transition between liquid and solid states. The techniques and trained models, which have been made available in open source, could be used to predict other qualities of interest in glass, DeepMind says.

Beyond glass, the researchers assert the work yields insights into general substance and biological transitions, and that it could lead to advances in industries like manufacturing and medicine. "Machine learning is well placed to investigate the nature of fundamental problems in a range of fields," a DeepMind spokesperson told VentureBeat. "We will apply some of the learnings and techniques proven and developed through modeling glassy dynamics to other central questions in science, with the aim of revealing new things about the world around us."

Glass is produced by cooling a mixture of high-temperature melted sand and minerals. It acts like a solid once cooled past its crystallization point, resisting tension from pulling or stretching. But at the microscopic level, the molecules structurally resemble those of an amorphous liquid.

Solving glass's physical mysteries has motivated an annual conference by the Simons Foundation, which last year hosted a group of 92 researchers from the U.S., Europe, Japan, Brazil, and India in New York. In the three years since the inaugural meeting, they've managed breakthroughs like supercooled liquid simulation algorithms, but they've yet to develop a complete description of the glass transition and a predictive theory of glass dynamics.

That's because there are countless unknowns about the nature of the glass formation process, like whether it corresponds to a structural phase transition (akin to water freezing) and why viscosity during cooling increases by a factor of a trillion. It's well understood that modeling the glass transition is a worthwhile pursuit; the physics behind it underlies behavior modeling, drug delivery methods, materials science, and food processing. But the complexities involved make it a hard nut to crack.

Fortunately, there exist structural markers that help identify and classify phase transitions of matter, and glasses are relatively easy to simulate and input into particle-based models. As it happens, glasses can be modeled as particles interacting via a short-range repulsive potential, and this potential is relational (because only pairs of particles interact) and local (because only nearby particles interact with each other).

The DeepMind team leveraged this to train a graph neural network a type of AI model that directly operates on a graph, a non-linear data structure consisting of nodes (vertices) and edges (lines or arcs that connect any two nodes) to predict glassy dynamics. They first created an input graph where the nodes and edges represented particles and interactions between particles, respectively, such that a particle was connected to its neighboring particles within a certain radius. Two encoder models then embedded the labels (i.e., translated them to mathematical objects the AI system could understand). Next, the edge embeddings were iteratively updated, at first based on their previous embeddings and the embeddings of the two nodes to which they were connected.

After all of the graph's edges were updated in parallel using the same model, another model refreshed the nodes based on the sum of their neighboring edge embeddings and their previous embeddings. This process was repeated several times to allow local information to propagate through the graph, after which a decoder model extracted mobilities (measures of how much a particle typically moves) for each particle from the final embeddings of the corresponding node.
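The numpy sketch below mirrors the structure described above in a heavily simplified form: particles become graph nodes, pairs within a cutoff radius become edges, edge and node embeddings are updated over a few rounds of message passing, and a decoder reads out one mobility value per particle. The random weights, sizes and cutoff are illustrative assumptions; DeepMind's actual model learns its parameters and uses much deeper networks.

```python
# Simplified message-passing sketch: encode nodes and edges, update them iteratively, decode.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 20, 8
positions = rng.uniform(0, 5, size=(n, 3))

# Build the graph: connect particles closer than a cutoff radius.
dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
senders, receivers = np.nonzero((dists < 2.0) & (dists > 0))

# Encoders: embed node and edge features (positions and displacement vectors).
W_node, W_edge = rng.normal(size=(3, dim)), rng.normal(size=(3, dim))
nodes = np.tanh(positions @ W_node)
edges = np.tanh((positions[receivers] - positions[senders]) @ W_edge)

W_e = rng.normal(size=(3 * dim, dim))
W_n = rng.normal(size=(2 * dim, dim))
for _ in range(3):                         # a few rounds of message passing
    edges = np.tanh(np.concatenate([edges, nodes[senders], nodes[receivers]], axis=1) @ W_e)
    agg = np.zeros((n, dim))
    np.add.at(agg, receivers, edges)       # sum incoming edge messages per node
    nodes = np.tanh(np.concatenate([nodes, agg], axis=1) @ W_n)

w_dec = rng.normal(size=dim)
mobility = nodes @ w_dec                   # decoder: one predicted mobility per particle
print(mobility.shape)                      # (20,)
```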

The team validated their model by constructing several data sets corresponding to mobilities predictions on different time horizons for different temperatures. After applying graph networks to the simulated 3D glasses, they found that the system strongly outperformed both existing physics-inspired baselines and state-of-the-art AI models.

They say the network was extremely good at short times and remained well matched up to the relaxation time of the glass (which would be up to thousands of years for actual glass), achieving a 96% correlation with the ground truth for short times and a 64% correlation for the relaxation time of the glass. In the latter case, that's an improvement of 40% compared with the previous state of the art.

In a separate experiment, to better understand the graph model, the team explored which factors were important to its success. They measured the sensitivity of the prediction for the central particle when another particle was modified, enabling them to judge how large of an area the network used to extract its prediction. This provided an estimate of the distance over which particles influenced each other in the system.

They report there's compelling evidence that growing spatial correlations are present upon approaching the glass transition, and that the network learned to extract them. "These findings are consistent with a physical picture where a correlation length grows upon approaching the glass transition," wrote DeepMind in a blog post. "The definition and study of correlation lengths is a cornerstone of the study of phase transitions in physics."

DeepMind claims the insights gleaned could be useful in predicting other qualities of glass; as alluded to earlier, the glass transition phenomenon manifests in more than window (silica) glasses. The related jamming transition can be found in ice cream (a colloidal suspension), piles of sand (granular materials), and cell migration during embryonic development, as well as social behaviors such as traffic jams.

Glasses are archetypal of these kinds of complex systems, which operate under constraints where the position of elements inhibits the motion of others. Its believed that a better understanding of them will have implications across many research areas. For instance, imagine a new type of stable yet dissolvable glass structure that could be used for drug delivery and building renewable polymers.

"Graph networks may not only help us make better predictions for a range of systems," wrote DeepMind, "but indicate what physical correlates are important for modeling them," adding that machine learning systems might eventually be able to assist researchers in deriving fundamental physical theories, ultimately helping to augment, rather than replace, human understanding.
