This Cryptocurrency Casino Is the Latest Terrible Idea by Atari – CCN.com

Atari has a troubled history. While the company was the driving force behind the early days of the gaming industry, they've struggled since then.

They've recently tried to branch out into various areas. Atari has a hotel chain in the works, which I'm sure will definitely still be happening in this pandemic. But their lousiest idea to date has to be their newest venture.

Atari has just launched a cryptocurrency casino. What could possibly go wrong?

Has Atari completely lost their minds? It's one thing to try and push out a modern VCS system. It's quite another to try to run hotels and an online casino with no history of doing either.

Atari isn't even producing their new console all that well. It's gone through numerous delays, and they can't even seem to pay the designers properly. The designer of the VCS is suing them.

Is this a company you want to trust with a casino? Let alone with access to your crypto wallets? If mismanagement of the Atari VCS is annoying, imagine how dangerous mismanagement of an online casino could be.

Since the Atari casino isn't available yet, it isn't apparent how many people will buy into this idea. I have to assume that the folks who are deeply involved in crypto have smarter heads on their shoulders than to trust a company that has proven itself unreliable.

Perhaps the more baffling proposition is investing in Atari Token. This is their own crypto token, which I'm sure will only go up in value. Just like their stock, which has only gone up over the past 20 years. (For legal reasons, that was sarcasm.)

Maybe I'm wrong. Perhaps the Atari Token and Casino will launch and both be huge successes. I don't plan on being an early adopter, or a late adopter for that matter.

Disclaimer: The opinions expressed in this article do not necessarily reflect the views of CCN.com.

This article was edited by Aaron Weaver.

Last modified: April 4, 2020 11:02 AM UTC


YouTube Scares Off Cryptocurrency Supporters by Banning their Content – Coin Idol

Apr 05, 2020 at 09:30 // News

For the past few years, YouTube has had a policy of banning and even deleting channels promoting, analyzing, or talking about bitcoin and other cryptocurrencies. While the platform states it is trying to protect users against scams, some community members say the practice closely resembles ordinary censorship.

For instance, in an operation dubbed the "crypto-purge," YouTube banned a number of videos, citing rule breaches. Among others, the operation affected a blogger named Chris Dunn, who had over 200,000 subscribers. Coinidol.com, a world blockchain outlet, reported that Chris told his subscribers it was the right time for the public to start using a decentralized web, since he had not actually breached anything and the ban was nothing but a manifestation of hostility towards the crypto community.

Other channels such as Ivan on Tech and Cryptocurrency News Channel were also not spared, and their owners voiced concerns that their videos were deleted for no good reason. YouTube, however, seemed fully satisfied with the warnings it sent to these two channels via strike notices.

YouTube argues that its decision to take down these channels is reasonable because scammers continuously take advantage of the community. For instance, a group of fraudsters recently started running fake live streams that illegally use celebrities' names, including Elon Musk and Daniel Craig.

Usually, such tactics are used to attract a user's attention and make fraudulent schemes look more credible. As a result, people more readily trust projects that involve famous people and give away their money more easily.

While the above case seems a good argument in defence of YouTube's policy, its actions towards the other cryptocurrency channels mentioned look more like censorship. Some channels simply try to raise awareness and educate people about blockchain and cryptocurrencies without promoting any projects or schemes. In such cases, YouTube's actions might be interpreted as hostility towards the community.

The policy of banning such content has driven many digital currency enthusiasts to seek safe haven in decentralized alternatives to YouTube, where they can bypass censorship and distribute their content. Bloggers make money off their content, so banning it causes them financial losses. Naturally, no one wants that.

On April 1, a YouTube channel named Bitcoin for Beginners uploaded a video reviewing some available decentralized alternatives, considering their usability, track records, and user interfaces. But the main advantage of such platforms is their decentralized nature: one can upload content without fear of being banned, as no single party controls access to it.

In his video, the author mentioned the platforms LBRY, DTube, BitChute, PeerTube and BitTube as reliable, decentralized alternatives for anyone considering quitting YouTube.


Before coronavirus, there was a ring of suspicion around Tablighis – Business Standard

WikiLeaks memos, based on the interrogation of al-Qaeda operatives detained at Guantanamo Bay, quoted them as saying that they used a New Delhi-based organisation, Tablighi Jamaat, as cover to obtain travel documents and shelter. The leaked memos said at least three detained persons stayed at the organisation's facilities in and around Delhi.

The Tablighi Jamaat has been in the news recently after a congregation at its Nizamuddin Markaz in New Delhi led to a sudden rise in the number of positive coronavirus cases across India. Similar events in several Southeast Asian countries led to many Covid-19 cases there.

The records revealed by WikiLeaks contain interrogation reports and analysis of 779 inmates of the US military prison in Cuba.

The US records identify the Jamaat Tabligh (as the name appears in the records; JT) as a proselytising organisation that willingly supports terrorists. Also, according to reports, al-Qaeda used the JT to facilitate and fund international travels of its members.

Tablighi Jamaat authorities denied the charge and said that their facilities were open to all. Questioning the authenticity of the WikiLeaks records, the organisation said: "It is known that such statements are forced to be made under duress."

What we know

When it started, the Tablighi Jamaat was neither engaged in supporting nor promoting Islamic radicalism. It was, in fact, a reformist organisation. Academics describe it as an apolitical devotional movement stressing individual faith, introspection, and spiritual development. But somewhere along the way, the organisation veered away from its original purpose.

We know that the Tablighi Jamaat was begun by the prominent Deobandi cleric and scholar Maulana Muhammad Ilyas Kandhalawi (1885-1944) in 1927 in Mewat, not far from Delhi. Part of Ilyas's impetus for founding the Tablighi Jamaat was to counter the inroads being made by Hindu missionaries. The movement rejected modernity as antithetical to Islam, excluded women, and preached that Islam must subsume other religions. Apart from the Quran, the only literature Tablighis are required to read is the Tablighi Nisab, seven essays penned by a companion of Ilyas in the 1920s.

A lesser-known fact about the Tablighi Jamaat is that it's not a monolith: "One subsection believes they should pursue jihad through conscience (jihad bin nafs), while a more radical wing advocates jihad through the sword (jihad bin saif)," says Alexander R Alexiev, one of the best-known experts on the organisation. Why it captured the Islamic imagination (when it seems, superficially, no different from the Wahhabi-Salafi doctrine followed by most Sunnis) seems to have been its austerity, emphasis on conversion, and spirit of service. Saudi Arabia could have seen the movement as a threat but instead co-opted it, funded it and praised its spirit, advising others to emulate it.

Jamaat's growth and development

The real impetus for the Tablighi Jamaat came from ruling families in Pakistan, especially Nawaz Sharif, whose father was a big supporter of the organisation. Its facility at Raiwind, Pakistan, is a well-known recruiting ground for military training after recruits finish their missionary training.

Alexiev's interviews and research on the organisation reveal that the Tablighi Jamaat was instrumental in founding Harkat ul-Mujahideen. Founded at Raiwind in 1980, the group drew almost all of its original members from Tablighi ranks. Infamous for the December 1998 hijacking of an Air India passenger jet and the 2002 suicide attack on a bus carrying French engineers in Karachi, Harkat members make no secret of their ties.

Alexiev claims perhaps 80 per cent of the Islamist extremists in France come from Tablighi ranks, prompting French intelligence officers to call the Tablighi Jamaat "the antechamber of fundamentalism." US counterterrorism officials share this view. "We have a significant presence of the Tablighi Jamaat in the United States," the deputy chief of the FBI's international terrorism section said in 2003, "and we have found that al-Qaeda used them for recruiting, now and in the past."

Little is known about the stewardship of the organisation, except that all its leaders since Ilyas have been related to him by blood or marriage. Upon Ilyas's death in 1944, his son, Maulana Muhammad Yusuf (1917-65), assumed leadership of the movement, expanding its reach and influence. Yusuf and his successor, Inamul Hassan (1965-95), transformed the Jamaat into a truly transnational movement with a renewed emphasis on the conversion of non-Muslims, a mission that continues to this day.


Julian Assange was ‘handcuffed 11 times and stripped naked …

Julian Assange was handcuffed 11 times, stripped naked twice and had his case files confiscated after the first day of his extradition hearing, according to his lawyers, who complained of interference in his ability to take part.

Their appeal to the judge overseeing the trial at Woolwich crown court in south-east London was also supported by legal counsel for the US government, who said it was essential the WikiLeaks founder be given a fair trial.

Edward Fitzgerald QC, acting for Assange, said the case files, which the prisoner had been reading in court on Monday, were confiscated by guards when he returned to prison later that night, and that he was put in five different cells.

The judge, Vanessa Baraitser, replied that she did not have the legal power to comment or rule on Assange's conditions but encouraged the defence team to formally raise the matter with the prison.

The details emerged on the second day of Assange's extradition hearing, during which his legal team denied that he had knowingly placed lives at risk by publishing unredacted US government files.

The court was told WikiLeaks had entered into a collaboration with the Guardian, El País, the New York Times and other media outlets to make redactions to 250,000 leaked cables in 2010 and publish them.

Mark Summers QC claimed the unredacted files had been published because a password to this material had appeared in a Guardian book on the affair. "The gates got opened not by Assange or WikiLeaks but by another member of that partnership," he said.

The Guardian denied the claim.

"The Guardian has made clear it is opposed to the extradition of Julian Assange. However, it is entirely wrong to say the Guardian's 2011 WikiLeaks book led to the publication of unredacted US government files," a spokesman said.

"The book contained a password which the authors had been told by Julian Assange was temporary and would expire and be deleted in a matter of hours. The book also contained no details about the whereabouts of the files. No concerns were expressed by Assange or WikiLeaks about security being compromised when the book was published in February 2011. WikiLeaks published the unredacted files in September 2011."

The Guardian's former investigations editor David Leigh, who wrote the book with Luke Harding, said: "It's a complete invention that I had anything to do with Julian Assange's own publication decisions. His cause is not helped by people making things up."

Assange, 48, is wanted in the US to face 18 charges of attempted hacking and breaches of the Espionage Act. They relate to the publication a decade ago of hundreds of thousands of diplomatic cables and files covering areas including US activities in Afghanistan and Iraq.

The Australian, who could face a 175-year prison sentence if found guilty, is accused of working with the former US army intelligence analyst Chelsea Manning to leak classified documents.

As well as rejecting allegations that Assange had put the lives of US sources in danger, much of the hearing was taken up with defence counter-arguments to the US case that he helped the former intelligence analyst Chelsea Manning to crack a scrambled password stored on US Department of Defense computers in order to continue sending leaked material to WikiLeaks.

"You can accurately describe this chapter of the case as lies, lies and more lies," Summers told the court at the outset of the day.

Manning already had access to the information and did not need to decode the scrambled password, or hash value. Nor could she have done so, as alleged, in order to gain someone else's password, because access to the system was recorded on the basis of IP addresses, Summers said.

As for the US contention that Assange had solicited leaks from Manning, a whistleblower who served more than six years of a 35-year military prison sentence before it was commuted by Barack Obama, Summers drew on Manning's insistence that she was moved by her conscience.

James Lewis QC responded for the US government by accusing the defence of consistently misrepresenting the US indictment of Assange, adding: "What he [Summers] is trying to do is consistently put up a straw man and then knock it down."

For example, on the question of cracking the password hash, he emphasised that the US was making a general allegation that doing so would make it more difficult for the authorities to identify the source of the leaks.

Lewis rejected claims made on Monday by the defence that the US had deliberately ratcheted up the charges against Assange in response to the fact that Swedish authorities announced in May 2019 their intention to reopen the investigation of Assange for alleged sexual offences and issue a European arrest warrant.

"The inference that charging Mr Assange with publishing the names of sources was simply ratcheting up the charges is defeated by the objective facts that the [US] grand jury found and indicted him on," he said.

"It just does not follow that we will ratchet up the charges in case there might be a competition. We have a clear, unequivocal and legal basis for charging him and that is the end of it."

The hearing continues.


UK govt won’t release Assange amid virus – The Canberra Times



WikiLeaks founder Julian Assange isn't eligible to be temporarily released from jail as part of the UK government's plan to mitigate coronavirus in prisons.

There are now 88 prisoners and 15 staff who have tested positive for COVID-19 in the country and more than a quarter of prison staff are absent or self-isolating due to the pandemic.

Justice Secretary Robert Buckland has announced that selected low-risk offenders, who are within weeks of their release dates, will be GPS-tagged and temporarily freed to ease pressure on the National Health Service.

"This government is committed to ensuring that justice is served to those who break the law," he said in a statement on Saturday.

"But this is an unprecedented situation because if coronavirus takes hold in our prisons, the NHS could be overwhelmed and more lives put at risk."

The Ministry of Justice confirmed with AAP that Julian Assange, who is being held on remand in Belmarsh prison, will not be temporarily released because he's not serving a custodial sentence and therefore not eligible.

The government is also working to expedite sentencing hearings for those on remand to reduce crowding in jails, but the Australian won't be affected by that measure either.

The WikiLeaks founder is only one week into his four-week US extradition hearing and at this stage it's uncertain whether it will resume as planned at Woolwich Crown Court on May 18.

Assange's next procedural hearing is set for the Westminster Magistrates Court on Tuesday.

He applied for bail last week, with his lawyers citing concerns about the risk of coronavirus, but he was knocked back by District Judge Vanessa Baraitser.

In her ruling, she said the Australian had skipped bail in the past and taken refuge in the Ecuadorian embassy in London for almost seven years, making him a flight risk.

The US government is trying to extradite Assange to face 17 charges of violating the Espionage Act and one of conspiring to commit computer intrusion over the leaking and publishing of thousands of classified US diplomatic and military files.

Some of those files revealed alleged US war crimes in Iraq and Afghanistan.

The US charges carry a total sentence of 175 years' imprisonment.

Australian Associated Press


Self-supervised learning is the future of AI – The Next Web

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or whether we'll end up adopting a totally different strategy). But here's what we know about LeCun's master plan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.
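The labeled-data requirement LeCun points to shows up even in the smallest possible classifier. The sketch below is a toy illustration, not anything from his talk: a one-feature logistic model in pure Python (standing in for a deep network) that only learns because every training example carries a human-supplied label.

```python
import math

# Toy labeled dataset: each example is (feature, class_label).
# The label is the human annotation that supervised learning depends on.
data = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]

w, b = 0.0, 0.0  # parameters adjusted during training
for _ in range(2000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability of class 1
        grad = p - y                          # gradient of the cross-entropy loss
        w -= 0.5 * grad * x
        b -= 0.5 * grad

def predict(x):
    return int(1 / (1 + math.exp(-(w * x + b))) > 0.5)

print(predict(0.2), predict(0.8))  # → 0 1
```

Remove the labels and the update rule has no error signal to descend; that dependence, scaled to millions of images, is exactly the bottleneck the article describes.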

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, and unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as, with some caveats, reviewing the huge amount of content being posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial and error how to generate the most rewards (e.g., win more games).
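That trial-and-error loop can be sketched in a few lines. Everything below (the five-cell corridor environment, the reward value, the hyperparameters) is invented for illustration; it is a standard tabular Q-learning toy, not anything from LeCun's talk. The agent's only feedback is a scalar reward, and it must stumble onto the goal many times before a policy emerges.

```python
import random

random.seed(0)

# Toy environment: a 5-cell corridor. The agent starts at cell 0 and, purely
# by trial and error, learns that moving right reaches the reward at cell 4.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; action 0=left, 1=right

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: explore 10% of the time, otherwise act greedily.
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0  # the only feedback: a single scalar
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

print([0 if left > right else 1 for left, right in Q[:GOAL]])  # learned policy
```

Even this trivial problem needs hundreds of episodes; the cost of the same loop in StarCraft-sized state spaces is what confines such research to well-funded labs.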

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as needed. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, much more than a human can play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another.

Reinforcement learning really shows its limits when it must solve real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? And it's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after birth. While there's debate over how much of these capabilities is hardwired into the brain and how much is learned, what is certain is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and System 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 refers to the kind of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more deliberate kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody yet has a complete answer to which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific, interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class or model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
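The "fill in the blanks" recipe can be illustrated without any neural network at all: the training signal is manufactured from the raw data itself by hiding pieces of it, with no human labels anywhere. In this toy sketch (the corpus is made up, and a simple bigram counter stands in for the predictive model), the system learns to restore a masked word from its left-hand neighbor.

```python
from collections import Counter, defaultdict

# Raw, unlabeled text: the data itself supplies the supervision.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Self-supervision: treat each word as a "blank" to be predicted from context.
bigrams = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigrams[prev_word][next_word] += 1

def fill_blank(prev_word):
    """Predict the masked word that most often follows prev_word."""
    return bigrams[prev_word].most_common(1)[0][0]

print(fill_blank("the"))  # → "cat" (the most frequent continuation here)
```

A real self-supervised model replaces the counter with a neural network and a much wider context window, but the principle is the same: no annotator ever touched the training data.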

The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.

More recently, AI researchers have shown that Transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of Transformers might enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, Transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.
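The "giant vector of probabilities over the entire dictionary" is just a softmax over per-word scores. A minimal sketch with a four-word vocabulary and made-up model scores (logits):

```python
import math

# Hypothetical vocabulary and model scores for a single masked position.
vocab = ["cat", "dog", "mat", "ran"]
logits = [2.0, 1.0, 0.1, -1.0]  # made-up outputs; higher = more likely

# Softmax turns arbitrary scores into one probability per dictionary word.
exps = [math.exp(z) for z in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The result is a valid distribution: non-negative and summing to 1,
# which is what makes uncertainty over discrete tokens easy to represent.
print(max(zip(probs, vocab)))
```

With a real model the vocabulary has tens of thousands of entries, but the representation is the same finite vector; for continuous video frames no analogous finite vector exists, which is the difficulty the next paragraphs describe.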

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.
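Why averaging produces blur can be seen with a two-pixel "frame" and two equally likely sharp futures. The numbers below are purely illustrative: under a mean-squared-error loss, the midway (blurry) prediction scores better than committing to either sharp outcome, so that is what a regression-trained network converges to.

```python
# Two equally likely sharp "next frames", each a pair of pixel intensities.
futures = [[0.0, 1.0], [1.0, 0.0]]

def mse_to_all(pred):
    """Total squared error of a single prediction against every possible future."""
    return sum((p - f) ** 2 for future in futures for p, f in zip(pred, future))

blurry = [0.5, 0.5]  # the average of the two sharp outcomes

# The averaged prediction beats either sharp guess under MSE...
print(mse_to_all(blurry), mse_to_all([0.0, 1.0]))  # → 1.0 2.0
# ...so the loss pushes the network toward the blurry mean, not a crisp frame.
```

Scaled up to real video, the "average of all plausible futures" is a smeared-out image, which is exactly the blurry output the article mentions.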

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored method of approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z, which computes the compatibility between a variable X (say, the current frame in a video) and a prediction Y (the future of the video), and to select the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
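The mechanics can be sketched in a few lines. This is a toy, hand-written energy function for illustration only; LeCun's actual models learn the energy with neural networks. Low energy means high compatibility, and the latent variable Z absorbs the unpredictable aspects of the future:

```python
# Toy latent-variable energy-based model (illustrative only; the energy
# function here is hand-written, not learned as in LeCun's work).
def energy(x, y, z):
    # y should continue x, offset by the latent z; large z is penalized.
    return (y - (x + z)) ** 2 + 0.1 * z ** 2

def predict(x, candidates_y, candidates_z):
    # Minimize over the latent variable for each candidate prediction,
    # then return the prediction with the lowest (best) energy.
    def best_energy(y):
        return min(energy(x, y, z) for z in candidates_z)
    return min(candidates_y, key=best_energy)

zs = [i / 10 for i in range(-20, 21)]   # latent grid
ys = [i / 10 for i in range(0, 41)]     # candidate predictions
print(predict(1.0, ys, zs))  # 1.0 -- the most compatible continuation
```

The search over Z is what lets the model entertain many possible futures without collapsing them into one blurry average.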

"I think self-supervised learning is the future. This is what's going to allow our AI systems, our deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information outputted by the AI. In reinforcement learning, the AI system is trained at the scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output improves to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how the uncertainty problem works, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC

Read the rest here:
Self-supervised learning is the future of AI - The Next Web

University of Cambridge researchers develop machine learning app to collect the sounds of Covid-19 – Cambridge Independent

University of Cambridge researchers have developed a machine learning app to collect the sounds of Covid-19

Researchers at the University of Cambridge have developed an app that will collect the sounds of Covid-19.

The Covid-19 Sounds App will be used to gain data to develop machine learning algorithms that could automatically detect whether a person is suffering from the disease.

It would be based on the sound of their voice, their breathing and coughing.

"There's still so much we don't know about this virus and the illness it causes, and in a pandemic situation like the one we're currently in, the more reliable information you can get, the better," said Professor Cecilia Mascolo from Cambridge's department of computer science and technology, who led the development of the app.

Because Covid-19 is a respiratory condition, the sounds made by people with it, including voice, breathing and cough sounds, are very specific.

A large, crowdsourced data set will be useful in developing machine learning algorithms that could be used for automatic detection of the condition.
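To give a sense of what such algorithms work from, the sketch below computes two classic, cheap audio features on synthetic waveforms. This is a hypothetical illustration, not the actual Covid-19 Sounds pipeline: zero-crossing rate and short-time energy are standard starting points for telling smooth breathing apart from percussive cough bursts.

```python
import math

# Illustrative feature extraction for respiratory audio (hypothetical;
# not the University of Cambridge team's actual method).
def zero_crossing_rate(samples):
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

def short_time_energy(samples):
    return sum(s * s for s in samples) / len(samples)

# Synthetic stand-ins: a slow sinusoidal "breath" vs. a noisy, rapidly
# alternating, decaying "cough" burst.
breath = [math.sin(2 * math.pi * 2 * t / 100) for t in range(100)]
cough = [((-1) ** t) * math.exp(-t / 30) for t in range(100)]

# The cough crosses zero far more often than the breath.
print(zero_crossing_rate(breath), zero_crossing_rate(cough))
```

A real detector would feed features like these (or learned spectrogram representations) into a trained classifier.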

The app collects basic demographic and medical information from users, as well as spoken voice samples and breathing and coughing samples, through the phone's microphone.

It will ask users if they have tested positive for the coronavirus, and collect one coarse-grained location sample.

But it will not track users and will only collect location data once when users are actively using it.

Data will be stored on university servers and used solely for research purposes.

Once the initial analysis of the collected data has been completed, it will be released to other researchers and could help shed light on disease progression or the further relationship of the respiratory complication with medical history, for example.

"Having spoken to doctors, one of the most common things they have noticed about patients with the virus is the way they catch their breath when they're speaking, as well as a dry cough, and the intervals of their breathing patterns," said Prof Mascolo.

There are very few large datasets of respiratory sounds, so to make better algorithms that could be used for early detection, we need as many samples from as many participants as we can get.

Even if we don't get many positive cases of coronavirus, we could find links with other health conditions.

The study has been approved by the ethics committee of the department of computer science and technology, and is partly funded by the European Research Council through Project EAR.

Professor Pietro Cicuta, from Cambridge's Cavendish Laboratory and a member of the team behind the app's development, said: "I am amazed at the speed with which we managed to connect across the University to conceive this project, and how Cecilia's team of developers came together to respond to the urgency of the situation."

The app is available as a web app, and versions for Android and iOS will be available soon.

Go here to read the rest:
University of Cambridge researchers develop machine learning app to collect the sounds of Covid-19 - Cambridge Independent

Threat detection and the evolution of AI-powered security solutions – Help Net Security

Ashvin Kamaraju is a true industry leader. As CTO and VP of Engineering, he drives the technology strategy for Thales Cloud Protection & Licensing, leading a team of researchers and technologists that develops the strategic vision for data protection products and services. In this interview, he discusses automation, artificial intelligence, machine learning and the challenges related to detecting evolving threats.

Discovering an unknown cyber-threat is like trying to find a needle in a haystack. With an enlarged target surface area and a growing number of active hackers, automation, and specifically machine learning, can help address this problem by providing CISOs with the insights they need.

Consequently, it gives CISOs an opportunity to deploy their human analysts more effectively against potential cyber-attacks and data breaches. However, just because an organization has an automation/AI system in place doesn't mean it's secure. Countering cyber-threats is a constant game of cat and mouse, and hackers always want to get the maximum reward from the minimum effort, tweaking known attack methods as soon as they are detected by the AI. CTOs therefore need to make sure that the AI system is routinely exercised and fed new data, and that the algorithms are trained to understand the new data.

The first thing to note is that AI should not be confused with machine learning. What most people associate with AI is actually machine learning algorithms with no human-level intelligence. AI is based on heuristics, whereas machine learning requires a lot of data and algorithms that must be trained to learn the data and provide insights that will help to make decisions.

While the insights provided by AI/machine learning algorithms are very valuable, they depend on the data used. If the data has anomalies or is not representative of the entire scope of the problem domain, there will be bias in the insights. These insights must then be reviewed by an expert team that adds technical and contextual awareness to the data. AI is here to stay as data sets become more and more complex, but it will only be effective when combined with human intelligence.

AI is beneficial to organizations if it can be used effectively, in addition to human intelligence, not in lieu of it. Given the rapid growth in the amount of data out there, and the growing number of threats businesses now face, AI and machine learning will play an increasingly important role for those that embrace them.

However, it requires constant investment, not necessarily from a cost perspective but from a time perspective, as it needs to be kept up to date with fresh data to adapt to the changing threat landscape. Organizations need to decide whether they have the capabilities to use AI in the right way, or it can soon become an expensive mistake.

Cyber-attacks are getting harder to detect as technology evolves to align more closely with how business operates, creating new issues. The adoption of mobile phones, tablets, and IoT devices as part of digital transformation strategies is expanding the threat landscape by opening companies up to connections with more people outside their organization.

As the attack surface expands and thousands more hackers get in on the action, IT experts are being forced to protect near-infinite amounts of data and multiple entry points where hackers can get in. Where hacking once took dedication and expertise, with zero-day attacks targeting mostly unknown vulnerabilities, today anyone can launch a DDoS attack with hacking toolkits and thousands of tutorials freely available online.

So AI can play a key part in helping organizations defend themselves going forward. With a new, evolved role in cybersecurity, experts and researchers can leverage AI to identify and counteract sophisticated cyber-attacks with minimal human intervention in the first instance. However, AI will always need human intelligence to provide the context of the data it is evaluating and has flagged as potentially malicious.

Any new CISO walking into a large enterprise could be forgiven for feeling daunted by the responsibility of protecting that company's assets. Several questions spring to mind, from where to start to what to protect. Here are six simple steps to get them started:

1. Know the where and the what of your data. Prior to implementing any long-term security strategy, CISOs must first conduct a data sweep. Auditing all data within the perimeter helps identify not only what the company has collected, but where it holds its most sensitive data. It's impossible to protect data if you don't know where it is.

2. Securing sensitive data is the key. Technology such as encryption will provide a key layer of defense for the data, rendering it useless even if hackers access it. Whether data is stored on a company's own servers, in a public cloud, or in a hybrid environment, security-minded tools like encryption must be implemented.

3. Protect the data encryption keys. Encrypting data creates an encryption key, a unique tool used to unlock the data, making it accessible only to those who hold the key. Safe storage of these keys is crucial and needs to be done offsite to ensure they aren't located in the same place as the data, which would put both at risk.
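Steps 2 and 3 above describe a single pattern: encrypt the data, then store the key elsewhere. The sketch below illustrates that pattern with a toy keystream cipher. This is NOT production cryptography (real systems should use a vetted library such as `cryptography` or an HSM); it only shows that the ciphertext is useless without the separately held key.

```python
import hashlib
import secrets

# Toy symmetric encryption for illustration only -- not secure for
# real use. The point: data at rest is opaque unless you hold the key.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a keystream from the key with SHA-256 in counter mode and
    # XOR it with the data; applying the same function twice decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = secrets.token_bytes(32)       # stored offsite, away from the data
ciphertext = keystream_xor(key, b"customer record #4711")

assert ciphertext != b"customer record #4711"           # useless as-is
assert keystream_xor(key, ciphertext) == b"customer record #4711"
```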

4. Forget single-factor authentication. The next step is to employ strong multi-factor authentication, ensuring authorized parties can access only the data they need. Two-factor authentication requires an extra layer of information beyond the user's password, such as a specific code the user receives on their smartphone. Since passwords can be hacked easily, two-factor authentication is necessary for a successful security strategy. Multi-factor authentication takes this a step further by requiring additional context such as a device ID, location or IP address.
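The smartphone codes mentioned above are typically time-based one-time passwords. The sketch below implements the standard HOTP/TOTP construction (RFC 4226 / RFC 6238) with the Python standard library; the shared secret shown is a placeholder.

```python
import hashlib
import hmac
import struct
import time

# Standard HOTP/TOTP one-time codes (RFC 4226 / RFC 6238).
def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    # Time-based variant: the counter is the current 30-second window.
    return hotp(secret, int(time.time()) // period)

# Server and phone share `secret` at enrollment; both then derive the
# same short-lived code independently.
secret = b"shared-enrollment-secret"              # placeholder value
print(totp(secret))  # a 6-digit code that changes every 30 seconds
```

Because the code depends on both the secret and the current time window, a stolen password alone is not enough to log in.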

5. Keep software up to date. Vendors are constantly patching their software and hardware to prevent cyber criminals from exploiting bugs and other vulnerabilities as they emerge. Many companies have relied on software that isn't regularly patched, or simply haven't applied new patches soon enough. Companies must install the most recent patches or risk becoming victims of hackers.

6. Evaluate and go again. After implementing the above, the process must be repeated for all new data that comes into the system. GDPR-led compliance is a continual process and applies as much to future data as to what is just entering the system and what is already there. Making a database unattractive to hackers is central to a good cybersecurity strategy. Done correctly, these processes will make data readable only to those allowed to access it.

Follow this link:
Threat detection and the evolution of AI-powered security solutions - Help Net Security

Two startups find ways to bring AI to the edge – Stacey on IoT

Steve Teig, the CEO of the newly created Perceive Corp. Image courtesy of Perceive.

The market for specialty silicon that enables companies to run artificial intelligence models on battery-sipping and relatively constrained devices is flush with funds and ideas. Two new startups have entered the arena, each proposing a different way to break down the computing-intensive tasks of recognizing wake words, identifying people, and other jobs that are built on neural networks.

Perceive, which launched this week, and Kneron (pronounced "neuron"), which launched in March, are relying on neural networks at the edge to reduce bandwidth, speed up results, and protect privacy. They join a dozen or more startups all trying to bring specialty chips to the edge to make the IoT more efficient and private.

Perceive was spun out of Xperi, a semiconductor company that has built hundreds of AI models to help identify people, objects, wake words, and other popular use cases for edge AI. Two-year-old Perceive has built a 7mm x 7mm chip designed to run neural networks at the edge, but it does so by changing the way the training is done so it can build smaller models that are still accurate.

In general, when a company wants to run neural networks on an edge chip, it must make that model smaller, which can reduce accuracy. Designers also build special sections of the chip that can handle the specific type of math required to run the convolutions used in running a neural network. But Perceive threw all of that out the window, instead turning to information theory to build efficient models.

Information theory is all about finding the signal in a bunch of noise. When applied to machine learning, it is used to ascertain which features are relevant in figuring out whether an image is a dog or a cat, or whether an individual person is me or my husband. Traditional neural networks are trained by giving a computer tens or hundreds of thousands of images and letting it ascertain which elements are most important when it comes to determining what an object or person is.
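The information-theoretic quantity behind "which features are relevant" is mutual information. The sketch below computes it for a toy dataset; this is illustrative only, since Perceive's actual training procedure is proprietary, and the feature names are invented.

```python
import math
from collections import Counter

# Mutual information between a feature and a label, from co-occurrence
# counts (toy illustration; not Perceive's proprietary method).
def mutual_information(pairs):
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# "has_whiskers" perfectly predicts the label; "likes_naps" is noise.
labels = ["cat", "cat", "dog", "dog"]
whiskers = [1, 1, 0, 0]
naps = [1, 0, 1, 0]

print(mutual_information(list(zip(whiskers, labels))))  # 1.0 bit
print(mutual_information(list(zip(naps, labels))))      # 0.0 bits
```

A training procedure built on this idea keeps the high-information features and discards the noise, which is one route to smaller models.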

Perceive's methodology requires less training data, and CEO Steve Teig says that its end models are smaller, which is what allows them to run efficiently on a lower-power chip. The result of the Perceive training is expressed in PyTorch, a common machine learning framework. The company currently offers a chip as well as a service that will help generate custom models. Perceive has also developed hundreds of its own models based on the work done by Xperi.

According to Teig, Perceive has already signed two substantial customers, neither of which can be named, and is in talks with connected device makers ranging from video doorbell manufacturers to toy companies.

The other chip startup tackling machine learning is Kneron, formed in 2015. It has built a chip that can reconfigure an element on it specifically for the type of machine learning model it needs to run. When an edge chip has to run a machine learning model, it needs to do a lot of math, which has led chipmakers to put special coprocessors on the chip that can handle a type of math known as matrix multiplication. (The Perceive method of training models doesn't require matrix multiplication.)
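The reason matrix multiplication dominates edge AI silicon is that a neural-network layer is essentially one matmul of weights against inputs. The toy sketch below shows the operation with plain Python lists (toy sizes; real layers multiply matrices with thousands of rows and columns, which is why dedicated coprocessors pay off):

```python
# A single neural-network layer is, at its core, one matrix multiply:
# output = weights x inputs (plus bias and nonlinearity, omitted here).
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

weights = [[1, -2], [3, 1]]   # 2x2 toy layer weights
inputs = [[1], [2]]           # 2x1 input column vector
print(matmul(weights, inputs))  # [[-3], [5]]
```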

This flexibility, and the promise it has to enable devices to run local AI, has led Kneron to raise $73 million. Eventually, Kneron hopes to be able to tackle learning at the edge, with CEO Albert Liu promising that the company might be able to offer simplified learning later this year. (Today, all edge AI chips can only match inputs against an existing AI model, as opposed to taking input from the environment and creating a new model.)

Both Perceive and Kneron are riding high on the promise of delivering more intelligence to products that don't need to stay connected to the internet. As privacy, power management, and local control continue to rise in importance, the two companies are joining a host of startups trying to make their hardware the next big thing in silicon.


Original post:
Two startups find ways to bring AI to the edge - Stacey on IoT

Machine Learning in Pharmaceutical Market Business Opportunities and Global Industry Analysis by 2026- Key Players are McKinsey, Boston, IBM Watson -…

The research report on the Machine Learning in Pharmaceutical Market is a deep analysis of the market. This latest report covers the current COVID-19 impact on the market. The coronavirus (COVID-19) pandemic has affected every aspect of life globally and has brought several changes in market conditions. The rapidly changing market scenario and initial and future assessments of the impact are covered in the report. Experts have studied the historical data and compared it with the changing market situation. The report covers all the necessary information required by new entrants as well as existing players to gain deeper insight.

Request a Sample Copy of these Reports@ https://www.qyreports.com/request-sample/?report-id=223440

Furthermore, the statistical survey in the report focuses on product specifications, costs, production capacities, marketing channels, and market players. Upstream raw materials, downstream demand analysis, and a list of end-user industries have been studied systematically, along with the suppliers in this market. The product flow and distribution channel have also been presented in this research report.

Key Players:

McKinsey, Boston, IBM Watson, ALTEN Calsoft Labs, Axtria Ingenious Insights, GRAIL, Inc., Aktana, Owkin, Amgen, BASF, Bayer, Lilly, Novartis, Pfizer, Sunovion, and WuXi.

By Regions:

North America (the US, Canada, and Mexico)
Europe (the UK, Germany, France, and the rest of Europe)
Asia Pacific (China, India, and the rest of Asia Pacific)
Latin America (Brazil and the rest of Latin America)
Middle East & Africa (Saudi Arabia, the UAE, South Africa, and the rest of Middle East & Africa)

Ask for Discount on this Premium Report@ https://www.qyreports.com/ask-for-discount/?report-id=223440

The Machine Learning in Pharmaceutical Market Report Consists of the Following Points:

Enquiry Before Buying@ https://www.qyreports.com/enquiry-before-buying/?report-id=223440

In conclusion, the Machine Learning in Pharmaceutical Market report is a reliable source of research data that is projected to help accelerate your business. The report provides information such as economic scenarios, benefits, limits, trends, market growth rates, and figures. SWOT analysis is also incorporated in the report, along with feasibility analysis and investment return analysis.

About QYReports:

We at QYReports, a leading market research report publisher, cater to more than 4,000 prestigious clients worldwide, meeting their customized research requirements in terms of market data size and its application. Our list of customers includes renowned Chinese companies, multinational companies, SMEs and private equity firms. Our business study covers more than 30 industries, offering accurate, in-depth and reliable market insight, industry analysis and structure. QYReports specializes in the forecasts needed for investing in and executing new projects, globally and in Chinese markets.

Contact Us:

Name: Jones John

Contact number: +1-510-560-6005

204, Professional Center,

7950 NW 53rd Street, Miami, Florida 33166

sales@qyreports.com

http://www.qyreports.com

Follow this link:
Machine Learning in Pharmaceutical Market Business Opportunities and Global Industry Analysis by 2026- Key Players are McKinsey, Boston, IBM Watson -...