The story behind that little padlock in your browser – Horizon magazine

Behind that little padlock is cryptographic code that guarantees the security of data passing between you and, for example, the website you are looking at.

In fact, TLS guarantees security on three fronts: authentication, encryption and integrity. Authentication, so that your data goes where you think it is going; encryption, so that it does not go anywhere else; and integrity, so that it is not tampered with en route.
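Those three guarantees can be made concrete with a short sketch using only Python's standard-library ssl module (no network connection is made here; this is an illustrative client-side configuration, not the browser's actual code):

```python
import ssl

# create_default_context() switches on the checks behind the padlock:
# certificate validation (authentication), strong cipher suites (encryption),
# and authenticated cipher modes (integrity).
ctx = ssl.create_default_context()

# Optionally refuse anything older than TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Authentication: the server must present a valid certificate chain...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
# ...and that certificate must match the hostname we think we are talking to.
print(ctx.check_hostname)                    # → True
```

A context like this would then be passed to a socket or HTTPS client, which is where the encryption and integrity protection actually happen on the wire.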

"It's the most popular security protocol on the internet, securing essentially every e-commerce transaction," Eric Rescorla, chief technology officer at US technology company Mozilla, told Horizon over email.

In the two decades leading up to 2018, there were five overhauls of TLS to keep pace with the sophistication of online attacks. After that, many experts believed that the latest incarnation, TLS 1.2, was safe enough for the foreseeable future, until researchers such as Dr Karthikeyan Bhargavan and his colleagues at the French National Institute for Research in Digital Science and Technology (INRIA) in Paris came along.

Scaffold

As part of a project called CRYSP, the researchers had been working on ways to improve the security of software applications. Usually, software developers rely on TLS like a builder relies on a scaffold: in other words, they take its safety for granted.

To improve security at the software level, however, Dr Bhargavan and colleagues had to thoroughly check that the underlying assumptions about TLS 1.2 (that it had no serious flaws) were justified.

"At some point, we realised they weren't," he said.

After discovering some shaky lines of code, the researchers worked with Microsoft Research and took on the role of hackers, performing simulated attacks on the protocol to test the extent of its vulnerability. The attacks revealed that it was possible to be a "man in the middle" between an internet user and a service provider, such as Google, and thereby steal that user's data.

"It would have to be a fairly complex sequence of actions," explained Dr Bhargavan. "Typically, the person in the middle would have to send weird messages to each actor to lure them into a buggy part of the code."

"If, as the person in the middle, I was successful, I could potentially steal someone's payment details," he continued. "Or I could pretend to be Apple or Google, and download (insert) malware via a software update to get access to people's computers."

Serious threat

Such a hacker would need great expertise and computational power (that of a government agency, for example), as well as access to some of the physical infrastructure close to the key actors. Nevertheless, the Internet Engineering Task Force (IETF), an international organisation promoting internet standards, judged the threat to be sufficiently serious to warrant a new version of the cryptographic protocol.

Dr Bhargavan points out that he was far from the only computer scientist to prompt the revision. There were four or five other research groups unearthing problems with the current protocol, he says, pushing one another along in a healthy rivalry.

Still, he says that his group discovered some of the most surprising flaws in TLS1.2, which he believes may have been the final nails in the coffin for the protocol.

His group was also part of a broad collaboration within the internet community, overseen by an IETF working group, to construct the more secure, man-in-the-middle-proof successor that is TLS 1.3, using modern algorithms and techniques. "Dr Bhargavan was a key player in that effort," said Rescorla, who oversaw TLS at the IETF at the time of the work.

TLS 1.3 was officially launched in August 2018. Since then it has been implemented by major internet browsers such as Mozilla Firefox and Google Chrome.

"So long as you click that padlock you have some confidence about safety."

Dr Karthikeyan Bhargavan, INRIA, France

So how much safer are internet users as a result?

Human error

It is true that for most online security breaches, TLS is not to blame. Usually, personal data gets into the wrong hands because of bugs in software (what Dr Bhargavan's group was working on to begin with) or human error.

But Dr Bhargavan believes there is reassurance in knowing that the underlying protocol is secure. "It's not everything, but so long as you click that padlock you have some confidence about safety. It's the most basic thing," he said.

Besides, internet users are not only worried about hackers. Since 2013, and the leaks of Edward Snowden, a former employee of a US National Security Agency contractor, many people have been concerned about the amount of personal data amassed by state intelligence and large enterprises.

Designed with the Snowden revelations in mind, TLS 1.3 closes the door to some types of this pervasive network-based monitoring through its encryption of both user data and metadata. It also prevents retrospective decryption, one of the previous version's weaknesses.
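The "retrospective decryption" point is about forward secrecy: TLS 1.3 only permits ephemeral key exchanges, so each session's keys are derived from throwaway secrets that no recorded traffic can later reveal. The idea can be sketched with a toy finite-field Diffie-Hellman exchange in pure Python (deliberately tiny and insecure parameters; real TLS uses vetted groups or elliptic curves):

```python
import secrets

# Toy Diffie-Hellman over a small prime field, for illustration only.
# Each side draws a fresh ephemeral secret per session; once the session
# ends, the secrets are discarded, so a recording of the handshake cannot
# be decrypted later even if a long-term key leaks.
P = 2**64 - 59  # largest 64-bit prime; far too small for real use
G = 5

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 1   # fresh secret every session
    return priv, pow(G, priv, P)          # (private, public) pair

a_priv, a_pub = ephemeral_keypair()       # client side
b_priv, b_pub = ephemeral_keypair()       # server side

# Both sides derive the same shared session secret from the other's
# public value and their own private value.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
print(shared_a == shared_b)  # → True
```

In TLS 1.2 it was still possible to negotiate non-ephemeral RSA key transport, which is exactly the mode that allowed retrospective decryption; TLS 1.3 removed it.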

"There was a long discussion in the IETF working group about whether preventing surveillance was one of the goals of TLS," says Dr Bhargavan. "And the answer was ultimately in the positive."

Now Dr Bhargavan is returning to the issue of software security. He believes the majority of remaining vulnerabilities can be eliminated at the design stage.

Verified

To do this, he and his colleagues are constructing a library, HACL*, of fully verified cryptographic code, which other developers can draw on when building new software. In this project, known as CIRCUS, they are also creating an easy-to-follow reference paradigm that tells developers how to put software together without introducing security glitches.

The resultant high-assurance software has already been taken up by developers at Mozilla and Microsoft, among others. "We want everyone to be following these techniques," Dr Bhargavan said.

Ultimately, his goal is not to secure everything online, but to find the safest spots within our highly complex computer systems. "I don't think we will ever get to a point where everything is verified," he said, "but we can find the most secure basket in which we can put our keys and passwords and financial data."

The research in this article was funded by the European Research Council. Dr Bhargavan is a recipient of a 2019 Horizon Impact Award for societal impact across Europe and beyond.



Classic Review: The Prescient Spy Antics of Sneakers – Fordham Ram

Erica Weidner, Copy Chief | March 25, 2020

I watched Sneakers with my parents when I was about 11 or 12 years old. I absolutely loved it: the action sequences, the humorous tone, the clever tricks. The movie, which came out in 1992, immediately shot to the top of my favorites list. Although Sneakers has been shaken from the number one spot, it's remained on my top ten list ever since.

This spring break, I decided to rewatch Sneakers to answer the question: Does my childhood favorite movie stand the test of time?

The answer: Yes, it absolutely does.

Sneakers opens in a flashback to 1969. Two college students, Martin Brice (Robert Redford) and Cosmo (Ben Kingsley), are using their hacking skills to play pranks. Despite their humorous attitude, these pranks (such as transferring every cent in Richard Nixon's bank account to the National Association to Legalize Marijuana) are extremely illegal. The police catch Cosmo, but Martin avoids the cops by sheer dumb luck.

Fast-forward to the present day (1992, that is). Martin Brice has rebranded himself as Martin Bishop. He's built a new life for himself as a security specialist: someone who breaks into banks so that those banks can improve their defenses. Martin leads an all-star team of hackers and "sneakers."

Crease (Sidney Poitier) is a former CIA agent who's never quite lost his sense of paranoia or his sense of national duty. Crease's foil is Mother (Dan Aykroyd), who will ramble about conspiracy theories to anyone who will listen. There's also Carl (River Phoenix), a young hacker more interested in girls than in money, and Liz (Mary McDonnell), an ex-girlfriend of Martin's who finds herself back in the madness. Rounding out the team is Whistler (David Strathairn). Whistler is blind, but his keen sense of hearing makes him invaluable to any mission.

However, no matter how well this team does, Martin's past still comes back to haunt him. His college crimes become blackmail material, and the team gets embroiled in a conspiracy of epic proportions, involving the Russians, the NSA and other spooks.

The movie came out almost 30 years ago, and I still can't bring myself to spoil it for you. Like every spy movie, Sneakers has its fair share of twists and crosses, and like every heist movie, it has its own uniquely convoluted plans. It'd be wrong to ruin those moments, even if we're well after the two-week spoiler grace period.

Calling Sneakers a simple spy movie or a heist movie seems like an injustice. It's wildly funny, due both to its writing and its cast. The wisecracks that the characters bounce off one another are not only wittily worded but beautifully executed. Some of the funniest moments in this movie focus on the tension between Mother, who believes every conspiracy theory, and Crease, who believes none of them. Aykroyd's Mother is lovably petulant in sharing his conspiracies, while Poitier's Crease is amusingly annoyed by his antics; together, they are on fire. The comedy also comes through in subtler touches, such as Whistler reading a Playboy magazine written in Braille.

Sneakers also has its moments of clarity, where a fun, silly spy movie hits on a deep vein of truth. In a way, the film seems to be ahead of its time. For instance, throughout the movie, there's a direct implication that the NSA is spying on American citizens. Everyone on the sneakers team believes this, and even the NSA agents don't deny it. Edward Snowden confirmed this implication just over 20 years later.

This isn't the only time that the movie looks toward the future. Near the end of Sneakers, the film's ultimate antagonist tries to explain how technology has transformed the way we think of the world. As he puts it, "It's not about who's got the most bullets. It's about who controls the information." These words rang true when the movie premiered, but today they seem more relevant than ever. Our world is run by data.

Sure, there are times that Sneakers shows its age. Characters laugh at the concept of meeting a date online, which they call "compu-dating," and in 1992, that concept was laughable. (Today, it's one of the most popular ways that couples meet.) The campy soundtrack, characterized by melodramatic piano music, also dates this movie in the '90s.

However, these are small complaints that shouldn't negate how well Sneakers stands the test of time. The movie hasn't lost its excitement or its sense of humor; it's just as fun to watch now as it was in the past. Amazingly, it also hasn't lost its relevance. For better or for worse, hackers and spies are just as important in 2020 as they were in 1992. The technology they use may have changed, but the reasons have not.

Sneakers holds up, and it's cemented its place on my favorites list.


Rakuten Joins the Open Invention Network Community – GlobeNewswire

Durham, N.C., March 25, 2020 (GLOBE NEWSWIRE) -- Open Invention Network (OIN), the largest patent non-aggression community in history, announced today that Rakuten, Inc. has joined as a community member. Rakuten is a global leader in internet services, offering over 70 services in e-commerce, fintech, digital content and communications. In addition, Rakuten is pursuing an ambitious new network build-out to become the world's first end-to-end fully virtualized, cloud-native mobile network, using open source mobile carrier architecture to drive its $600 billion investment. By joining OIN, Rakuten is demonstrating its commitment to open source software (OSS) as a foundation for its platforms.

"The online commerce, mobile communications, and fintech services industries are experiencing rapid growth. Global leaders that recognize these market opportunities, and the benefits of shared innovation inherent in open source, are building robust, feature-rich services that help make them more desirable to consumers," said Keith Bergelt, CEO of Open Invention Network. "We are pleased that Rakuten has joined our community and committed to patent non-aggression in Linux and adjacent open source technologies."

"At Rakuten, our businesses continue to evolve as we address new market opportunities. Because of this, we are a user and strong advocate of open source software," said Tareq Amin, CAO, Group Executive Vice President, Rakuten, Inc. "We are building the first 100% fully virtualized mobile network, enabling us to scale rapidly and offer the best quality-of-service (QoS) available. By joining Open Invention Network, we are demonstrating our commitment to open source software, and supporting it with a pledge of patent non-aggression."

OIN's community practices patent non-aggression in core Linux and adjacent open source technologies by cross-licensing Linux System patents to one another on a royalty-free basis. Patents owned by Open Invention Network are similarly licensed royalty-free to any organization that agrees not to assert its patents against the Linux System. The OIN license can be signed online at http://www.j-oin.net/.

About Rakuten
Rakuten, Inc. (TSE: 4755) is a global leader in Internet services that empower individuals, communities, businesses and society. Founded in Tokyo in 1997 as an online marketplace, Rakuten has expanded to offer services in e-commerce, fintech, digital content and communications to about 1.4 billion members around the world. The Rakuten Group has over 20,000 employees, and operations in 30 countries and regions. For more information visit https://global.rakuten.com/corp/.

About Open Invention Network
Open Invention Network (OIN) is the largest patent non-aggression community in history and supports freedom of action in Linux as a key element of open source software (OSS). Patent non-aggression in core technologies is a cultural norm within OSS, so that the litmus test for authentic behavior in the OSS community includes OIN membership. Funded by Google, IBM, NEC, Philips, Sony, SUSE and Toyota, OIN has more than 3,200 community members and owns more than 1,300 global patents and applications. The OIN patent license and member cross-licenses are available royalty-free to any party that joins the OIN community.

For more information, visit http://www.openinventionnetwork.com.

Media-Only Contact: Ed Schauweker, AVID Public Relations for Open Invention Network, ed@avidpr.com, +1 (703) 963-5238


Will COVID-19 Create a Big Moment for AI and Machine Learning? – Dice Insights

COVID-19 will change how the majority of us live and work, at least in the short term. It's also creating a challenge for tech companies such as Facebook, Twitter and Google that ordinarily rely on lots and lots of human labor to moderate content. Are A.I. and machine learning advanced enough to help these firms handle the disruption?

First, it's worth noting that, although Facebook has instituted a sweeping work-from-home policy in order to protect its workers (along with Google and a rising number of other firms), it initially required its contractors who moderate content to continue to come into the office. That situation only changed after protests, according to The Intercept.

Now, Facebook is paying those contractors while they sit at home, since the nature of their work (scanning people's posts for content that violates Facebook's terms of service) is extremely privacy-sensitive. Here's Facebook's statement:

"For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons. We have taken precautions to protect our workers by cutting down the number of people in any given office, implementing recommended work from home globally, physically spreading people out at any given office and doing additional cleaning. Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams and will be working with our partners over the course of this week to send all contract workers who perform content review home, until further notice. We'll ensure that all workers are paid during this time."

Facebook, Twitter, Reddit, and other companies are in the same proverbial boat: There's an increasing need to police their respective platforms, if only to eliminate fake news about COVID-19, but the workers who handle such tasks can't necessarily do so from home, especially on their personal laptops. The potential solution? Artificial intelligence (A.I.) and machine-learning algorithms meant to scan questionable content and make a decision about whether to eliminate it.

Here's Google's statement on the matter, via its YouTube Creator Blog:

"Our Community Guidelines enforcement today is based on a combination of people and technology: Machine learning helps detect potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we're taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place."

To be fair, the tech industry has been heading in this direction for some time. Relying on armies of human beings to read through every piece of content on the web is expensive, time-consuming, and prone to error. But A.I. and machine learning are still nascent, despite the hype. Google itself, in the aforementioned blog post, pointed out how its automated systems may flag the wrong videos. Facebook is also receiving criticism that its automated anti-spam system is whacking the wrong posts, including those that offer vital information on the spread of COVID-19.

If the COVID-19 crisis drags on, though, more companies will no doubt turn to automation as a potential solution to disruptions in their workflow and other processes. That will force a steep learning curve; again and again, the rollout of A.I. platforms has demonstrated that, while the potential of the technology is there, implementation is often a rough and expensive process; just look at Google Duplex.


Nonetheless, an aggressive embrace of A.I. will also create more opportunities for those technologists who have mastered A.I. and machine-learning skills of any sort; these folks may find themselves tasked with figuring out how to automate core processes in order to keep businesses running.

Before the virus emerged, Burning Glass (which analyzes millions of job postings from across the U.S.) estimated that jobs that involve A.I. would grow 40.1 percent over the next decade. That percentage could rise even higher if the crisis fundamentally alters how people across the world live and work. (The median salary for these positions is $105,007; for those with a PhD, it drifts up to $112,300.)

If you're trapped at home and have some time to learn a little bit more about A.I., it could be worth your time to explore online learning resources. For instance, there's a Google crash course in machine learning. Hacker Noon also offers an interesting breakdown of machine learning and artificial intelligence. Then there's Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods.


Machine Learning Engineer Interview Questions: What You Need to Know – Dice Insights

Along with artificial intelligence (A.I.), machine learning is regarded as one of the most in-demand areas for tech employment at the moment. Machine learning engineers develop algorithms and models that can adapt and learn from data. As a result, those who thrive in this discipline are generally skilled not only in computer science and programming, but also statistics, data science, deep learning, and problem solving.

According to Burning Glass, which collects and analyzes millions of job postings from across the country, the prospects for machine learning as an employer-desirable skill are quite good, with jobs projected to rise 36.5 percent over the next decade. Moreover, even those with relatively little machine-learning experience can pull down quite a solid median salary:

Dice Insights spoke with Oliver Sulley, director of Edge Tech Headhunters, to figure out how you should prepare, what you'll be asked during an interview, and what you should say to grab the gig.

"You're going to be faced potentially by bosses who don't necessarily know what it is that you're doing, or don't understand ML and have just been [told] they need to get it in the business," Sulley said. "They're being told by the transformation guys that they need to bring it on board."

As he explained, that means one of the key challenges facing machine learning engineers is determining what technology would be most beneficial to the employer, and being able to work as a cohesive team that may have been put together on very short notice.

"What a lot of companies are looking to do is take data they've collected and stored, and try and get them to build some sort of model that helps them predict what they can be doing in the future," Sulley said. "For example, how to make their stock leaner, or predicting trends that could come up over the year that would change their need for services that they offer."

Sulley notes that machine learning engineers are in rarefied air at the moment; it's a high-demand position, and lots of companies are eager to show they've brought machine learning specialists onboard.

"If they're confident in their skills, then a lot of the time they have to make sure the role is right for them," Sulley said. "It's more about the soft skills that are going to be important."

Many machine learning engineers are strong on the technical side, but they often have to interact with teams such as operations; as such, they need to be able to translate technical specifics into layman's terms and express how this data is going to benefit other areas of the company.

"Building those soft skills, and making sure people understand how you will work in a team, is just as important at this moment in time," Sulley added.

There are quite a few different roles for machine learning engineers, and so it's likely that all these questions could come up, but it will depend on the position. "We find questions with more practical experience are more common, and therefore will ask questions related to past work and the individual contributions engineers have made," Sulley said.

For example:


A lot of data engineering and machine learning roles involve working with different tech stacks, so it's hard to nail down a hard-and-fast set of skills, as much depends on the company you're interviewing with. (If you're just starting out with machine learning, here are some resources that could prove useful.)

"For example, if it's a cloud-based role, a machine learning engineer is going to want to have experience with AWS and Azure; and for languages alone, Python and R are the most important, because that's what we see more and more in machine learning engineering," Sulley said. "For deployment, I'd say Docker, but it really depends on the person's background and what they're looking to get into."

Sulley said ideal machine learning candidates possess "a really analytical mind," as well as a passion for thinking about the world in terms of statistics.

"Someone who can connect the dots and has a statistical mind, someone who has a head for numbers and who is interested in that outside of work, rather than someone who just considers it their job and what they do," he said.

As you can see from the following Burning Glass data, quite a few jobs now ask for machine-learning skills; if not essential, they're often a "nice to have" for many employers that are thinking ahead.

Sulley suggests the questions you ask should be all about the technology: it's about understanding what the companies are looking to build, what their vision is (and your potential contribution to it), and looking to see where your career will grow within that company.

"You want to figure out whether you'll have a clear progression forward," he said. "From that, you will understand how much work they're going to do with you. Find out what they're really excited about, and that will help you figure out whether you'll be a valued member of the team. It's a really exciting space, and they should be excited by the opportunities that come with bringing you onboard."


Self-driving truck boss: 'Supervised machine learning doesn't live up to the hype. It isn't C-3PO, it's sophisticated pattern matching' – The Register

Roundup Let's get cracking with some machine-learning news.

Starsky Robotics is no more: Self-driving truck startup Starsky Robotics has shut down after running out of money and failing to raise more funds.

CEO Stefan Seltz-Axmacher bid a touching farewell to his upstart, founded in 2016, in a Medium post this month. He was upfront and honest about why Starsky failed. "Supervised machine learning doesn't live up to the hype," he declared. "It isn't actual artificial intelligence akin to C-3PO, it's a sophisticated pattern-matching tool."

Neural networks only learn to pick up on certain patterns after they are faced with millions of training examples. But driving is unpredictable, and the same route can differ day to day, depending on the weather or traffic conditions. Trying to model every scenario is not only impossible but expensive.

"In fact, the better your model, the harder it is to find robust data sets of novel edge cases. Additionally, the better your model, the more accurate the data you need to improve it," Seltz-Axmacher said.

More time and money is needed to provide increasingly incremental improvements. "Over time, only the most well-funded startups can afford to stay in the game," he said.

"Whenever someone says autonomy is ten years away, that's almost certainly what their thought is. There aren't many startups that can survive ten years without shipping, which means that almost no current autonomous team will ever ship AI decision makers if this is the case," he warned.

If Seltz-Axmacher is right, then we should start seeing smaller autonomous driving startups shutting down in the near future too. Watch this space.

Waymo to pause testing during Bay Area lockdown: Waymo, Google's self-driving car stablemate, announced it was pausing its operations in California to abide by the lockdown orders in place in Bay Area counties, including San Francisco, Santa Clara, San Mateo, Marin, Contra Costa and Alameda. Businesses deemed non-essential were advised to close and residents were told to stay at home, only popping out for things like buying groceries.

It will, however, continue to perform rides for deliveries and trucking services for its riders and partners in Phoenix, Arizona. These drives will be entirely driverless to minimise the chance of spreading COVID-19.

Waymo also launched its Open Dataset Challenge. Developers can take part in a contest that looks for solutions to these problems:

Cash prizes are up for grabs too. The winner can expect to pocket $15,000, second place will get you $5,000, while third is $2,000.

You can find out more details on the rules of the competition and how to enter here. The challenge is open until 31 May.

More free resources to fight COVID-19 with AI: Tech companies are trying to chip in and do what they can to help quell the coronavirus pandemic. Nvidia and Scale AI both offered free resources to help developers using machine learning to further COVID-19 research.

Nvidia is providing a free 90-day license to Parabricks, a software package that speeds up the process of analyzing genome sequences using GPUs. The rush is on to analyze the genetic information of people who have been infected with COVID-19 to find out how the disease spreads and which communities are most at risk. Sequencing genomes requires a lot of number crunching; Parabricks slashes the time needed to complete the task.

"Given the unprecedented spread of the pandemic, getting results in hours versus days could have an extraordinary impact on understanding the virus's evolution and the development of vaccines," it said this week.

Interested customers who have access to Nvidia's GPUs should fill out a form requesting access to Parabricks.

"Nvidia is inviting our family of partners to join us in matching this urgent effort to assist the research community. We're in discussions with cloud service providers and supercomputing centers to provide compute resources and access to Parabricks on their platforms."

Next up is Scale AI, the San Francisco-based startup focused on annotating data for machine learning models. It is offering its labeling services for free to any researcher working on a potential vaccine, or on tracking, containing, or diagnosing COVID-19.

"Given the scale of the pandemic, researchers should have every tool at their disposal as they try to track and counter this virus," it said in a statement.

"Researchers have already shown how new machine learning techniques can help shed new light on this virus. But as with all new diseases, this work is much harder when there is so little existing data to go on."

"In those situations, the role of well-annotated data to train models or diagnostic tools is even more critical." If you have a lot of data to analyse and think Scale AI could help, then apply for their help here.

PyTorch users, AWS has finally integrated the framework: Amazon has finally integrated PyTorch support into Amazon Elastic Inference, its service that allows users to select the right amount of GPU resources on top of CPUs rented out in its cloud services Amazon SageMaker and Amazon EC2, in order to run inference operations on machine learning models.

Amazon Elastic Inference works like this: instead of paying for expensive GPUs, users select the right amount of GPU-powered inference acceleration on top of cheaper CPUs to zip through the inference process.

In order to use the service, however, users will have to convert their PyTorch code into TorchScript, a serializable model format. "You can run your models in any production environment by converting PyTorch models into TorchScript," Amazon said this week. That code is then processed by an API in order to use Amazon Elastic Inference.

The instructions to convert PyTorch models into the right format for the service have been described here.
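As an illustration of that conversion step, here is a minimal PyTorch-to-TorchScript sketch using tracing (the model, layer sizes and file name are hypothetical, and Elastic Inference itself is not involved; this only shows the format conversion Amazon describes):

```python
import torch
import torch.nn as nn

# A tiny eager-mode model standing in for whatever the user has trained.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example_input = torch.randn(1, 4)

# torch.jit.trace records the operations executed on the example input
# and produces a TorchScript module that no longer needs the Python class.
scripted = torch.jit.trace(model, example_input)
scripted.save("tinynet_traced.pt")

# The saved artifact can be reloaded in a serving environment.
reloaded = torch.jit.load("tinynet_traced.pt")
print(torch.allclose(scripted(example_input), reloaded(example_input)))
```

Tracing is the simplest route when the model's control flow does not depend on the input; models with data-dependent branches would use `torch.jit.script` instead.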



Data to the Rescue! Predicting and Preventing Accidents at Sea – JAXenter

Watch Dr. Yonit Hoffman's Machine Learning Conference session

Accidents at sea happen all the time. Their costs in terms of lives, money and environmental destruction are huge. Wouldn't it be great if they could be predicted and perhaps prevented? Dr. Yonit Hoffman's Machine Learning Conference session discusses new ways of preventing sea accidents with the power of data science.

Does machine learning hold the key to preventing accidents at sea?

With more than 350 years of history, marine insurance was the first profession to try to predict accidents and estimate future risk from data. Yet the old ways no longer work: new waves of data and algorithms can offer significant improvements and are going to revolutionise the industry.

In her Machine Learning Conference session, Dr. Yonit Hoffman will show that it is now possible to predict accidents, and how data on a ship's behaviour, such as location, speed, maps and weather, can help. She will show how fragments of information on ship movements can be gathered and taken all the way to machine learning models. In this session, she discusses the challenges, including introducing machine learning to an industry that still uses paper and quills (yes, really!) and explaining the models using SHAP.
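SHAP explains a model's prediction by attributing it to features using Shapley values from cooperative game theory. As a toy illustration of the underlying idea (the `risk_model` and its three ship features are entirely made up, not Windward's actual pipeline), the exact Shapley value of each feature can be computed by averaging its marginal contribution over all feature subsets:

```python
from itertools import combinations
from math import factorial

# Toy "risk score" from three hypothetical ship features (illustrative only).
def risk_model(speed_anomaly, bad_weather, near_shore):
    score = 0.0
    if speed_anomaly:
        score += 3.0
    if bad_weather:
        score += 2.0
    if speed_anomaly and near_shore:
        score += 1.5  # interaction term: risky speed matters more near shore
    return score

FEATURES = ["speed_anomaly", "bad_weather", "near_shore"]

def model_on_subset(present):
    """Evaluate the model with absent features set to their baseline (off)."""
    return risk_model(**{f: (f in present) for f in FEATURES})

def shapley_values():
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model_on_subset(set(subset) | {f})
                                   - model_on_subset(set(subset)))
        phi[f] = total
    return phi

phi = shapley_values()
# Attributions sum to model(all features on) - model(baseline): 6.5 - 0.
assert abs(sum(phi.values()) - 6.5) < 1e-9
```

The real `shap` library approximates these values efficiently for large models; the brute-force loop above is exponential in the number of features and only practical for tiny examples like this.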

Dr. Yonit Hoffman is a Senior Data Scientist at Windward, a world leader in maritime risk analytics. Before investigating supertanker accidents, she researched human cells and cancer at the Weizmann Institute, where she received her PhD and MSc. in Bioinformatics. Yonit also holds a BSc. in computer science and biology from Tel Aviv University.

Visit link:
Data to the Rescue! Predicting and Preventing Accidents at Sea - JAXenter

The Well-matched Combo of Quantum Computing and Machine Learning – Analytics Insight

The pace of improvement in quantum computing mirrors the fast advances made in AI and machine learning. It is natural to ask whether quantum technologies could boost learning algorithms: this field of inquiry is called quantum-enhanced machine learning.

Quantum computers are devices that operate according to the principles of quantum physics. The computers we currently use are built from transistors, and their information is stored as binary 0s and 1s. Quantum computers are built from quantum bits, qubits for short, which can be in multiple states simultaneously. Their principal advantage is that they can perform extremely complex tasks at enormous speed, and so tackle problems that are not currently feasible.
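The "multiple states simultaneously" point can be made concrete with a few lines of linear algebra: a qubit is a unit vector in a two-dimensional complex space, and a Hadamard gate puts it into an equal superposition. This is a classical simulation sketch, not quantum hardware:

```python
import numpy as np

# A qubit's state is a unit vector in C^2; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0               # (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2     # Born rule: probability of measuring 0 or 1
print(probs)                 # approximately [0.5, 0.5]

# n qubits live in a 2**n-dimensional space (tensor product), which is why
# quantum state spaces grow so much faster than classical bit strings.
two_qubits = np.kron(psi, psi)
print(np.abs(two_qubits) ** 2)   # each of the 4 outcomes near probability 0.25
```

Simulating n qubits this way costs 2**n complex amplitudes, which is exactly the blow-up that real quantum hardware sidesteps.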

The most significant advantage of quantum computers is the speed at which they can solve complex problems. While they are lightning fast at what they do, they do not provide the ability to solve problems from undecidable or NP-hard problem classes. There is a set of problems that quantum computing will be able to solve, but it is not applicable to all computing problems.

Typically, the problem set that quantum computers are good at solving involves number or data crunching with a huge number of inputs, such as complex optimisation problems and communication-systems analysis problems: calculations that would normally take supercomputers days, years, even billions of years to brute-force.

The application routinely cited as one that quantum computers will be able to crack quickly is strong RSA encryption. A recent report by the Microsoft Quantum Team suggests this could well be the case, calculating that it would be feasible with a quantum computer of around 2,330 qubits.

It makes sense that optimisation applications are leading the pack, since they are currently solved largely through brute force and raw computing power. If quantum computers can rapidly survey all the potential solutions, an optimal solution becomes apparent much more quickly. Optimisation also stands out because it is considerably more intuitive and easier to grasp.

The community of people who can make use of optimisation and robust optimisation is a whole lot bigger. In the machine learning community, the overlap between the technology and the requirements is technical and relevant mainly to specialists; what is more, there are far fewer statisticians in the world than there are developers.

Specifically, the complexity of incorporating quantum computing into the machine learning workflow presents an impediment. For machine learning professionals and analysts, it is fairly easy to work out how to program the system; fitting that into a machine learning workflow is more challenging, since machine learning programs are becoming very complex. However, teams have published a good deal of research on how to incorporate it sensibly into a training workflow.

For now, ML experts need someone else to handle the quantum computing part: machine learning practitioners are looking for others to do the legwork of building the systems out and demonstrating that they fit.

In any case, the intersection of these two fields goes much further than that, and it is not only AI applications that can benefit. There is a meeting point where quantum computers run machine learning algorithms and traditional machine learning techniques are used to evaluate quantum computers. This area of research is developing at such speed that it has produced a whole new field called Quantum Machine Learning.

This interdisciplinary field is still very new, however. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges remain significant and the development of fully functional quantum computers is still far off.

The future of AI accelerated by quantum computing looks bright, with real-time, human-like behaviour almost an inevitable result. Quantum computing will be capable of tackling complex AI problems and obtaining multiple solutions to complex problems simultaneously. This will result in artificial intelligence performing complex tasks in more human-like ways. Likewise, robots that can make optimised decisions in real time in practical circumstances will become possible once we can combine quantum computers with artificial intelligence.

How far away is this future? Considering that only a handful of the world's top organisations and universities are currently developing (genuinely huge) quantum computers, which still lack the required processing power, a multitude of robots mimicking humans running about is probably a fair way off, which may comfort some people and disappoint others. Building just one, though? Perhaps not so far away.

Quantum computing and machine learning are extremely well matched: the features the technology offers and the needs of the field are very close. For machine learning, those features matter for what you have to do; they are hard to reproduce on a traditional computer, and you get them natively from a quantum computer. So the match cannot be accidental. It will simply take time for people to find the right techniques for integrating it, and then for the technology to embed itself productively into that space.

Read more here:
The Well-matched Combo of Quantum Computing and Machine Learning - Analytics Insight

With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set the most extensive collection of machine readable coronavirus literature to date. Kratsios characterized the project as a call to action for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To provide guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.
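"Machine readable" here means the corpus ships with structured metadata that code can query directly. A minimal sketch of what that enables, using Python's standard csv module on a tiny inline stand-in for the dataset's metadata file (the column names follow CORD-19's published metadata.csv layout, but the two rows are fabricated for illustration):

```python
import csv
import io

# Inline stand-in for CORD-19's metadata.csv (rows are made up).
sample = io.StringIO(
    "cord_uid,title,abstract,publish_time,url\n"
    "abc123,Example coronavirus study,Findings about SARS-CoV-2,"
    "2020-02-01,https://example.org/p1\n"
    "def456,Pre-print on incubation,Estimated incubation period,"
    "2020-03-05,https://example.org/p2\n"
)

rows = list(csv.DictReader(sample))

# Structured metadata makes keyword triage of thousands of papers trivial,
# e.g. pulling everything that mentions one of the WHO priority topics.
hits = [r for r in rows
        if "incubation" in (r["title"] + " " + r["abstract"]).lower()]
print([r["cord_uid"] for r in hits])   # -> ['def456']
```

Real pipelines would go further, feeding titles and abstracts into NLP models, but the point is that no PDF scraping stands between the researcher and the text.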

To continue reading this article, click here.

Read more:
With Launch of COVID-19 Data Hub, The White House Issues A 'Call To Action' For AI Researchers - Machine Learning Times - machine learning & data...

University Students Are Learning To Collaborate on AI Projects – Forbes

Penn State's Nittany AI Challenge is teaching students the true meaning of collaboration in the age of Artificial Intelligence.

This year, artificial intelligence is the buzzword. On university campuses, students who just graduated high school are checking out the latest computer science course offerings to see if they can take classes in machine learning. The reality of the age of Artificial Intelligence has caught many university administrators' attention: everyone, no matter their job, skill set, or major, will at some point encounter AI in their work and their life. Penn State saw the benefits of working on AI projects early, specifically when it comes to teamwork and collaboration. Since 2017, its successful Nittany AI Challenge has each year helped teach students what it means to collaborate in the age of Artificial Intelligence.

Every university has challenges. Students bring a unique perspective and understanding of these challenges. The Nittany AI Challenge was created to provide a framework and support structure to enable students to form teams and collaborate on ideas that could address a problem or opportunity, using AI technology as the enabler. The Nittany AI Challenge is our innovation engine, ultimately putting students on stage to demonstrate how AI and machine learning can be leveraged to have a positive impact on the university.

The Nittany AI Challenge runs for 8 months each year and has multiple phases: the idea phase, the prototype phase, and the MVP phase. At the end, there's a pitch competition between 5 to 10 teams competing for a pool of $25,000. The challenge incentivizes students to keep going by awarding the best teams at each phase with another combined total of $25,000 over the 8 months of competition. By the time pitching comes around for the top 5 to 10 teams, they have not only figured out how to work together as a team, but have also experienced what it means to receive funding.

This year, the Nittany AI Challenge has expanded from asking students to solve the university's problems using AI to broader categories based on the theme of AI for Good. Students are competing in additional categories such as humanitarianism, healthcare, and sustainability/climate change.

In the first two years, students formed teams among friends within their own circles. As the competition has matured, there is now an online system that allows students to sign up for teams.

Students often form teams with students from different backgrounds and majors based on a shared interest in a project. Christie Warren, the app designer from the LionPlanner team, helped her team create a 4-year degree planning tool that won 2018's competition. She credits the competition for giving her a clear path to a career in app design and teaching her how to collaborate with developers.

For me the biggest learning curve is to learn to work alongside developers, as far as when to start to go into the high fidelity designs, wait for people to figure out the features that need to be developed, etc. Just looking at my designs and being really open to improvements and going through iterations of the design with the team helped me overcome the learning curve.

Early on, technology companies such as Microsoft, Google Cloud, IBM Watson, and Amazon Web Services recognized the value of an on-campus AI competition such as the Nittany AI Challenge for providing teamwork education to students before they embark on internships with technology companies. They've been sponsoring the competition since its inception.

Both the students and us from Microsoft benefit from the time working together, in that we learn about each other's cultures, needs and aspirations. Challenges like the Nittany AI Challenge highlight that studying in Higher Education should be a mix of learning and fun. If we can help the students learn and enjoy the experience then we also help them foster a positive outlook about their future of work.

While having fun, some students, like Michael D. Roos, project manager and backend developer on the LionPlanner team, have seen synergy between their internships and their project for the Nittany AI competition. He credits the competition with giving him a pathway to success beyond a college education alone. He's a lot more confident stepping out into the real world, whether working for a startup or a large technology company, because of the experience gained.

I was doing my internship with Microsoft during a part of the competition. Some of the technology skills I learned at my internship I could then apply to my project for the competition. Also, having the cumulative experience of working on the project for the Nittany AI competition before going into my internship helped me with my internship. Even though I was interning at Microsoft, my team had similar startup vibes as the competition, my role on the team was similar to my role on the project. I felt I had a headstart in that role because of my experience in the competition.

One of the biggest myths the Nittany AI Challenge has helped debunk is that AI implementations require only the skills of technologists. While computer science students with a keen interest in machine learning and AI are central to every project in the Nittany AI Challenge, it's often the visionary project managers, creative designers, and students majoring in other disciplines such as healthcare, biological sciences, and business who end up making the most impactful contributions to the team.

The AI Alliance makes the challenge really accessible. For people like me who don't know AI, we can learn AI along the way.

The LionPlanner team that won the competition in 2018 attributes its success mainly to the outstanding design that won over the judges. Christie, the app designer on the team, credits her success to the way the team collaborated, which enabled her to communicate with developers effectively.

Every member of the Nyansapo team, which is trying to bring English education to remote parts of Kenya via NLP learning software, attributes their success to the energy and motivation behind the project's vision. Because everyone feels strongly about the vision, even though they have one of the biggest teams in the competition, everyone is pulling together and collaborating.

I really like to learn by doing. Everybody on the team joined, not just because they had something to offer, but because the vision was exciting. We are all behind this problem of education inequality in Kenya. We all want to get involved to solve this problem. We are this excited to want to go the extra step.

Not only does the Nittany AI challenge teach students the art of interdisciplinary collaboration, but it also teaches students time management, stress management, and how to overcome difficulties. During the competition, students are often juggling difficult coursework, internships, and other extracurricular activities. They often feel stressed and overwhelmed. This can pose tremendous challenges for team communication. But, as many students pointed out to me, these challenges are opportunities to learn how to work together.

There was a difficult moment yesterday in between my classes, where I had to schedule a meeting with Edward to discuss the app interface later during the day; at times, everything can feel a bit chaotic. In the back of my head, when I think about the vision of our project, how much I'm learning on the project, and how I'm working with all my friends, these are the things that keep me going even through hard times.

One of the projects from the Nittany AI Challenge that the university is integrating into its systems is the LionPlanner tool. It uses AI algorithms to match students' profiles with clubs and extracurricular activities they might be interested in. It also helps students plan their courses, customizing their degree so they can finish on time while keeping its cost as low as possible.

The students who worked on the project are now working to create a Prospective Student Planning Tool that can integrate into the University Admissions Office systems to be used by transfer students.

Currently, in the U.S., there's a skill gap of almost 1.5 million high-tech jobs. Companies are having a hard time hiring people with the skills to work in innovative companies. We now have coding camps, apprenticeships, and remote coding platforms.

Why not also have university-sponsored AI challenges where students can demonstrate their potential and abilities to collaborate?

The Nittany AI Challenge from Penn State presents a unique solution to a problem many employers are trying to solve in the age of innovation. By sitting in the audience as judges, companies can follow the teams' progress and watch students shine in their respective areas. Students are not pitching their skills; they are pitching their work products, showing what they can do in real time over 8 months.

This could be a new way for companies to recruit. We have NFL drafts. Why not have drafts for star players on these AI teams that work especially well with others?

This year, Penn State introduced the Nittany AI Associates program where students can continue their work from the Nittany AI Challenge so that they can develop their ideas further.

So while the challenge is the "Innovation Engine", the Nittany AI Associates program provides students the opportunity to work on managed projects with an actual client, funding to reduce their debt (paid internships), and a low-cost, low-risk avenue for the university (and other clients) to innovate, while providing AI knowledge transfer to client staff (the student becomes the teacher).

In the age of AI, education is becoming more multidisciplinary. When higher education institutions evolve the way they teach their students to enable both innovation and collaboration, the potential they unleash in their graduates can have an exponential effect on their careers and on the companies that hire them. Creating competitions and collaborative work projects within the university, such as the Nittany AI Challenge, that foster win-win thinking might just be the key to the kind of innovation higher education needs to keep up in the age of AI.

Originally posted here:
University Students Are Learning To Collaborate on AI Projects - Forbes