Encryption on Facebook, Google, others threatened by planned new bill – Reuters

WASHINGTON (Reuters) - U.S. legislation will be introduced in the coming weeks that could hurt technology companies' ability to offer end-to-end encryption, two sources with knowledge of the matter said, and it aims to curb the distribution of child sexual abuse material on such platforms.

FILE PHOTO: An encryption message is seen on the WhatsApp application on an iPhone, March 27, 2017. REUTERS/Phil Noble

The bill, proposed by Senate Judiciary Committee Chairman Lindsey Graham and Democratic Senator Richard Blumenthal, aims to fight such material on platforms like Facebook (FB.O) and Alphabet's Google (GOOGL.O) by making them liable to state prosecution and civil lawsuits. It does so by threatening a key immunity the companies have under federal law called Section 230.

This law shields certain online platforms from being treated as the publisher or speaker of information they publish, and largely protects them from liability involving content posted by users.

The bill, titled the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019, or the EARN IT Act, threatens this key immunity unless companies comply with a set of best practices, which will be determined by a 15-member commission led by the Attorney General.

The move is the latest example of how regulators and lawmakers in Washington are reconsidering the need for incentives that once helped online companies grow, but are increasingly viewed as impediments to curbing online crime, hate speech and extremism.

The sources said the U.S. tech industry fears these best practices will be used to condemn end-to-end encryption - a technology for privacy and security that scrambles messages so that they can be deciphered only by the sender and intended recipient. Federal law enforcement agencies have complained that such encryption hinders their investigations.
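To make the technology concrete, here is a minimal sketch of end-to-end public-key encryption in Python, using the open-source PyNaCl library. It illustrates the general idea only; it is not how WhatsApp, Facebook, or any other platform actually implements its protocol, and the keys and message are invented for the example.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their devices.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their own private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Only the recipient (or the sender) can reconstruct the shared secret and decrypt.
receiving_box = Box(recipient_key, sender_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"

# Anyone in the middle, including the platform relaying the message, sees only bytes.
print(ciphertext.hex()[:32], "...")
```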

Online platforms currently are not required to give law enforcement access to their encrypted networks. The proposed legislation provides a workaround, the sources said.

"This is a deeply dangerous and flawed piece of legislation that will put every American's security at risk... it is deeply irresponsible to try to undermine security for online communications," said Jesse Blumenthal, who leads technology and innovation at Stand Together, also known as the Koch network, funded by billionaire Charles Koch. The group sides with tech companies that have come under fire from lawmakers and regulators in Washington.

"There is no such thing as a back door just for good guys that does not create a front door for bad guys," Blumenthal said.

On Wednesday, U.S. Attorney General William Barr questioned whether Facebook, Google and other major online platforms still need the immunity from legal liability that has prevented them from being sued over material their users post.

During a Senate Judiciary hearing on encryption in December, a bipartisan group of senators warned tech companies that they must design their products' encryption to comply with court orders. Senator Graham issued a warning to Facebook and Apple: "This time next year, if we haven't found a way that you can live with, we will impose our will on you."

A spokeswoman for Senator Graham said that "on timing, other details, we don't have anything more to add right now." She pointed Reuters to recent comments by the senator saying the legislation is not ready but getting close.

A spokeswoman for Senator Blumenthal said he was encouraged by the progress made by the bill.

A discussion draft of the EARN IT Act has been doing the rounds and has been criticized by technology companies.

Facebook and Google did not respond to requests for comment.

Reporting by Nandita Bose in Washington; Editing by Bernadette Baum


What Is an Encryption Backdoor? – How-To Geek


You might have heard the term "encryption backdoor" in the news recently. We'll explain what it is, why it's one of the most hotly contested topics in the tech world, and how it could affect the devices you use every day.

Most of the systems consumers use today have some form of encryption. To get past it, you have to provide some kind of authentication. For example, if your phone is locked, you have to use a password, your fingerprint, or facial recognition to access your apps and data.

These systems generally do an excellent job of protecting your personal data. Even if someone takes your phone, he can't gain access to your information unless he figures out your passcode. Plus, most phones can wipe their storage or become unusable for a time if someone tries to force them to unlock.

A backdoor is a built-in way of circumventing that type of encryption. It essentially allows a manufacturer to access all the data on any device it creates. And it's nothing new: this reaches all the way back to the abandoned Clipper chip in the early '90s.

Many things can serve as a backdoor. It can be a hidden aspect of the operating system, an external tool that acts as a key for every device, or a piece of code that creates a vulnerability in the software.


In 2015, encryption backdoors became the subject of a heated global debate when Apple and the FBI were embroiled in a legal battle. Through a series of court orders, the FBI compelled Apple to crack an iPhone that belonged to a deceased terrorist. Apple refused to create the necessary software and a hearing was scheduled. However, the FBI tapped a third party (GrayKey), which used a security hole to bypass the encryption, and the case was dropped.

The debate has continued among technology firms and in the public sector. When the case first made headlines, nearly every major technology company in the U.S. (including Google, Facebook, and Amazon) supported Apple's decision.

Most tech giants don't want the government to compel them to create an encryption backdoor. They argue that a backdoor makes devices and systems significantly less secure because you're designing the system with a vulnerability.

While only the manufacturer and the government would know how to access the backdoor at first, hackers and malicious actors would eventually discover it. Soon after, exploits would become available to many people. And if the U.S. government gets the backdoor method, would the governments of other countries get it, too?

This creates some frightening possibilities. Systems with backdoors would likely increase the number and scale of cybercrimes, from targeting state-owned devices and networks to creating a black market for illegal exploits. As Bruce Schneier wrote in The New York Times, it also potentially opens up critical infrastructure systems that manage major public utilities to foreign and domestic threats.

Of course, it also comes at the cost of privacy. An encryption backdoor in the hands of the government allows it to look at any citizen's personal data at any time without consent.

Government and law enforcement agencies that want an encryption backdoor argue that the data shouldn't be inaccessible to law enforcement and security agencies. Some murder and theft investigations have stalled because law enforcement was unable to access locked phones.

The information stored in a smartphone, such as calendars, contacts, messages, and call logs, is all material a police department might have the legal right to search with a warrant. The FBI said it faces a "Going Dark" challenge as more data and devices become inaccessible.

Whether companies should create a backdoor in their systems remains a significant policy debate. Lawmakers and public officials frequently point out that what they really want is a front door that allows them to request decryption under specific circumstances.

However, a front door and an encryption backdoor are largely the same. Both still involve creating an exploit to grant access to a device.

Until an official decision is rendered, this issue will likely continue to pop up in the headlines.


Russia’s War On Encryption Stumbles Forth With Ban Of Tutanota – Techdirt

from the what-are-you-so-afraid-of dept

The Russian government continues to escalate its war on encrypted services and VPNs. For years now, Putin's government has slowly but surely taken steps to effectively outlaw secure communications, framing the restrictions as essential for national security, with the real goal of making it harder than ever for Russian citizens to dodge the Putin government's ever-expanding surveillance ambitions.

The latest case in point: starting last Friday, the Russian government banned access to encrypted email service Tutanota, without bothering to provide the company with much of any meaningful explanation.

In a blog post, the company notes that Tutanota has been blocked in Egypt since October of last year, and that impacted users should attempt to access the service via a VPN or the Tor browser:

"Encrypted communication is a thorn in the side to authoritarian governments like Russia as encryption makes it impossible for security services to eavesdrop on their citizens. The current blocking of Tutanota is an act against encryption and confidential communication in Russia.

...We condemn the blocking of Tutanota. It is a form of censorship of Russian citizens who are now deprived of yet another secure communication channel online. At Tutanota we fight for our users right to privacy online, also, and particularly, in authoritarian countries such as Russia and Egypt.

Except VPNs have been under fire in Russia for years as well. Back in 2016 Russia introduced a new surveillance bill promising to deliver greater security to the country. Of course, as with so many similar efforts around the world the bill actually did the exact opposite -- not only mandating new encryption backdoors, but also imposing harsh new data-retention requirements on ISPs and VPN providers forced to now register with the government. As a result, some VPN providers, like Private Internet Access, wound up leaving the country after finding their entire function eroded and having some of their servers seized.

Last year Russia upped the ante, demanding that VPN providers like NordVPN, ExpressVPN, IPVanish, and HideMyAss help block forbidden websites that have been added to Russia's censorship watchlist. And last January, ProtonMail (and ProtonVPN) got caught up in the ban as well after it refused to play the Russian government's registration games. While Russian leaders want the public to believe these efforts are necessary to ensure national security, they're little more than a giant neon sign advertising Russian leaders' immense fear of the Russian public being able to communicate securely.

Filed Under: encryption, russia
Companies: tutanota


Newspaper Lobbyists and Encryption Foes Join the Chorus Against Section 230 – Reason

The Department of Justice has joined the campaign against Section 230, the federal law that enables the internet as we know it. Its effort is probably part of Washington's ongoing battle against encrypted communications. And legacy news media companies are apparently all too happy to help in this fight.

On Wednesday, the U.S. Department of Justice held a "public workshop" on Section 230. Predictably, it wound up being a greatest-hits reel of the half-truths and paranoid bellyaching commonly employed against this important law.

Section 230 prevents digital companies from being automatically treated as the speaker of any third-party speech they assist in putting online. It also allows companies to moderate content without becoming liable for it. The law was passed in 1996 to address the fact that the then-dominant web companies felt forced to choose between very strictly gate-keeping or allowing a free-for-all if they wanted to avoid civil lawsuits and criminal liability over user-generated speech.

Section 230 has never prevented the Justice Department from enforcing federal criminal statutes against online violators, as many have misleadingly argued. (For a quick debunking of more Section 230 myths, see this video.) It acts as a shield against civil lawsuits and against state and local criminal charges.

U.S. Attorney General Bill Barr opened the event yesterday by saying that "criminals and bad actors now use technology to facilitate and expand the scope of their wrongdoing and the victimization of our fellow citizens."

This is the same line of talk Barr has used against encrypted communication.

Barr invoked child exploitation as one reason to reexamine Section 230. But the statute was passed explicitly to address this issue, as part of a larger law concerning "communication decency" and online pornography. It provides the legal framework that allows companies to actually try to keep exploitative content offline. And nothing in Section 230 prevents the enforcement of federal laws against child pornography and other forms of sexual exploitation.

"Section 230 has never prevented federal criminal prosecution of those who traffic in [child sexual abuse material]as more than 36,000 individuals were between 2004 and 2017," points out Berin Szoka in a post dissecting draft antiSection 230 legislation proposed by Sen. Lindsey Graham (RS.C.). Graham's bill would amend Section 230 to lower the standard for legal liability, so tech companies needn't "knowingly" aid in the transmission of illegal content to be found guilty in civil suits and state criminal prosecutions; they'd merely have to be deemed to have acted "recklessly" in such matters as content moderation or product design. The legislation would also create a presidential commission to offer "best practices" on this front. Taken together, Szoka sees this as a back door to banning end-to-end encryption by declaring it reckless. (More on that bill from First Amendment lawyer Eric Goldman here.)

Barr's remarks yesterday didn't explicitly mention giving government backdoors to spy on people. Instead, he played up several popular (and wrong) arguments against Section 230, such as the claim that it's responsible for "big tech" restricting online speech or that it prevents us from having "safer online spaces." Lurking in these comments is the schizophrenic proposition girding a lot of Section 230 opposition: that getting rid of it would somehow permit freer speech online and keep online spaces "safer" and more palatable for everyone.

Barr also engaged in the kind of social media exceptionalism common among Section 230 critics, insisting that online platforms today are so radically different than their predecessors as to warrant different rules. In doing so, he suggested that walled-off internet services like AOL had less control over content than their current counterparts and implied that Section 230 only protects social platforms and "big tech" companies.

In reality, Section 230 applies to even the smallest companies and groups (and is more important for ensuring their existence than it is for big companies, whose army of lawyers and moderators have a better chance of weathering a post-230 onslaught of lawsuits from users). And it applies to many types of digital entities, including behind-the-scenes web architecture (such as blogging platforms and email newsletter software), consumer review websites, crowdfunding apps, podcast networks, independent message-boards, dating platforms, digital marketing tools, email providers, and many more.

Barr said Wednesday that the Justice Department was "not here to advocate for a position." Yet everything else in his speech suggested otherwise, including his waxing about how civil lawsuits against tech companies (of the sort disallowed by Section 230) could "work hand-in-hand with the department's law enforcement efforts."

He concluded the talk by saying "we must remember that the goal of firms is to maximize profit, while the mission of government is to protect American citizens and society."

So: tech companies bad, government good. Got that?

Not everyone in Washington buys this simplistic argument, thank goodness. In a recent Washington Post op-ed, Sen. Ron Wyden (D-Ore.), who co-authored Section 230, explains how the law protects individual speech rights and points out that major media and tech companies have in fact been working with regulators against the law.

"Occasionally," writes Wyden, "Congress actually passes a law that protects the less powerful elements of our society, the insurgents and the disrupters. That's what it did in 1996 when it passed [Section 230]." He explains that the law "was written to provide legal protection to online platforms so they could take down objectionable material without being dragged into court."

"Without 230, social media couldn't exist," adds Wyden. Neither could movements like Black Lives Matter or #MeToo. "Whenever laws are passed to put the government in control of speech, the people who get hurt are the least powerful in society."

People often pretend government regulation of speech is somehow neutral. But defining permissible speech can change greatly depending on subjective and partisan priorities. Without Section 230, what online content is permissible and who gets punished would be determined not by an array of private companies but by a centralized political institution with the power to imprison, not just deplatform.

"I'm certain this administration would use power to regulate speech to punish its enemies and protect its allies," writes Wyden at the Post. "It would threaten Facebook or YouTube for taking down white supremacist content. It would label Black Lives Matter activists as purveyors of hate."

A Democratic administration would approve of and disfavor different sorts of speech. But we would still have a partisan and centralized command over the bounds of online communication. And either way, the spoils would go to the big tech companies that are best able to lobby, contribute, curry favor, or otherwise game the system.

Powerful entities like Facebook, Disney, and IBM are all fighting to re-write the rules for digital speech in their favor. A recent New York Times article detailed how the fight against 230 is being led by a coalition of old media companies resentful of Google, Facebook, and the like, and other corporations whose business has been bitten into by digital tools. For instance, Marriott has been campaigning against Section 230 as a way to stick it to vacation rental platforms like Airbnb.

"The easiest lever to hurt tech companies that a lot of people see is 230," Stanford Law School professor Daphne Keller told the Times.

Mike Masnick suggests this illustrates the "concept of political entrepreneurs v. market entrepreneurs. One of them builds better, more innovative products that increase consumer welfare and increase the overall size of the pie by making things people want. The other uses its enormous power and political connections to pass regulations that hinder competitors who have innovated."

The companies now opposing Section 230 are "the legacy companies which have fallen behind, which have not adapted, and which are using their political will to try to suppress and destroy the open systems that the rest of us now depend on," Masnick writes.

One such example from this week is the News Media Alliance, formerly known as the Newspaper Association of America, which "represents approximately 2,000 news organizations across the United States and Europe." At the Justice Department's Wednesday workshop, the group's president, David Chavern, testified that "Section 230 has created a deeply distorted variable liability marketplace for media." This, he said, is bad not just "for news publishing but for the health of our society."

Chavern insisted this wasn't merely about news industry profits. But he ended his testimony by endorsing a "Journalism Competition & Preservation Act," which he said "would allow news publishers to collectively negotiate with the platforms and return value back to professional journalism," which sure makes it sound like this is about news industry profits.

And when entrenched industry profits line up with the feds' surveillance agenda? That's when we're invited to kiss the open internet goodbye.


Last Week In Venture: Eyes As A Service, Environmental Notes And Homomorphic Encryption – Crunchbase News

Hello, and welcome back to Last Week In Venture, the weekly rundown of deals that may have flown under your radar.

There are plenty of companies operating outside the unicorn and public company spotlight, but that doesn't mean their stories aren't worth sharing. They offer a peek around the corner at what's coming next, and what investors today are placing bets on.

Without further ado, let's check out a few rounds from the week that was in venture land.

I don't know how you're reading this, but you are. Most of us read with our eyes, but some read with their ears or their fingers. Blind people frequently have options when it comes to reading, but there's more to life than just reading.

Imagine going to a grocery store and stepping up to the bakery counter. You might be able to read a label with your eyes, but if there's no label you could still probably figure out what type of bread you're buying based on its color and shape. But what if you couldn't see (or see well)? What are you going to do, touch all the bread to figure out its size and shape? Get real down low and smell 'em all? (Which, for the record, sounds lovely, if a little unhygienic.)

You'd probably ask someone who can see for some help. That's the kind of interaction a service like Be My Eyes facilitates. Headquartered in San Francisco, the startup, founded in 2014, connects blind people and people with low vision to sighted volunteers over on-demand remote video calls facilitated through the company's mobile applications for Android and iOS. The sighted person can see what's going on and offer real-time support for the person who can't see.

The company announced this week that it raised $2.8 million in a Series A funding round led by Cultivation Capital. In 2018, Be My Eyes launched a feature called Specialized Help, which connects blind and low-vision people to service representatives at companies. Microsoft, Google, Lloyds Banking Group and Procter & Gamble are among the companies enrolled in the program.

Be My Eyes initially launched as an all-volunteer effort. The company says it has a community of more than 3.5 million sighted volunteers helping almost 200,000 visually impaired people worldwide. According to Crunchbase data, the company has raised over $5.3 million in combined equity and grant funding.

The environment is, like, super important. It's the air we breathe and the water we drink. Regardless of your opinion on environmental regulations, most come from a good place: Ensuring the long-term sustainability of life on a planet with finite resources by putting a check on destructive activity. Where there's regulation, there's a need to comply with it, and compliance can be kind of a drag. There is a lot of paperwork to do.

Wildnote is a company based in San Luis Obispo, California. It's in the business of environmental data collection, management and reporting using its eponymous mobile application and web platform. Field researchers and compliance professionals can capture and record information (including photos) on-site using either standard reporting forms or their own custom workflows. The company's data platform also features export capabilities, which produce PDFs or raw datasets in multiple formats.

The company announced $1.35 million in seed funding from Entrada Ventures and HG Ventures, the corporate venture arm of The Heritage Group. Wildnote was part of the 2019 cohort of The Heritage Group's accelerator program, produced in collaboration with Techstars, which aimed to assist startups working on problems from legacy industries like infrastructure, materials and environmental services.

Encryption uses math to transform information humans and machines can read and understand into information that we can't. Encrypted data can be decrypted by those in possession of a cryptographic key. To everyone else, encrypted data is just textual gobbledegook.
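As a quick illustration of that round trip, here is a minimal sketch using the symmetric Fernet recipe from Python's open-source cryptography package; the library choice and the sample plaintext are ours, picked purely for demonstration.

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # the cryptographic key
f = Fernet(key)

token = f.encrypt(b"quarterly sales: 1,204 units")
print(token[:40])                  # gobbledegook to anyone without the key

print(f.decrypt(token))            # the key holder recovers the plaintext

try:
    Fernet(Fernet.generate_key()).decrypt(token)   # a different key fails
except InvalidToken:
    print("A different key cannot decrypt it.")
```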

The thing is, to computers, encrypted data is also textual gobbledegook. Computer scientists and cryptographers have long been looking for a way to work with encrypted data without needing to decrypt it in the process. Homomorphic encryption has been a subject of academic research and corporate research and development labs for years, but it appears a commercial homomorphic encryption product has hit the market, and the company behind it is raising money to grow.
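For a feel of what working with encrypted data means in the simplest case, here is a sketch using the additively homomorphic Paillier scheme via the open-source python-paillier (phe) package. This is a toy illustration of the concept, not Enveil's technology and not a description of its ZeroReveal products.

```python
# pip install phe
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Encrypt two values separately.
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# A party holding only the ciphertexts can still add them, or scale one by a
# plaintext constant, without ever seeing 17 or 25.
enc_sum = enc_a + enc_b
enc_scaled = enc_a * 3

# Only the private-key holder can decrypt the results.
print(private_key.decrypt(enc_sum))     # 42
print(private_key.decrypt(enc_scaled))  # 51
```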

The company we're talking about here is Enveil. Headquartered in Fulton, Maryland, the company makes software it calls ZeroReveal. Its ZeroReveal Search product allows customers to encrypt and store data while also enabling users to perform searches directly against ciphertext data, meaning that data stays secure. Its ZeroReveal Compute Fabric offers client- and server-side applications which let enterprises securely operate on encrypted data stored on premises, in a large commercial cloud computing platform, or obtained from third parties.

Enveil raised $10 million in its Series A round, which was led by C5 Capital. Participating investors include 1843 Capital, Capital One Growth Ventures, MasterCard and Bloomberg Beta. The company was founded in 2014 by Ellison Anne Williams and has raised a total of $15 million; prior investors include cybersecurity incubator DataTribe and In-Q-Tel, the nonprofit venture investment arm of the U.S. Central Intelligence Agency.

Image Credits: Last Week In Venture graphic created by JD Battles. Photo by Daniil Kuzelev, via Unsplash.


Sophos Takes On Encrypted Network Traffic With New XG Firewall 18 – CRN: Technology news for channel partners and solution providers

Sophos has debuted a new version of its XG Firewall that provides visibility into previously unobservable transport mechanisms while retaining high levels of performance.

The Oxford, U.K.-based platform security vendor will make it more difficult for adversaries to hide information in different protocols by inspecting all encrypted traffic with the XG Firewall 18, according to Chief Product Officer Dan Schiappa. Adversaries are turning to encryption in their exploits, with 23 percent of malware families using encrypted communication for command and control or installation.

"We've kind of turned the light on in a kitchen full of roaches," Schiappa told CRN.


Pricing for the Sophos XG Firewall starts at $359 per year and scales based on term length and model, according to the company. The performance of the XG Firewall has been vastly improved by better determining which applications and traffic should go through the company's deep packet inspection engine, according to Schiappa.

By leveraging SophosLabs intelligence, the company is able to rapidly push safe or known traffic through while quarantining only the unknown or unsafe traffic for deep packet inspection, he said. The XG Firewall will also be easier to manage in Sophos Central with better alert engines and reporting capabilities, according to Schiappa.

Sophos Central now has full firewall management capabilities, meaning that customers can apply policies universally across multiple firewalls from the central dashboard and granularly adjust settings for a specific firewall from the same location. In addition, synchronized app control has strengthened the sharing of information between the endpoint and the firewall, Schiappa said.

The company has been working on the XG Firewall 18 for more than two years, he said, and considers it to be the most transformative version of the XG thanks to the new Xstream architecture.

"We really wanted to build the firewall without any historical backdrop," Schiappa said. "We'll have the most next-gen and recent firmware OS on the market, and that was something that was important to us."

The improvements Sophos has made around security and performance combined with the vast gains in its natural rules engine will make the XG Firewall much more credible to enterprises, according to Schiappa. Adding enterprise management functionality also will help Sophos attract larger customers at a much higher rate than in the past, Schiappa said.

"We now have an enterprise-credible firewall, but we're never going to abandon our sweet spot in the SMB and midmarket," he said.

Existing Sophos customers will get the XG Firewall 18 as part of the normal upgrade process without any type of new license required, according to Schiappa. Customers will be notified when the Xstream architecture is available for their model of firewall.

The growth of Sophos Central and embrace of synchronized security have dramatically increased the number of Sophos products being used by the average customer, according to Schiappa. Although the XG Firewall 18 is a great stand-alone product, it also represents a golden opportunity for channel partners to expand their footprint with endpoint-focused customers into the network.

"This was a big effort, and I think it's going to be worth it," he said.


What is machine learning? – Brookings

In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term artificial intelligence to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations: to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what's known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s (one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as machine learning), it wasn't until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI or DeepMind announcing stunning new advances seemingly every week.


The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it's not possible to tease out the implications of AI without understanding how machine learning works, as well as how it doesn't.

The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy: each of these tasks demonstrates intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don't always realize that that's what we're doing.

Back in the 1950s, though, McCarthy and his colleagues did realize it. And they understood something else too: Computers should be very good at computing probabilities. Transistors had only just been invented, and had yet to fully supplant vacuum tube technology. But it was clear even then that with enough data, digital computers would be ideal for estimating a given probability. Unfortunately for the first AI researchers, their timing was a bit off. But their intuition was spot on, and much of what we now know as AI is owed to it. When Facebook recognizes your face in a photo, or Amazon Echo understands your question, they're relying on an insight that is over sixty years old.


The machine learning algorithm that Facebook, Google, and others all use is something called a deep neural network. Building on the prior work of Warren McCulloch and Walter Pitts, Frank Rosenblatt coded one of the first working neural networks in the late 1950s. Although today's neural networks are a bit more complex, the main idea is still the same: The best way to estimate a given probability is to break the problem down into discrete, bite-sized chunks of information, or what McCulloch and Pitts termed a neuron. Their hunch was that if you linked a bunch of neurons together in the right way, loosely akin to how neurons are linked in the brain, then you should be able to build models that can learn a variety of tasks.

To get a feel for how neural networks work, imagine you wanted to build an algorithm to detect whether an image contained a human face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a circle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is that an image contains a face. (For an illustration of how deep neural networks learn hierarchical feature representations, see here.)

The magic of deep learning is that the algorithm learns to do all this on its own. The only thing a researcher does is feed the algorithm a bunch of images and specify a few key parameters, like how many layers to use and how many neurons should be in each layer, and the algorithm does the rest. At each pass through the data, the algorithm makes an educated guess about what type of information each neuron should look for, and then updates each guess based on how well it works. As the algorithm does this over and over, eventually it learns what information to look for, and in what order, to best estimate, say, how likely an image is to contain a face.
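As a rough sketch of that workflow, here is what specifying the layers and handing over labeled images looks like in Python with the Keras API bundled in TensorFlow. The layer sizes, image dimensions, and the random stand-in data are all assumptions made for the example, not a reference face detector.

```python
# pip install tensorflow
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in data: 1,000 grayscale 64x64 images labeled face (1) or no face (0).
x_train = np.random.rand(1000, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))

# The researcher only declares the architecture; the network decides what to look for.
model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),  # simple shapes
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),                           # composite shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                               # higher-level features
    layers.Dense(1, activation="sigmoid"),                             # probability of a face
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32)   # repeated passes over the data
print(model.predict(x_train[:1]))                      # estimated probability of a face
```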

What's remarkable about deep learning is just how flexible it is. Although there are other prominent machine learning algorithms too, albeit with clunkier names like gradient boosting machines, none are nearly so effective across nearly so many domains. With enough data, deep neural networks will almost always do the best job at estimating how likely something is. As a result, they're often also the best at mimicking intelligence too.

Yet as with machine learning more generally, deep neural networks are not without limitations. To build their models, machine learning algorithms rely entirely on training data, which means both that they will reproduce the biases in that data, and that they will struggle with cases that are not found in that data. Further, machine learning algorithms can also be gamed. If an algorithm is reverse engineered, it can be deliberately tricked into thinking that, say, a stop sign is actually a person. Some of these limitations may be resolved with better data and algorithms, but others may be endemic to statistical modeling.

To glimpse how the strengths and weaknesses of AI will play out in the real-world, it is necessary to describe the current state of the art across a variety of intelligent tasks. Below, I look at the situation in regard to speech recognition, image recognition, robotics, and reasoning in general.

Ever since digital computers were invented, linguists and computer scientists have sought to use them to recognize speech and text. Known as natural language processing, or NLP, the field once focused on hardwiring syntax and grammar into code. However, over the past several decades, machine learning has largely surpassed rule-based systems, thanks to everything from support vector machines to hidden Markov models to, most recently, deep learning. Apple's Siri, Amazon's Alexa, and Google's Duplex all rely heavily on deep learning to recognize speech or text, and represent the cutting edge of the field.


The specific deep learning algorithms at play have varied somewhat. Recurrent neural networks powered many of the initial deep learning breakthroughs, while hierarchical attention networks are responsible for more recent ones. What they all share in common, though, is that the higher levels of a deep learning network effectively learn grammar and syntax on their own. In fact, when several leading researchers recently set a deep learning algorithm loose on Amazon reviews, they were surprised to learn that the algorithm had not only taught itself grammar and syntax, but a sentiment classifier too.

Yet for all the success of deep learning at speech recognition, key limitations remain. The most important is that because deep neural networks only ever build probabilistic models, they don't understand language in the way humans do; they can recognize that the sequences of letters k-i-n-g and q-u-e-e-n are statistically related, but they have no innate understanding of what either word means, much less the broader concepts of royalty and gender. As a result, there is likely to be a ceiling to how intelligent speech recognition systems based on deep learning and other probabilistic models can ever be. If we ever build an AI like the one in the movie Her, which was capable of genuine human relationships, it will almost certainly take a breakthrough well beyond what a deep neural network can deliver.
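The king/queen point can be seen in miniature with word embeddings, which capture exactly these statistical relationships and nothing more. The sketch below uses the open-source gensim library on an invented toy corpus; real models train on billions of words, and on a corpus this small the numbers are noisy.

```python
# pip install gensim
from gensim.models import Word2Vec

# A tiny invented corpus; the words' "meanings" come only from co-occurrence.
sentences = [
    ["the", "king", "rules", "the", "realm"],
    ["the", "queen", "rules", "the", "realm"],
    ["the", "king", "and", "queen", "wear", "crowns"],
    ["the", "farmer", "plows", "the", "field"],
]

model = Word2Vec(sentences, vector_size=25, window=3, min_count=1, epochs=200, seed=1)

# "king" and "queen" tend to land near each other because they appear in similar
# contexts; the model has no concept of royalty or gender behind the numbers.
print(model.wv.similarity("king", "queen"))
print(model.wv.similarity("king", "farmer"))
```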

When Rosenblatt first implemented his neural network in 1958, he initially set it loose on images of dogs and cats. AI researchers have been focused on tackling image recognition ever since. By necessity, much of that time was spent devising algorithms that could detect pre-specified shapes in an image, like edges and polyhedrons, using the limited processing power of early computers. Thanks to modern hardware, however, the field of computer vision is now dominated by deep learning instead. When a Tesla drives safely in autopilot mode, or when Google's new augmented-reality microscope detects cancer in real time, it's because of a deep learning algorithm.


Convolutional neural networks, or CNNs, are the variant of deep learning most responsible for recent advances in computer vision. Developed by Yann LeCun and others, CNNs don't try to understand an entire image all at once, but instead scan it in localized regions, much the way a visual cortex does. LeCun's early CNNs were used to recognize handwritten numbers, but today the most advanced CNNs, such as capsule networks, can recognize complex three-dimensional objects from multiple angles, even those not represented in training data. Meanwhile, generative adversarial networks, the algorithm behind deep fake videos, typically use CNNs not to recognize specific objects in an image, but instead to generate them.
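The localized-regions idea is easy to see without any deep learning library at all: a convolution just slides a small kernel across the image and records one response per patch. The sketch below hand-writes a vertical-edge kernel in plain NumPy; in a CNN, kernels like this are learned from data rather than written by hand.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, one local patch at a time."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]     # a localized region
            out[i, j] = np.sum(patch * kernel)    # one response per region
    return out

# A toy 6x6 "image": dark on the left half, bright on the right half.
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)

# A hand-written vertical-edge detector.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

print(convolve2d(image, kernel))   # strong responses only where the edge sits
```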

As with speech recognition, cutting-edge image recognition algorithms are not without drawbacks. Most importantly, just as all that NLP algorithms learn are statistical relationships between words, all that computer vision algorithms learn are statistical relationships between pixels. As a result, they can be relatively brittle. A few stickers on a stop sign can be enough to prevent a deep learning model from recognizing it as such. For image recognition algorithms to reach their full potential, they'll need to become much more robust.

What makes our intelligence so powerful is not just that we can understand the world, but that we can interact with it. The same will be true for machines. Computers that can learn to recognize sights and sounds are one thing; those that can learn to identify an object as well as how to manipulate it are another altogether. Yet if image and speech recognition are difficult challenges, touch and motor control are far more so. For all their processing power, computers are still remarkably poor at something as simple as picking up a shirt.

The reason: Picking up an object like a shirt isn't just one task, but several. First you need to recognize a shirt as a shirt. Then you need to estimate how heavy it is, how its mass is distributed, and how much friction its surface has. Based on those guesses, you then need to estimate where to grasp the shirt and how much force to apply at each point of your grip, a task made all the more challenging because the shirt's shape and distribution of mass will change as you lift it up. A human does this trivially and easily. But for a computer, the uncertainty in any of those calculations compounds across all of them, making it an exceedingly difficult task.

Initially, programmers tried to solve the problem by writing programs that instructed robotic arms how to carry out each task step by step. However, just as rule-based NLP can't account for all possible permutations of language, there also is no way for rule-based robotics to run through all the possible permutations of how an object might be grasped. By the 1980s, it became increasingly clear that robots would need to learn about the world on their own and develop their own intuitions about how to interact with it. Otherwise, there was no way they would be able to reliably complete basic maneuvers like identifying an object, moving toward it, and picking it up.

The current state of the art is something called deep reinforcement learning. As a crude shorthand, you can think of reinforcement learning as trial and error. If a robotic arm tries a new way of picking up an object and succeeds, it rewards itself; if it drops the object, it punishes itself. The more the arm attempts its task, the better it gets at learning good rules of thumb for how to complete it. Coupled with modern computing, deep reinforcement learning has shown enormous promise. For instance, by simulating a variety of robotic hands across thousands of servers, OpenAI recently taught a real robotic hand how to manipulate a cube marked with letters.
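A scaled-down version of that reward-and-punish loop is tabular Q-learning, the non-deep ancestor of the methods described here. The toy corridor environment, reward values, and learning constants below are all invented for illustration; OpenAI's actual system used deep neural networks and massive simulation rather than a lookup table.

```python
import random

# A toy corridor: positions 0..5, start at 0, reward only for reaching position 5.
N_STATES, GOAL, ACTIONS = 6, 5, [-1, +1]           # actions: step left or right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2              # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), GOAL)
        reward = 1.0 if next_state == GOAL else -0.01   # reward success, punish dithering
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        # Trial and error: nudge the estimate of how good this action was.
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# After training, the greedy policy in every state is simply "step right" (+1).
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(GOAL)])
```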


Compared with prior research, OpenAI's breakthrough is tremendously impressive. Yet it also shows the limitations of the field. The hand OpenAI built didn't actually feel the cube at all, but instead relied on a camera. For an object like a cube, which doesn't change shape and can be easily simulated in virtual environments, such an approach can work well. But ultimately, robots will need to rely on more than just eyes. Machines with the dexterity and fine motor skills of a human are still a ways away.

When Arthur Samuel coined the term machine learning, he wasn't researching image or speech recognition, nor was he working on robots. Instead, Samuel was tackling one of his favorite pastimes: checkers. Since the game had far too many potential board moves for a rule-based algorithm to encode them all, Samuel devised an algorithm that could teach itself to efficiently look several moves ahead. The algorithm was noteworthy for working at all, much less being competitive with human players. But it also anticipated the astonishing breakthroughs of more recent algorithms like AlphaGo and AlphaGo Zero, which have surpassed all human players at Go, widely regarded as the most intellectually demanding board game in the world.

As with robotics, the best strategic AI relies on deep reinforcement learning. In fact, the algorithm that OpenAI used to power its robotic hand also formed the core of its algorithm for playing Dota 2, a multi-player video game. Although motor control and gameplay may seem very different, both involve the same process: making a sequence of moves over time, and then evaluating whether they led to success or failure. Trial and error, it turns out, is as useful for learning to reason about a game as it is for manipulating a cube.


From Samuel on, the success of computers at board games has posed a puzzle to AI optimists and pessimists alike. If a computer can beat a human at a strategic game like chess, how much can we infer about its ability to reason strategically in other environments? For a long time, the answer was, very little. After all, most board games involve a single player on each side, each with full information about the game, and a clearly preferred outcome. Yet most strategic thinking involves cases where there are multiple players on each side, most or all players have only limited information about what is happening, and the preferred outcome is not clear. For all of AlphaGo's brilliance, you'll note that Google didn't then promote it to CEO, a role that is inherently collaborative and requires a knack for making decisions with incomplete information.

Fortunately, reinforcement learning researchers have recently made progress on both of those fronts. One team outperformed human players at Texas Hold 'Em, a poker game where making the most of limited information is key. Meanwhile, OpenAI's Dota 2 player, which coupled reinforcement learning with what's called a Long Short-Term Memory (LSTM) algorithm, has made headlines for learning how to coordinate the behavior of five separate bots so well that they were able to beat a team of professional Dota 2 players. As the algorithms improve, humans will likely have a lot to learn about optimal strategies for cooperation, especially in information-poor environments. This kind of information would be especially valuable for commanders in military settings, who sometimes have to make decisions without having comprehensive information.

Yet there's still one challenge no reinforcement learning algorithm can ever solve. Since the algorithm works only by learning from outcome data, it needs a human to define what the outcome should be. As a result, reinforcement learning is of little use in the many strategic contexts in which the outcome is not always clear. Should corporate strategy prioritize growth or sustainability? Should U.S. foreign policy prioritize security or economic development? No AI will ever be able to answer questions of higher-order strategic reasoning, because, ultimately, those are moral or political questions rather than empirical ones. The Pentagon may lean more heavily on AI in the years to come, but it won't be taking over the situation room and automating complex tradeoffs any time soon.

From autonomous cars to multiplayer games, machine learning algorithms can now approach or exceed human intelligence across a remarkable number of tasks. The breakout success of deep learning in particular has led to breathless speculation about both the imminent doom of humanity and its impending techno-liberation. Not surprisingly, all the hype has led several luminaries in the field, such as Gary Marcus or Judea Pearl, to caution that machine learning is nowhere near as intelligent as it is being presented, or that perhaps we should defer our deepest hopes and fears about AI until it is based on more than mere statistical correlations. Even Geoffrey Hinton, a researcher at Google and one of the godfathers of modern neural networks, has suggested that deep learning alone is unlikely to deliver the level of competence many AI evangelists envision.

Where the long-term implications of AI are concerned, the key question about machine learning is this: How much of human intelligence can be approximated with statistics? If all of it can be, then machine learning may well be all we need to get to a true artificial general intelligence. But it's very unclear whether that's the case. As far back as 1969, when Marvin Minsky and Seymour Papert famously argued that neural networks had fundamental limitations, even leading experts in AI have expressed skepticism that machine learning would be enough. Modern skeptics like Marcus and Pearl are only writing the latest chapter in a much older book. And it's hard not to find their doubts at least somewhat compelling. The path forward from the deep learning of today, which can mistake a rifle for a helicopter, is by no means obvious.


Yet the debate over machine learning's long-term ceiling is to some extent beside the point. Even if all research on machine learning were to cease, the state-of-the-art algorithms of today would still have an unprecedented impact. The advances that have already been made in computer vision, speech recognition, robotics, and reasoning will be enough to dramatically reshape our world. Just as happened in the so-called Cambrian explosion, when animals simultaneously evolved the ability to see, hear, and move, the coming decade will see an explosion in applications that combine the ability to recognize what is happening in the world with the ability to move and interact with it. Those applications will transform the global economy and politics in ways we can scarcely imagine today. Policymakers need not wring their hands just yet about how intelligent machine learning may one day become. They will have their hands full responding to how intelligent it already is.


Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

As the fuel that powers their ongoing digital transformation efforts, businesses everywhere are looking for ways to derive as much insight as possible from their data. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the citizen data scientist has recently arisen to help close the skills gap. A complementary role, rather than a direct replacement, citizen data scientists lack specific advanced data science expertise. However, they are capable of generating models using state-of-the-art diagnostic and predictive analytics. And this capability is partly due to the advent of accessible new technologies such as automated machine learning (AutoML) that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to democratise machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems."

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.


Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This approach heralded the generation of a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML on a large scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and people increasingly began to engage more with graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the BigTech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in ML's evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of.

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of alternative open-source AutoML libraries, such as AutoKeras, developed by researchers at Texas A&M University and built around the NAS algorithm.
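To show how little code such a library asks of its user, here is roughly how AutoKeras is typically applied to an image-classification task. The dataset, trial count, and epoch count are arbitrary choices for the example, and exact arguments can differ between AutoKeras releases.

```python
# pip install autokeras tensorflow
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# AutoKeras searches over candidate architectures and hyper-parameters on its own.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)   # try three candidate models
clf.fit(x_train, y_train, epochs=2)

print(clf.evaluate(x_test, y_test))   # [loss, accuracy] of the best model found
model = clf.export_model()            # a regular Keras model, ready to deploy
model.summary()
```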

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models - even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML will start to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify those projects whose invaluable insight will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa

View original post here:
Why 2020 will be the Year of Automated Machine Learning - Gigabit Magazine - Technology News, Magazine and Website

Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Do you know how Google Maps predicts traffic? Have you wondered how Amazon Prime or Netflix suggests just the movie you would want to watch? We all know there must be some form of Artificial Intelligence behind it. Machine Learning uses algorithms and statistical models to perform such tasks. The same approach is used to detect faces on Facebook and even to detect cancer. A Machine Learning course can teach the development and application of such models.

Artificial Intelligence mimics human intelligence, and Machine Learning is one of its most significant branches. The need for its development is ongoing and increasing.

Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives, which is why data scientists are in such demand. An aspiring data scientist can learn to develop and apply such algorithms through a Machine Learning certification.

As a subset of Artificial Intelligence, machine learning is applied for varied purposes. There is a misconception that applying Machine Learning algorithms requires prior mathematical knowledge, but a Machine Learning online course would suggest otherwise: in contrast to the conventional bottom-up way of studying, it takes a top-down approach. An aspiring data scientist, a business person, or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of Machine Learning.

Microsoft's research lab uses Machine Learning to study cancer, which helps with individualised oncological treatment and the generation of detailed progress reports. Data engineers apply pattern recognition, Natural Language Processing, and computer vision algorithms to work through large datasets, helping oncologists conduct precise and breakthrough tests.

Likewise, machine learning is applied in biomedical engineering, where it has led to the automation of diagnostic tools. Such tools are used to detect many sorts of neurological and psychiatric disorders.

We have all had a conversation with Siri or Alexa. They use speech recognition to capture our requests, and Machine Learning is applied to generate responses automatically based on previous data. Hello Barbie is a Siri-like assistant for kids to play with; it uses advanced analytics, machine learning, and Natural Language Processing to respond, and as the first AI-enabled toy of its kind it could lead to more such inventions.
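
As a toy illustration of generating responses from previous data (and emphatically not how Siri, Alexa or Hello Barbie work internally), a retrieval approach can match an already-transcribed request against past request-response pairs; all of the example data below is invented.

```python
# Toy retrieval-based responder: return the canned reply whose past request
# most resembles the new, already-transcribed request.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_requests = ["what is the weather today", "play some music", "set a timer for ten minutes"]
past_replies = ["Here is today's forecast.", "Playing your playlist.", "Timer set for ten minutes."]

vectorizer = TfidfVectorizer()
request_vectors = vectorizer.fit_transform(past_requests)

def respond(utterance: str) -> str:
    scores = cosine_similarity(vectorizer.transform([utterance]), request_vectors)
    return past_replies[scores.argmax()]

print(respond("could you play music please"))  # -> "Playing your playlist."
```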

Google Maps uses Machine Learning statistical models that take inputs such as the distance from the start point to the end point, trip duration, and bus schedules. Such historical data is stored and reused. Machine Learning algorithms developed for data prediction recognise the patterns between these inputs and predict approximate time delays.
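
A minimal sketch of this kind of delay prediction might look as follows; the data is synthetic, and the features and model are assumptions for illustration, not Google's actual pipeline.

```python
# Illustrative delay prediction from historical trip features (synthetic data):
# distance, scheduled duration and hour of day -> expected delay in minutes.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
distance_km = rng.uniform(1, 30, n)
scheduled_min = distance_km * 2 + rng.normal(0, 2, n)
hour = rng.integers(0, 24, n)
rush_hour = ((hour >= 7) & (hour <= 9)) | ((hour >= 16) & (hour <= 18))
delay_min = 5 * rush_hour + 0.3 * distance_km + rng.normal(0, 1, n)

X = np.column_stack([distance_km, scheduled_min, hour])
model = GradientBoostingRegressor().fit(X, delay_min)
print(model.predict([[12.0, 25.0, 8]]))  # predicted delay for a morning rush-hour trip
```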

Another well-known Google application, Google Translate, also involves Machine Learning. Deep learning helps it learn language rules from recorded conversations: long short-term memory (LSTM) networks retain and update long-range information, while recurrent neural networks model the sequential structure of language. Even bilingual processing is now feasible.
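
The building blocks mentioned above can be sketched in a few lines of Keras; this is a generic embedding-plus-LSTM model for next-token prediction, not Google Translate's actual architecture, and the vocabulary and sequence sizes are arbitrary.

```python
# Generic sequence model: token embeddings feed an LSTM that predicts the next token.
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, hidden = 5000, 64, 128

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),         # token id -> dense vector
    tf.keras.layers.LSTM(hidden),                             # long short-term memory over the sequence
    tf.keras.layers.Dense(vocab_size, activation="softmax"),  # distribution over the next token
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

dummy_batch = np.random.randint(0, vocab_size, size=(2, 20))  # 2 sequences of 20 token ids
print(model(dummy_batch).shape)                               # -> (2, 5000)
```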

Facebook uses image recognition and computer vision to detect images fed to it as inputs. Statistical models developed using Machine Learning map any information associated with these images, and Facebook uses them to generate automated captions intended to provide descriptions for visually impaired people. This innovation has nudged data engineers to come up with other such valuable real-time applications.
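
One generic way to produce descriptive labels for a photo, which could seed an automated caption (though it is not Facebook's actual system), is to run a pretrained classifier; the image path below is a placeholder.

```python
# Label a photo with a pretrained ImageNet classifier; labels could seed alt text.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (MobileNetV2, decode_predictions,
                                                        preprocess_input)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")

img = image.load_img("photo.jpg", target_size=(224, 224))   # "photo.jpg" is a placeholder path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(label, round(float(score), 3))                     # words an alt-text caption could use
```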

For streaming services such as Netflix, the aim is to increase the likelihood of a customer watching a recommended movie. This is achieved by studying thumbnails: every available movie has separate thumbnails, each assigned individual numerical values, and an algorithm studies those values and derives recommendations through pattern recognition among the numerical data.
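
A toy version of this idea, with made-up feature vectors standing in for the numerical values assigned to thumbnails, might look like this.

```python
# Toy thumbnail-based recommendation: score each candidate thumbnail against the
# average of the thumbnails the user clicked before. All feature values are invented.
import numpy as np

thumbnail_features = {
    "thriller_A": np.array([0.9, 0.1, 0.2]),   # hypothetical features, e.g. darkness, brightness, faces
    "comedy_B":   np.array([0.1, 0.8, 0.7]),
    "drama_C":    np.array([0.5, 0.4, 0.6]),
}

clicked_before = [thumbnail_features["thriller_A"]]
profile = np.mean(clicked_before, axis=0)

def score(v):
    # cosine similarity between the user's profile and a candidate thumbnail
    return float(v @ profile / (np.linalg.norm(v) * np.linalg.norm(profile)))

ranked = sorted(thumbnail_features, key=lambda k: score(thumbnail_features[k]), reverse=True)
print(ranked)   # candidates ordered by similarity to the user's viewing pattern
```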

Tesla uses computer vision, data prediction, and path planning for autonomous driving, and the machine learning practices applied make the innovation stand out. Deep neural networks work with training data to generate driving instructions, and manoeuvres such as changing lanes are learned through imitation learning.
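
Imitation learning in its simplest form is behavioural cloning: a supervised model learns to copy logged expert decisions. The sketch below uses synthetic lane-change data and is in no way Tesla's system.

```python
# Behavioural cloning: learn to imitate a driver's lane-change decisions
# from logged features. The data here is entirely synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
gap_ahead_m = rng.uniform(5, 80, n)          # distance to the car in front
speed_diff = rng.normal(0, 5, n)             # our speed minus that car's speed
changed_lane = (gap_ahead_m < 25) & (speed_diff > 2)   # the "expert" behaviour to imitate

X = np.column_stack([gap_ahead_m, speed_diff])
policy = RandomForestClassifier().fit(X, changed_lane)
print(policy.predict([[15.0, 4.0]]))         # imitated decision for a new situation
```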

Gmail, Yahoo Mail and Outlook use machine learning techniques such as neural networks. These networks detect patterns in historical data, training on records of spam and phishing messages. These spam filters are reported to achieve 99.9 percent accuracy.
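
In the same spirit, a minimal spam filter can be built by feeding text features into a small neural network; the four example messages below are invented, and production filters are far more sophisticated.

```python
# Minimal spam-filter sketch: TF-IDF text features into a small neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "claim your reward, click here",
            "meeting moved to 3pm", "lunch tomorrow?"]
labels = [1, 1, 0, 0]   # 1 = spam, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))
clf.fit(messages, labels)
print(clf.predict(["free prize waiting, click now"]))   # expected to be flagged as spam
```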

As people grow more health conscious, fitness-monitoring applications are on the rise. To stay on top of the market, Fitbit relies on machine learning methods: trained models predict user activities through data pre-processing, data processing, and data partitioning. There remains scope to extend the application to additional purposes.
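
A simplified version of those steps, using simulated accelerometer data rather than anything from Fitbit, might look like this.

```python
# Activity recognition sketch: pre-process simulated accelerometer windows,
# partition the data, then predict the user's activity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 600
# One pre-processed feature per 10-second window: mean absolute acceleration.
mean_abs_accel = np.concatenate([rng.normal(0.1, 0.02, n // 2),    # sitting
                                 rng.normal(1.2, 0.30, n // 2)])   # walking
labels = np.array(["sitting"] * (n // 2) + ["walking"] * (n // 2))

X = mean_abs_accel.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)  # data partitioning
model = RandomForestClassifier().fit(X_train, y_train)
print(model.score(X_test, y_test))   # held-out accuracy
```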

The applications mentioned above are just the tip of the iceberg. As a subset of Artificial Intelligence, machine learning finds its place in many other streams of daily activity.

More:
Machine Learning: Real-life applications and it's significance in Data Science - Techstory

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – Yahoo Finance

Highlights:

Recently, the international evaluation agency Standard Performance Evaluation Corporation (SPEC) finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle and three other companies.

It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The ML test benchmark development plan proposed by Inspur has been approved by members; it aims to provide users with a standard for evaluating machine learning computing performance.

SPEC is a global, authoritative third-party application performance testing organization established in 1988. It aims to establish and maintain a series of performance, function, and energy-consumption benchmarks, and it provides important reference standards for users evaluating the performance and energy efficiency of computing systems. The organization consists of 138 well-known technology companies, universities, and research institutions, including Intel, Oracle, NVIDIA, Apple, Microsoft, Inspur, Berkeley, and Lawrence Berkeley National Laboratory, and its test standards have become an important indicator for many users assessing overall computing performance.

The OSSC executive committee is the permanent body of the SPEC OSG (Open System Group, the earliest and largest committee established by SPEC). It is responsible for supervising and reviewing the daily work of OSG's major technical groups, handling major issues, the addition and removal of members, the direction of research, and decisions on testing standards. The OSSC executive committee also manages the development and maintenance of the SPEC CPU, SPEC Power, SPEC Java, SPEC Virt and other benchmarks.

Machine Learning is an important direction in AI development. Different computing accelerator technologies such as GPUs, FPGAs and ASICs, and different AI frameworks such as TensorFlow and PyTorch, provide customers with a rich marketplace of options. The next important thing for customers to consider, however, is how to evaluate the computing efficiency of the various AI computing platforms. Both enterprises and research institutions require a set of benchmarks and methods to measure performance effectively and find the right solution for their needs.
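
To make the idea of measuring computing efficiency concrete, here is a generic throughput-timing sketch in PyTorch; it illustrates the kind of measurement such benchmarks formalise rather than the SPEC ML benchmark itself, and the matrix size and iteration counts are arbitrary.

```python
# Time a fixed batch of matrix multiplications on whichever device is available
# and report an approximate throughput figure.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

for _ in range(3):                 # warm-up iterations
    _ = a @ b
if device == "cuda":
    torch.cuda.synchronize()

iters = 50
start = time.perf_counter()
for _ in range(iters):
    _ = a @ b
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.perf_counter() - start

flops = 2 * 2048 ** 3 * iters      # approximate floating-point operations performed
print(f"{device}: {flops / elapsed / 1e12:.2f} TFLOP/s")
```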

In the past year, Inspur has done much to advance the development of the SPEC ML standard, contributing test models, architectures, use cases, methods and more, which has been duly acknowledged by the SPEC organization and its members.

Joe Qiao, General Manager of Inspur's Solution and Evaluation Department, believes that SPEC ML can provide an objective comparison standard for AI/ML applications, helping users choose the computing system that best meets their application needs. It also provides a unified measurement standard for manufacturers to improve their technologies and solution capabilities, advancing the development of the AI industry.

About Inspur

Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to http://www.inspursystems.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200221005123/en/

Contacts

Media: Fiona Liu, Liuxuan01@inspur.com

View original post here:
Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning - Yahoo Finance