First in MC: Moves afoot on encrypted calls between House, Senate – Politico

With help from Eric Geller, Martin Matishak and Doug Palmer

Programming announcement: This 10 a.m. version of Morning Cybersecurity will end daily publication on July 10 and move to a week-ahead style newsletter that publishes on Monday mornings. For information on how you can continue to receive daily policy content, as well as information for current POLITICO Pro subscribers, please visit our website.

MC exclusive: House and Senate officials say they're making moves to enable encrypted calls from one side of the Capitol to the other.

A House panel will examine Covid-19 cybercrime, from the increase in the number of attacks to who's responsible.

The White House is resisting the creation of a national cyber director, the most visible recommendation of the Cyberspace Solarium Commission, one of its co-chairs said.

HAPPY TUESDAY and welcome to Morning Cybersecurity! Most headlines feel very strange these days. Send your thoughts, feedback and especially tips to [emailprotected]. Be sure to follow @POLITICOPro and @MorningCybersec. Full team info below.

Get the free POLITICO news app for the critical updates you need. Breaking news, analysis, videos, and podcasts, right at your fingertips. Download for iOS and Android.

FIRST IN MC: CONGRESSIONAL CALL ENCRYPTION The Senate sergeant at arms and House chief administrative officer are taking steps to encrypt cross-Capitol calls, they said in a letter to lawmakers on Monday. Calls made between Senate Voice over Internet Protocol phones are encrypted, and calls made between House VOIP phones are encrypted, but calls between the two chambers are not.

Modernization of the Senate's VOIP system is ongoing and may be necessary to allow for encrypted cross-Capitol calls, the officials wrote to a long list of lawmakers from both chambers and parties who signed a letter last month, led by Sen. Ron Wyden (D-Ore.) and Rep. Anna Eshoo (D-Calif.), asking for such protected voice communications. The House and Senate are examining how to implement the calls, the officials added.

"To further explore the feasibility of encrypting calls between the two bodies, the Senate and the House will commission an independent third-party assessment of the two current infrastructures providing a recommendation to include technical guidance, industry best practice, and risks and impact considerations to ensure encrypted inter-chamber voice traffic," wrote Sergeant at Arms Michael Stenger and Chief Administrative Officer Philip Kiko. "The Senate and the House will also form a technical working group comprised of staff from both bodies to review these recommendations and provide a detailed plan regarding the most efficient and cost-effective technical solution."

"Congress is an obvious target for foreign intelligence services, so we are highly pleased to see that the Senate and House are moving toward securing calls between the chambers with strong encryption. Secure, backdoor-free encryption is essential, including to protect Congress against foreign threats," Wyden and Eshoo said in a statement to MC.

HILL ATTENTION ON CORONAVIRUS CYBERCRIMINALS The House Financial Services national security subcommittee holds a hearing today on Covid-19 cyber threats, following a similar virtual roundtable in May. A committee aide said the hearing is expected to be bipartisan and will likely focus on examining the increased volume of cyber threats exploiting the Covid-19 crisis, analyzing what kind of schemes and methods cyber experts are detecting, and discussing who's perpetrating the attacks on Americans and how. Here's a reminder of the witnesses and legislation in play.

WHITE HOUSE AGAINST NATIONAL CYBER CHIEF The Trump administration opposes a Cyberspace Solarium Commission proposal to create a national cyber director, Sen. Angus King (I-Maine) said Monday. "The White House is resistant to it," King, one of the commission's co-chairs, said during a New America webinar. "The national security adviser [Robert O'Brien], I suspect, doesn't like it. No national security adviser would, because it's some diminution of their authority. But I think it's one of the most important recommendations we have."

A senior administration official confirmed the executive branch's stance. "To best protect the American people in the most effective manner, the administration is opposed to the creation of a National Cyber Director because, among other things, it would limit the authority of the president to select and appoint his own advisers, create conflicting layers of authority, and inevitably create budgetary inefficiencies," the official told Martin in an email.

The Senate Armed Services Committee last week included almost a dozen recommendations from the Solarium's report in its draft of the fiscal 2021 defense policy bill but stopped short of creating the office, instead requesting an independent assessment on establishing the Senate-confirmed post. That language is "literally a placeholder" so there can be further discussions with other lawmakers and the administration, according to King. "I'm really hopeful, I'm not going to put a percentage on it, but it's so logical," he said, adding success boils down to "basically persuading the administration. This isn't about President Trump. This is about any president. This is a favor to the president, giving them someone that they can hold accountable in this area. I think there's a reasonable shot at it."

EYES EMOJI Nearly 60 percent of businesses in the Americas region let employees use their social media accounts to access work resources, and more than 40 percent of corporate cyber defenders consider usernames and passwords to be one of the best ways to limit unauthorized network access, according to a new Thales survey of 300 IT professionals in the U.S. and Brazil. Furthermore, nearly 30 percent of respondents called social media credentials one of the best tools for protecting cloud platforms from intruders, Thales revealed in its 2020 Access Management Index report.

The report wasn't all bad news, however. Ninety-five percent of IT professionals told Thales that their organizations have implemented multi-factor authentication, and 59 percent reported using smart single sign-on solutions. Additionally, 65 percent of respondents said their IT leaders found it easy to convince corporate boards that cybersecurity mattered, up from 44 percent a year ago. The number of respondents who said it was difficult declined from 33 percent a year ago to 16 percent now.

SOC IT TO ME More than 8 in 10 security operations centers are confident in their capacity to detect cyber threats, even though 40 percent still struggle with staff shortages, an Exabeam annual survey out today found. SOC outsourcing has declined in the U.S. from 36 percent to 26 percent, although it's become more common in Europe, among other findings from the report, which polled personnel in the U.S., the U.K., Canada and Australia.

SO IT WILL WIN A LOT OF AWARDS? Based on Georgia's primary voting issues last week, Wyden said Monday that the nation could be heading toward an "election Chernobyl." The state showed how "everything can go wrong," he wrote on Medium. "Start with a base of shoddy electronic election equipment and a system that was unprepared for a surge in mail-in ballots," he said. "Add a failure in leadership from state election officials, who had no contingency plans for extremely predictable COVID-related complications. And top it all off with Republicans' usual affinity for ensuring that Black voters and other people of color face huge hurdles to get to the ballot box." Congress needs to act on election funding and improvements immediately, Wyden argued.

DOE BOSS HEADS TO IDAHO Energy Secretary Dan Brouillette will tour a cyber hub at the Idaho National Laboratory on Thursday. He will see firsthand the Lab's new CyberCore Integration Center, "a facility that enables partnerships across federal agencies, private industry, and university partners to secure control systems from cyberthreats," the department announced on Monday.

WELCOME TO TWITTER, GEN. NAKASONE NSA and Cyber Command chief Gen. Paul Nakasone made his Twitter premiere Monday. "I'll be using this platform to speak directly to you about partnerships and engagements in my role as Commander @US_CYBERCOM and Director @NSAgov," he said in his inaugural message. Then, in what perhaps was a nod to the bizarre romance scams where the fraudsters pretended to be him, Nakasone added: "You can rest assured this is the only place (besides @NSAgov, @US_CYBERCOM, and my other official social media accounts) that you'll find me."

HUAWEI SLACK From our friends at Morning Trade: The Commerce Department issued a new rule that it said would ensure Huawei's placement on the U.S. entity list does not prevent American companies from contributing to important standards-developing activities despite Huawei's participation in standards-development organizations. The Information Technology Industry Council welcomed the move.

DON'T MAIL FETAL PIGS TO YOUR CRITICS For one thing, the retailer may not ship it. For another, you might get indicted for cyberstalking, like six former eBay employees did on Monday. Federal prosecutors charged eBay's former head of security and five others with taking part in a bizarre campaign to harass a couple who write and publish an e-commerce newsletter that criticized the company. (The Natick, Mass.-based newsletter isn't named in the indictment, but details in the court filings indicate it is eCommerceBytes.)

In addition to anonymous, threatening messages, the former employees sent a box of live cockroaches, a funeral wreath and a bloody pig mask to the pair, our colleagues at Morning Tech report. They also tried to send a fetal pig but were thwarted when the company declined to deliver it. In a statement, eBay said it terminated all of the employees involved, including the company's former chief of communications, after finding out about the cyberstalking. An internal investigation found former eBay CEO Devin Wenig, who stepped down in September, had "inappropriate communications" but didn't know about or authorize the campaign, the company said.

TWEET OF THE DAY A sobering summary.

Onapsis today released research on Oracle financial software vulnerabilities that would allow attackers to pilfer financial information, modify accounting reports or disrupt a business. Oracle has issued patches for the vulnerabilities in its E-Business Suite.

FBI Director Christopher Wray on Monday announced James Dawson as the special agent in charge of the criminal and cyber division of the Washington field office. He most recently served in the same office as the special agent in charge of the mission services division.

Amnesty International and Citizen Lab reported on Indian human rights activists targeted by the NSO Group's Pegasus spyware.

Kaspersky produced a report on porn and cyber threats.

Wired: Researchers turned up a pretty big trove of sensitive dating app data.

The New York Times: "A Conspiracy Made in America May Have Been Spread by Russia."

CyberScoop: Hackers are pretending to be a top Taiwan health official to steal sensitive info.

ZDNet: A South African bank has to replace 12 million cards following an employees theft of the master key.

That's all for today.

Stay in touch with the whole team: Eric Geller ([emailprotected], @ericgeller); Bob King ([emailprotected], @bkingdc); Martin Matishak ([emailprotected], @martinmatishak); Tim Starks ([emailprotected], @timstarks); and Heidi Vogt ([emailprotected], @heidivogt).

View original post here:
First in MC: Moves afoot on encrypted calls between House, Senate - Politico

Zoom to offer end-to-end encryption for all users, trial to begin in July – ETCIO.com

The company said that, to help prevent abuse, it will only make the feature available to users who provide a verified phone number.

California-based Zoom had originally planned to strengthen encryption only for its paying clients.

The company has attracted millions of free and paying customers as the coronavirus outbreak forced more people to work from home, but has faced criticism over privacy and security issues.

Zoom also came under fire for failing to disclose that its service was not fully end-to-end encrypted.

Taiwan and Germany have placed restrictions on Zoom's use, while Elon Musk's SpaceX banned the app over security concerns. The company also faces a class-action lawsuit.

The company hired Alex Stamos, former chief security officer at Facebook, in April to help bolster its security and rolled out some major upgrades.

See the article here:
Zoom to offer end-to-end encryption for all users, trial to begin in July - ETCIO.com

WiFi Networks Turned Targets In This Pocket Game – Hackaday

Looking for a way to make his warwalking sessions a bit more interactive, [Roni Bandini] has come up with an interesting way to gamify the discovery of new WiFi networks. Using a Heltec WiFi Kit 8, which integrates an OLED screen and ESP8266, this pocket-sized device picks up wireless networks and uses their signal strength and encryption type as elements of the game.

After selecting which network they want to play against, a target is placed on the screen. The distance between the target and the player is determined by signal strength, and how much damage the target can take correlates to how strong its encryption is. As you can see in the video after the break, gameplay is a bit reminiscent of Scorched Earth, where the player needs to adjust the angle of their artillery to hit distant targets.
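
The mapping [Roni] describes is easy to prototype. Below is a minimal Python sketch of the game logic (the project itself runs Arduino C++ on the ESP8266); the specific thresholds and scaling formulas are illustrative assumptions, not the actual firmware.

```python
# A toy version of the scoring logic: RSSI sets how far away the target
# is drawn, and the encryption type sets how much damage it can absorb.
# Thresholds and scaling here are illustrative guesses, not the real code.

def target_distance(rssi_dbm: int) -> int:
    """Stronger signal (RSSI closer to 0 dBm) places the target closer."""
    rssi = max(-90, min(-30, rssi_dbm))          # clamp to a typical range
    return 10 + int((abs(rssi) - 30) / 60 * 90)  # map to a 10..100 scale

def target_hit_points(encryption: str) -> int:
    """Stronger encryption soaks up more hits before the network 'falls'."""
    return {"OPEN": 1, "WEP": 2, "WPA": 3, "WPA2": 4, "WPA3": 5}.get(encryption, 3)

network = {"ssid": "CoffeeShopWiFi", "rssi": -67, "enc": "WPA2"}
print(target_distance(network["rssi"]), target_hit_points(network["enc"]))  # 65 4
```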

The Heltec board is attached to a 3D printed front panel, which fits neatly into an Altoids tin. The controls consist of a button and a potentiometer, and with the addition of a battery pack salvaged from an old cell phone, this little device is ready to do battle wherever you roam.

While this is just a fun diversion for the time being, [Roni] says it wouldn't take much to actually log networks to a file and generate some statistics about their strength and encryption type. If the idea of a portable WiFi scanning companion seems interesting, you should definitely check out the Pwnagotchi project.

Read more from the original source:
WiFi Networks Turned Targets In This Pocket Game - Hackaday

Effects of the Alice Preemption Test on Machine Learning Algorithms – IPWatchdog.com

According to the approach embraced by McRO and BASCOM, while machine learning algorithms bringing a slight improvement can pass the eligibility test, algorithms paving the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives.

In the past decade or so, humanity has gone through drastic changes as artificial intelligence (AI) technologies such as recommendation systems and voice assistants have seeped into every facet of our lives. While the number of patent applications for AI inventions has skyrocketed, almost a third of these applications are rejected by the U.S. Patent and Trademark Office (USPTO), and the majority of these rejections are due to the claimed invention being ineligible subject matter.

The inventive concept may be attributed to different components of machine learning technologies, such as using a new algorithm, feeding more data, or using a new hardware component. However, this article will exclusively focus on the inventions achieved by Machine Learning (M.L.) algorithms and the effect of the preemption test adopted by U.S. courts on the patent-eligibility of such algorithms.

Since the Alice decision, the U.S. courts have adopted different views related to the role of the preemption test in eligibility analysis. While some courts have ruled that lack of preemption of abstract ideas does not make an invention patent-eligible [Ariosa Diagnostics Inc. v. Sequenom Inc.], others have not referred to it at all in their patent eligibility analysis. [Enfish LLC v. Microsoft Corp., 822 F.3d 1327]

Contrary to those examples, recent cases from Federal Courts have used the preemption test as the primary guidance to decide patent eligibility.

In McRO, the Federal Circuit ruled that the algorithms in the patent application prevent pre-emption of all processes for achieving automated lip-synchronization of 3-D characters. The court based this conclusion on the evidence of availability of an alternative set of rules to achieve the automation process other than the patented method. It held that the patent was directed to a specific structure to automate the synchronization and did not preempt the use of all of the rules for this method given that different sets of rules to achieve the same automated synchronization could be implemented by others.

Similarly, the Court in BASCOM ruled that the claims were patent eligible because they recited "a specific, discrete implementation of the abstract idea of filtering content" and they do not preempt all possible ways to implement the image-filtering technology.

The analysis of the McRO and BASCOM cases reveals two important principles for the preemption analysis:

Machine learning can be defined as a mechanism which searches for patterns and which feeds intelligence into a machine so that it can learn from its own experience without explicit programming. Although the common belief is that data is the most important component in machine learning technologies, machine learning algorithms are equally important to the proper functioning of these technologies and their importance cannot be overstated.

Therefore, inventive concepts enabled by new algorithms can be vital to the effective functioning of machine learning systems: enabling new capabilities and making systems faster or more energy efficient are examples of this. These inventions are likely to be the subject of patent applications. However, the preemption test adopted by courts in the above-mentioned cases may lead to certain types of machine learning algorithms being held ineligible subject matter. Below are some possible scenarios.

The first situation relates to new capabilities enabled by M.L. algorithms. When a new machine learning algorithm adds a new capability or enables the implementation of a process, such as image recognition, for the first time, preemption concerns will likely arise. If the patented algorithm is indispensable for the implementation of that technology, it may be held ineligible based on the McRO case. This is because there are no other alternative means to use this technology and others would be prevented from using this basic tool for further development.

For example, a M.L. algorithm which enabled the lane detection capability in driverless cars may be a standard/must-use algorithm in the implementation of driverless cars that the court may deem patent ineligible for having preemptive effects. This algorithm clearly equips the computer vision technology with a new capability, namely, the capability to detect boundaries of road lanes. Implementation of this new feature on driverless cars would not pass the Alice test because a car is a generic tool, like a computer, and even limiting it to a specific application may not be sufficient because it will preempt all uses in this field.

Should the guidance of McRO and BASCOM be followed, algorithms that add new capabilities and features may be excluded from patent protection simply because there are no other available alternatives to these algorithms to implement the new capabilities. These algorithms use may be so indispensable for the implementation of that technology that they are deemed to create preemptive effects.

Secondly, M.L. algorithms which are revolutionary may also face eligibility challenges.

The history of how deep neural networks have developed will be explained to demonstrate how highly-innovative algorithms may be stripped of patent protection because of the preemption test embraced by McRO and subsequent case law.

Deep Belief Networks (DBNs) are a type of Artificial Neural Network (ANN). ANNs were trained with a back-propagation algorithm, which adjusts weights by propagating the output error backwards through the network. However, the problem with ANNs was that as the depth was increased by adding more layers, the error vanished to zero, and this severely affected the overall performance, resulting in less accuracy.
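
That "error vanished to zero" behavior follows from simple arithmetic: the sigmoid activations used in early ANNs have a derivative of at most 0.25, and back-propagation multiplies roughly one such factor per layer, so the error signal shrinks geometrically with depth. A small illustrative calculation (the numbers are mine, not the article's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The sigmoid's derivative is s(x) * (1 - s(x)), which never exceeds 0.25.
x = 0.5
layer_factor = sigmoid(x) * (1 - sigmoid(x))   # ~0.235 at x = 0.5

# Back-propagating through N layers multiplies roughly N such factors,
# so the error signal decays geometrically as the network gets deeper.
for depth in (1, 5, 10, 20):
    print(depth, layer_factor ** depth)
# At depth 20 the factor is ~2.6e-13: effectively zero, which is why
# adding layers hurt accuracy before better training methods arrived.
```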

From the early 2000s, there has been a resurgence in the field of ANNs owing to two major developments: increased processing power and more efficient training algorithms which made training deep architectures feasible. The ground-breaking algorithm which enabled the further development of ANNs in general and DBNs in particular was Hinton's greedy training algorithm.
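
Greedy layer-wise training sidesteps the vanishing-gradient problem by training one layer at a time on the previous layer's output, with no end-to-end back-propagation. Here is a simplified sketch of the idea using scikit-learn's BernoulliRBM on toy data; it illustrates the layer-by-layer recipe, not Hinton's original implementation:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy binary data standing in for, e.g., binarized image pixels.
rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(float)

# Train one RBM at a time; each layer's hidden activations become the
# training input for the next layer. No gradient travels end to end.
layers, inputs = [], X
for n_hidden in (32, 16):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=10, random_state=0)
    inputs = rbm.fit_transform(inputs)
    layers.append(rbm)

print(inputs.shape)  # (500, 16): features from the greedily stacked layers
```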

Thanks to this new algorithm, DBNs have been applicable to solving a variety of problems that had previously been roadblocks for new technologies, such as image processing, natural language processing, automatic speech recognition, and feature extraction and reduction.

As can be seen, Hinton's fast learning algorithm revolutionized the field of machine learning because it made learning easier and, as a result, technologies such as image processing and speech recognition have gone mainstream.

If patented and challenged at court, Hinton's algorithm would likely be invalidated considering previous case law. In McRO, the court reasoned that the algorithm at issue should not be invalidated because the use of a set of rules within the algorithm is not a must and other methods can be developed and used. Hinton's algorithm will inevitably preempt some AI developers from engaging with further development of DBN technologies because it is a base algorithm, which made DBNs feasible to implement, so it may be considered a must. Hinton's algorithm enabled the implementation of image recognition technologies, and some may argue, based on McRO and Enfish, that a patent on Hinton's algorithm would be preempting because it is impossible to implement image recognition technologies without this algorithm.

Even if an algorithm is a must-use for a technology, there is no reason to exclude it from patent protection. Patent law inevitably forecloses certain areas from further development by granting exclusive rights through patents. All patents foreclose competitors to some extent as a natural consequence of exclusive rights.

As stated in the Mayo judgment, exclusive rights provided by patents can "impede the flow of information that might permit, indeed spur, invention, by, for example, raising the price of using the patented ideas once created, requiring potential users to conduct costly and time-consuming searches of existing patents and pending patent applications, and requiring the negotiation of complex licensing arrangements."

The exclusive right granted by a patent is only one side of the implicit agreement between society and the inventor. In exchange for the benefit of the exclusivity, inventors are required to disclose their invention to the public so this knowledge becomes public, available for use in further research and for making new inventions building upon the previous one.

If inventors turn to trade secrets to protect their inventions due to the hostile approach of patent law to algorithmic inventions, the knowledge base in this field will narrow, making it harder to build upon previous technology. This may lead to the slow-down and even possible death of innovation in this industry.

The fact that an algorithm is a must-use should not lead to the conclusion that it cannot be patented. Patent rights may even be granted for processes which have primary and even sole utility in research. Literally, a microscope is a basic tool for scientific work, but surely no one would assert that a new type of microscope lay beyond the scope of the patent system. Even if such a microscope is used widely and it is indispensable, it can still be given patent protection.

According to the approach embraced by McRO and BASCOM, while M.L. algorithms bringing a slight improvement, such as a higher accuracy and higher speed, can pass the eligibility test, algorithms paving the way for a whole new technology can be excluded from the benefits of patent protection simply because there are no alternatives to implement that revolutionary technology.

Considering that the goal of most AI inventions is to equip computers with new capabilities or bring qualitative improvements to abilities such as seeing or hearing, or even to make informed judgments without being fed complete information, most AI inventions would have a higher likelihood of being held patent ineligible. Applying this preemption test to M.L. algorithms would put such M.L. algorithms outside of patent protection.

Thus, a M.L. algorithm which increases accuracy by 1% may be eligible, while a ground-breaking M.L. algorithm which is a must-use because it covers all uses in that field may be excluded from patent protection. This would result in rewarding slight improvements with a patent but disregarding highly innovative and ground-breaking M.L. algorithms. Such a consequence is undesirable for the patent system.

This also may result in deterring the AI industry from bringing innovation in fundamental areas. As an undesired consequence, innovation efforts may shift to small improvements instead of innovations solving more complex problems.


More:
Effects of the Alice Preemption Test on Machine Learning Algorithms - IPWatchdog.com

Googles latest experiment is Keen, an automated, machine-learning based version of Pinterest – TechCrunch

A new project called Keen is launching today from Google's in-house incubator for new ideas, Area 120, to help users track their interests. The app is like a modern rethinking of the Google Alerts service, which allows users to monitor the web for specific content. Except instead of sending emails about new Google Search results, Keen leverages a combination of machine learning techniques and human collaboration to help users curate content around a topic.

Each individual area of interest is called a "keen," a word often used to reference someone with an intellectual quickness.

The idea for the project came about after co-founder C.J. Adams realized he was spending too much time on his phone mindlessly browsing feeds and images to fill his downtime. He realized that time could be better spent learning more about a topic he was interested in: perhaps something he always wanted to research more or a skill he wanted to learn.

To explore this idea, he and four colleagues at Google worked in collaboration with the company's People and AI Research (PAIR) team, which focuses on human-centered machine learning, to create what has now become Keen.

To use Keen, which is available both on the web and on Android, you first sign in with your Google account and enter a topic you want to research. This could be something like "learning to bake bread, bird watching or learning about typography," suggests Adams in an announcement about the new project.

Keen may suggest additional topics related to your interest. For example, type in "dog training" and Keen could suggest "dog training classes," "dog training books," "dog training tricks," "dog training videos" and so on. Click on the suggestions you want to track and your keen is created.

When you return to the keen, you'll find a pinboard of images linking to web content that matches your interests. In the dog training example, Keen found articles and YouTube videos, blog posts featuring curated lists of resources, an Amazon link to dog training treats and more.

For every collection, the service uses Google Search and machine learning to help discover more content related to the given interest. The more you add to a keen and organize it, the better these recommendations become.
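
Google hasn't published Keen's internals, but the feedback loop described above, where recommendations sharpen as you save and organize more items, can be sketched as content-based similarity against the items collected so far. A hypothetical illustration using TF-IDF; the item texts and the scoring scheme are invented for the example:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Items the user has already saved to a "dog training" keen.
saved = ["dog training classes near me",
         "clicker training basics for puppies",
         "positive reinforcement dog training book"]

# Candidate content surfaced by search, to be ranked against the keen.
candidates = ["best treats for reward-based dog training",
              "guide to birdwatching binoculars",
              "video course: leash training your dog"]

vec = TfidfVectorizer().fit(saved + candidates)
profile = np.asarray(vec.transform(saved).mean(axis=0))  # crude interest profile
scores = cosine_similarity(profile, vec.transform(candidates))[0]
for cand, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {cand}")  # dog-related items rank above birdwatching
```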

It's like an automated version of Pinterest, in fact.

Once a keen is created, you can then optionally add to the collection, remove items you don't want and share the keen with others to allow them to also add content. The resulting collection can be either public or private. Keen can also email you alerts when new content is available.

Google, to some extent, already uses similar techniques to power its news feed in the Google app. The feed, in that case, uses a combination of items from your Google Search history and topics you explicitly follow to find news and information it can deliver to you directly on the Google app's home screen. Keen, however, isn't tapping into your search history. It's only pulling content based on interests you directly input.

And unlike the news feed, a keen isn't necessarily focused only on recent items. Any sort of informative, helpful information about the topic can be returned. This can include relevant websites, events, videos and even products.

But as a Google project, and one that asks you to authenticate with your Google login, the data it collects is shared with Google. Keen, like anything else at Google, is governed by the company's privacy policy.

Though Keen today is a small project inside a big company, it represents another step toward the continued personalization of the web. Tech companies long since realized that connecting users with more of the content that interests them increases their engagement, session length, retention and their positive sentiment for the service in question.

But personalization, unchecked, limits users' exposure to new information or dissenting opinions. It narrows a person's worldview. It creates filter bubbles and echo chambers. Algorithmic-based recommendations can send users searching for fringe content further down dangerous rabbit holes, even radicalizing them over time. And in extreme cases, radicalized individuals become terrorists.

Keen would be a better idea if it were pairing machine learning with topical experts. But it doesn't add a layer of human expertise on top of its tech, beyond those friends and family you specifically invite to collaborate, if you even choose to. That leaves the system wanting for better human editorial curation, and perhaps the need for a narrower focus to start.

Visit link:
Googles latest experiment is Keen, an automated, machine-learning based version of Pinterest - TechCrunch

Deploying Machine Learning Has Never Been This Easy – Analytics India Magazine

According to PwC, AI's potential global economic impact will reach USD 15.7 trillion by 2030. However, enterprises that look to deploy AI are often hampered by a lack of time, trust and talent. Especially in highly regulated sectors such as healthcare and finance, convincing customers to adopt AI methodologies is an uphill task.

Of late, the AI community has seen a marked shift in AI adoption with the advent of AutoML tools and the introduction of customised hardware to cater to the needs of the algorithms. One of the most widely used AutoML tools in the industry is H2O Driverless AI. And when it comes to hardware, Intel has been consistently updating its tool stack to meet the high computational demands of AI workflows.

Now H2O.ai and Intel, two companies who have been spearheading the democratisation of the AI movement, join hands to develop solutions that leverage software and hardware capabilities respectively.

AI and machine-learning workflows are complex, and enterprises need more confidence in the validity of their AI models than a typical black-box environment can provide. The inexplicability and complexity of feature engineering can be daunting to non-experts. So far, AutoML has proven to be the one-stop solution to these problems. These tools have alleviated the challenges by providing automated workflows, code-ready deployable models and more.

H2O.ai, especially, has pioneered the AutoML segment. It has developed an open source, distributed in-memory machine learning platform with linear scalability that includes a module called H2OAutoML, which can be used to automate the machine learning workflow, including automatic training and tuning of many models within a user-specified time limit.
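
A minimal example of that workflow, following H2O's documented Python API (the dataset path and target column below are placeholders):

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Placeholder dataset and target column; substitute your own.
train = h2o.import_file("train.csv")
train["label"] = train["label"].asfactor()   # treat the target as a class

# Automatically train and tune many models within a user-specified time limit.
aml = H2OAutoML(max_runtime_secs=300, seed=1)
aml.train(y="label", training_frame=train)

print(aml.leaderboard.head())   # candidate models ranked by cross-validation
print(aml.leader)               # best model, ready for prediction or export
```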

Meanwhile, H2O.ai's flagship product, Driverless AI, can be used to fully automate some of the most challenging and productive tasks in applied data science, such as feature engineering, model tuning, model ensembling and model deployment.

But for these AI-based tools to work seamlessly, they need the backing of hardware that is dedicated to handling the computational intensity of machine learning operations.

Intel has been at the forefront of the digital revolution for over half a century. Today, Intel offers a wide range of technologies, including its Xeon Scalable processors, Optane Solid State Drives and optimized Intel software libraries, that bring in a much-needed mix of enhanced performance, AI inference, network functions, persistent memory bandwidth, and security.

Integrating H2O.ai's software portfolio with hardware and software technologies from Intel has resulted in solutions that can handle almost all the woes of an AI enterprise, from automated workflows to explainability to production-ready code that can be deployed anywhere.

For example, H2O Driverless AI, an automatic machine-learning platform, enables data science experts and beginners alike to complete AI tasks in minutes that would usually take months. Today, more than 18,000 companies use open source H2O in mission-critical use cases for finance, insurance, healthcare, retail, telco, sales, and marketing.

The software capabilities of H2O.ai combined with the hardware infrastructure of Intel, which includes 2nd Generation Xeon Scalable processors, Optane Solid State Drives and Ethernet Network Adapters, can empower enterprises to optimize performance and accelerate deployment.

Enterprises looking to increase productivity and business value while enjoying the competitive advantages of AI innovation no longer have to wait, thanks to hardware-backed AutoML solutions.


Visit link:
Deploying Machine Learning Has Never Been This Easy - Analytics India Magazine

Coronavirus will finally give artificial intelligence its moment – San Antonio Express-News

For years, artificial intelligence seemed on the cusp of becoming the next big thing in technology - but the reality never matched the hype. Now, the changes caused by the covid-19 pandemic may mean AI's moment is finally upon us.

Over the past couple of months, many technology executives have shared a refrain: Companies need to rejigger their operations for a remote-working world. That's why they have dramatically increased their spending on powerful cloud-computing technologies and migrated more of their work and communications online.

With fewer people in the office, these changes will certainly help companies run more nimbly and reliably. But the centralization of more corporate data in the cloud is also precisely what's needed for companies to develop the AI capabilities - from better predictive algorithms to increased robotic automation - we've been hearing about for so long. If business leaders invest aggressively in the right areas, it could be a pivotal moment for the future of innovation.

To understand all the fuss around artificial intelligence, some quick background might be useful: AI is based on computer science research that looks at how to imitate the workings of human intelligence. It uses powerful algorithms that digest large amounts of data to identify patterns. These can be used to anticipate, say, what consumers will buy next or offer other important insights. Machine learning - essentially, algorithms that can improve at recognizing patterns on their own, without being explicitly programmed to do so - is one subset of AI that can enable applications like providing real-time protection against fraudulent financial transactions.
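
As a concrete toy version of the fraud-protection example (synthetic data and a generic scikit-learn model, not any bank's production system), an anomaly detector can learn the shape of normal transactions and flag outliers:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic history of normal transactions: [amount in dollars, hour of day].
normal = np.column_stack([rng.gamma(2.0, 30.0, 1000),     # modest amounts
                          rng.normal(14, 4, 1000) % 24])  # daytime-centered
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score incoming activity: a routine purchase vs. a large 3 a.m. charge.
incoming = np.array([[45.0, 13.0], [4800.0, 3.0]])
print(model.predict(incoming))   # 1 = looks normal, -1 = flagged for review
```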

Historically, AI hasn't fully lived up to its hype. We're still a ways off from being able to have natural, life-like conversations with a computer, or getting truly safe self-driving cars. Even when it comes to improving less advanced algorithms, researchers have struggled with limited datasets and a lack of scaleable computing power.

Still, Silicon Valley's AI-startup ecosystem has been vibrant. Crunchbase says there are 5,751 private-held AI companies in the U.S. and that the industry received $17.4 billion in new funding last year. International Data Corporation (IDC) recently forecast that global AI spending will rise to $96.3 billion in 2023 from $38.4 billion in 2019. A Gartner survey of chief information officers and IT leaders, conducted in February, found that enterprises are projecting to double their number of AI projects, with over 40% planning to deploy at least one by the end of 2020.

As the pandemic accelerates the need for AI, these estimates will most likely prove to be understated. Big Tech has already demonstrated how useful AI can be in fighting covid-19. For instance, Amazon.com partnered with researchers to identify vulnerable populations and act as an "early warning" system for future outbreaks. BlueDot, an Amazon Web Services startup customer, used machine learning to sift through massive amounts of online data and anticipate the spread of the virus in China.

Pandemic lockdowns have also affected consumer behavior in ways that will spur AI's growth and development. Take a look at the soaring e-commerce industry: As consumers buy more online to avoid the new risks of shopping in stores, they are giving sellers more data on preferences and shopping habits. Bank of America's internal card-spending data for e-commerce points to rising year-over-year revenue growth rates of 13% for January, 17% for February, 24% for March, 73% for April and 80% for May. The data these transactions generate is a goldmine for retailers and AI companies, allowing them to improve the algorithms that provide personalized recommendations and generate more sales.

The growth in online activity also makes a compelling case for the adoption of virtual customer-service agents. International Business Machines Corporation estimates that only about 20% of companies use such AI-powered technology today. But they predict that almost all enterprises will adopt it in the coming years. By allowing computers to handle the easier questions, human representatives can focus on the more difficult interactions, thereby improving customer service and satisfaction.

Another area of opportunity comes from the increase in remote working. As companies struggle with the challenge of bringing employees back to the office, they may be more receptive to AI-based process automation software, which can handle mundane tasks like data entry. Its ability to read invoices and update databases without human intervention can reduce the need for some types of office work while also improving its accuracy. UiPath, Automation Anywhere and Blue Prism are the three leading vendors in this space, according to Goldman Sachs, accounting for about 36% of the roughly $850 million market last year.

More imaginative AI projects are on the horizon. Graphics semiconductor-maker NVIDIA Corporation and luxury automaker BMW Group recently announced a deal where AI-powered logistics robots will be used to manufacture customized vehicles. In mid-May, Facebook said it was working on an AI lifestyle assistant that can recommend clothes or pick out furniture based on your personal taste and the configuration of your room.

As with the mass adoption of any new technology, there will be winners and losers. Among the winners, cloud-computing vendors will thrive as they capture more and more data. According to IDC, Amazon Web Services was number one in infrastructure cloud-computing services, with a 47% market share last year, followed by Microsoft at 13%.

But NVIDIA may be at an even better intersection of cloud and AI tech right now: Its graphic chip technology, once used primarily for video games, has morphed into the preeminent platform for AI applications. NVIDIA also makes the most powerful graphic processing units, so it dominates the AI-chip market used by cloud-computing companies. And it recently launched new data center chips that use its next-generation "Ampere" architecture, providing developers with a step-function increase in machine-learning capabilities.

On the other hand, the legacy vendors that provide computing equipment and software for in-office environments are most at risk of losing out in this technological shift. This category includes server sellers like Hewlett Packard Enterprise Company and router-maker Cisco Systems, Inc.

We must not ignore the more insidious consequences of an AI renaissance, either. There are a lot of ethical hurdles and complications ahead involving job loss, privacy and bias. Any increased automation may lead to job reductions, as software and robots replace tasks performed by humans. As more data becomes centrally stored on the cloud, the risk of larger data breaches will increase. Top-notch security has to become another key area of focus for technology and business executives. They also need to be vigilant in preventing algorithms from discriminating against minority groups, starting with monitoring their current technology and compiling more accurate datasets.

But the upside of greater computing power, better business insights and cost efficiencies from AI is too big to ignore. So long as companies proceed responsibly, years from now, the advances in AI catalyzed by the coronavirus crisis may be one of the silver linings we remember from 2020.

- - -

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Kim is a Bloomberg Opinion columnist covering technology.

Visit link:
Coronavirus will finally give artificial intelligence its moment - San Antonio Express-News

IBM Joins SCTE-ISBE Explorer Initiative To Help Shape Future Of AI And ML – AiThority

IBM has joined the SCTE-ISBE Explorer Initiative as a member of the artificial intelligence (AI) and machine learning (ML) working group. IBM is the first company from outside the cable telecommunications industry to join Explorer.

IBM will collaborate with subject matter experts from across industries to develop AI and ML standards and best practices. By sharing expertise and insights fostered within their organizations, members will help shape the standards that will enable the widespread availability of AI and ML applications.


"Integrating advancements in AI and machine learning with the deployment of agile, open, and secure, software-defined networks will help usher in new innovations, many of which will transform the way we connect," said Steve Canepa, global industry managing director, telecommunications, media & entertainment for IBM. "The industry is going through a dramatic transformation as it prepares for a different marketplace with different demands, and we are energized by this collaboration. As the network becomes a cloud platform, it will help drive innovative data-driven services and applications to bring value to both enterprises and consumers."

SCTE-ISBE announced the expansion of its award-winning Standards program in late March 2020 with the introduction of the Explorer Initiative. As part of the initiative, seven new working groups will bring together leaders with diverse backgrounds to develop standards for AI and ML, smart cities, aging in place and telehealth, telemedicine, autonomous transport, extended spectrum (up to 3.0 GHz), and human factors affecting network reliability. Explorer working groups were chosen for their potential to impact telecommunications infrastructure, take advantage of the benefits of cable's 10G platform, and improve society's ability to cope with natural disasters and health crises like COVID-19.


"The COVID-19 pandemic has demonstrated the importance of technology and connectivity to modern society and, by many accounts, increased the speed of digital transformation across industries," said Chris Bastian, SCTE-ISBE senior vice president and CTIO. "Explorer will help us turn innovative concepts into reality by giving industry leaders the opportunity to learn from each other, reduce development costs, ensure their connectivity needs are met, and ultimately get to market faster."



Read the original here:
IBM Joins SCTE-ISBE Explorer Initiative To Help Shape Future Of AI And ML - AiThority

Machine Learning Chip Market to Witness Huge Growth by 2027 | Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding…

Data Bridge Market Research has recently added a concise research report on the Global Machine Learning Chip Market to depict valuable insights related to significant market trends driving the industry. The report features analysis based on key opportunities and challenges confronted by market leaders while highlighting their competitive setting and corporate strategies for the estimated timeline. The development plans, market risks, opportunities and development threats are explained in detail. The CAGR value, technological development, new product launches and the Machine Learning Chip industry's competitive structure are elaborated. As per the study, key players in this market are Google Inc, Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding Company, Intel Corporation, Xilinx, SAMSUNG, and Qualcomm Technologies, Inc.

Click HERE To get SAMPLE COPY OF THIS REPORT (Including Full TOC, Table & Figures) [emailprotected] https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-machine-learning-chip-market

The machine learning chip market is expected to reach USD 72.45 billion by 2027, witnessing market growth at a rate of 40.60% in the forecast period of 2020 to 2027. The Data Bridge Market Research report on the machine learning chip market provides analysis and insights regarding the various factors expected to be prevalent throughout the forecast period while providing their impacts on the market's growth.
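
A quick sanity check of that projection's arithmetic (a back-of-envelope calculation, not a figure from the report): compounding at 40.60% over the seven years from 2020 to 2027 implies a 2020 base of roughly USD 6.7 billion.

```python
# Implied 2020 market size from the reported 2027 value and CAGR.
value_2027, cagr, years = 72.45, 0.406, 7   # USD billions, rate, 2020-2027
implied_2020 = value_2027 / (1 + cagr) ** years
print(f"{implied_2020:.2f} billion USD")    # ~6.67 billion USD
```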

Global Machine Learning Chip Market Dynamics:

Global Machine Learning Chip Market Scope and Market Size

Machine learning chip market is segmented on the basis of chip type, technology and industry vertical. The growth among segments helps you analyse niche pockets of growth and strategies to approach the market and determine your core application areas and the difference in your target markets.

Important Features of the Global Machine Learning Chip Market Report:

1) Which companies are currently profiled in the report?

List of players that are currently profiled in the report: NVIDIA Corporation, Wave Computing, Inc., Graphcore, IBM Corporation, Taiwan Semiconductor Manufacturing Company Limited, Micron Technology, Inc.

** List of companies mentioned may vary in the final report subject to Name Change / Merger etc.

2) What regional segmentation is covered? Can a specific country of interest be added?

Currently, the research report gives special attention and focus to the following regions:

North America, Europe, Asia-Pacific etc.

** One country of specific interest can be included at no added cost. For inclusion of more regional segments, the quote may vary.

3) Is inclusion of additional segmentation / market breakdown possible?

Yes, inclusion of additional segmentation / market breakdown is possible, subject to data availability and difficulty of the survey. However, a detailed requirement needs to be shared with our research team before giving final confirmation to the client.

** Depending upon the requirement the deliverable time and quote will vary.

Global Machine Learning Chip Market Segmentation:

By Chip Type (GPU, ASIC, FPGA, CPU, Others),

Technology (System-on-Chip, System-in-Package, Multi-Chip Module, Others),

Industry Vertical (Media & Advertising, BFSI, IT & Telecom, Retail, Healthcare, Automotive & Transportation, Others),

Country (U.S., Canada, Mexico, Brazil, Argentina, Rest of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific, Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa) Industry Trends and Forecast to 2027

New Business Strategies, Challenges & Policies are mentioned in Table of Content, Request TOC @ https://www.databridgemarketresearch.com/toc/?dbmr=global-machine-learning-chip-market

Strategic Points Covered in Table of Content of Global Machine Learning Chip Market:

Chapter 1: Introduction, market driving force, product objective of study and research scope of the Machine Learning Chip market

Chapter 2: Exclusive Summary, the basic information of the Machine Learning Chip Market

Chapter 3: Displaying the Market Dynamics: Drivers, Trends and Challenges of Machine Learning Chip

Chapter 4: Presenting the Machine Learning Chip Market Factor Analysis: Porter's Five Forces, Supply/Value Chain, PESTEL analysis, Market Entropy, Patent/Trademark Analysis

Chapter 5: Displaying the market by Type, End User and Region, 2013-2018

Chapter 6: Evaluating the leading manufacturers of the Machine Learning Chip market, which consists of its Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profile

Chapter 7:To evaluate the market by segments, by countries and by manufacturers with revenue share and sales by key countries in these various regions.

Chapter 8 & 9:Displaying the Appendix, Methodology and Data Source

Region-wise analysis of the top producers and consumers, focusing on product capacity, production, value, consumption, market share and growth opportunity in the below-mentioned key regions:

North America: U.S., Canada, Mexico

Europe: U.K., France, Italy, Germany, Russia, Spain, etc.

Asia-Pacific: China, Japan, India, Southeast Asia, etc.

South America: Brazil, Argentina, etc.

Middle East & Africa: Saudi Arabia, African countries, etc.

What the Report has in Store for You

Industry Size & Forecast: The industry analysts have offered historical, current, and expected projections of the industry size from the cost and volume point of view

Future Opportunities: In this segment of the report, Machine Learning Chip competitors are offered data on the future aspects that the Machine Learning Chip industry is likely to provide

Industry Trends & Developments: Here, authors of the report have talked about the main developments and trends taking place within the Machine Learning Chip marketplace and their anticipated impact on the overall growth

Study on Industry Segmentation: Detailed breakdown of the key Machine Learning Chip industry segments together with product type, application, and vertical has been done in this portion of the report

Regional Analysis: Machine Learning Chip market vendors are served with vital information on the high-growth regions and their respective countries, thus assisting them to invest in profitable regions

Competitive Landscape: This section of the report sheds light on the competitive situation of the Machine Learning Chip market by focusing on the crucial strategies taken up by the players to consolidate their presence inside the Machine Learning Chip industry.

Key questions answered in this report

About Data Bridge Market Research:

An absolute way to forecast what the future holds is to comprehend the trend today! Data Bridge has set itself forth as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market.

Contact:

US: +1 888 387 2818

UK: +44 208 089 1725

Hong Kong: +852 8192 7475

[emailprotected]

Go here to read the rest:
Machine Learning Chip Market to Witness Huge Growth by 2027 | Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding...

Zoom says free users will get end-to-end encryption after all – The Verge

Zoom says it will begin allowing users of its videoconferencing software to enable end-to-end encryption of calls starting with a beta next month, the company announced on Wednesday. The feature won't be restricted to paid enterprise users, either. It's coming to both free and paid users, Zoom says, and it will be a toggle switch any call admin can turn on or disable, in the event they want to allow traditional phone lines or older conference room phones to join.

"Zoom does not proactively monitor meeting content, and we do not share information with law enforcement except in circumstances like child sex abuse," a company spokesperson said at the time, following comments from Zoom CEO Eric Yuan during a call with investors after the company's quarterly earnings release. "We plan to provide end-to-end encryption to users for whom we can verify identity, thereby limiting harm to these vulnerable groups. Free users sign up with an email address, which does not provide enough information to verify identity."

Zoom has also been facing harsh criticism since the beginning of the COVID-19 pandemic for failing to beef up its security despite huge surges in user growth as Zoom and similar services became virtual hangout tools during lockdowns. In late March, Zoom admitted that while it uses standard web browser data encryption, it does not use end-to-end encryption. The company has spent the time since improving its security and working on a new encryption solution.
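
The article doesn't detail Zoom's design, but the core property of end-to-end encryption, keys generated on the endpoints so the server only ever relays ciphertext, can be sketched with a standard key exchange. The following is a conceptual illustration using the Python cryptography package (generic X25519 plus AES-GCM, not Zoom's actual protocol):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates its own key pair; private keys never leave the device.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

def session_key(own_private, peer_public):
    """Derive a shared symmetric key from the Diffie-Hellman secret."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"e2ee-demo").derive(shared)

key_a = session_key(alice, bob.public_key())
key_b = session_key(bob, alice.public_key())
assert key_a == key_b   # both ends agree without the server learning the key

# Media is encrypted on the sender's device; the relay sees only ciphertext.
nonce = os.urandom(12)
ciphertext = AESGCM(key_a).encrypt(nonce, b"meeting audio frame", None)
print(AESGCM(key_b).decrypt(nonce, ciphertext, None))  # b'meeting audio frame'
```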

Yet it appears the company has figured out a workaround. "To make this possible, Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message," Zoom explains in its blog post. "Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools, including our Report a User function, we can continue to prevent and fight abuse."

It's not clear when the feature will launch for all users, but the beta is arriving in July, and Zoom intends to have some level of permissions so account administrators can disable or enable it at the account or group level.

Read this article:
Zoom says free users will get end-to-end encryption after all - The Verge