Google Duo is coming to the web via Chrome; features Family mode, end-to-end encryption – Moneycontrol

Google also says that calls on Duo are end-to-end encrypted.

The coronavirus pandemic has led to a nationwide lockdown in India and many other countries. This, in turn, has led to a surge in the usage of video-calling apps as people stay at home while maintaining social distancing. Google is capitalising on the demand and updating its video-calling app, Google Duo, with a host of new features.

Last month, Google increased the participant limit from eight to 12 on its mobile app. The search engine giant has now confirmed another new Google Duo feature, this time for the web.

Google will roll out support for Google Duo group video calls on Chrome in the coming days. The update will also bring with it a new layout that will let users see more people at the same time. To make getting together easier, you'll also be able to invite anyone with a Google account to join a group call with just a link.

Google has also announced a new Family Mode that lets you doodle on video calls for everyone to see and surprise them with fun effects and masks that transform you into astronauts, cats and more.

"Just start a video call, tap the menu icon and then tap Family to get started. You don't have to worry about accidental mutes or hang-ups because we've hidden those buttons while you're playing together," Google said in its blog post.

First Published on May 11, 2020 05:44 pm

The rest is here:
Google Duo is coming to the web via Chrome; features Family mode, end-to-end encryption - Moneycontrol

SUSE CEO on how it will take on IBM’s Red Hat, grow through recession – Business Insider

Last July, Melissa Di Donato took the reins of the German cloud infrastructure company SUSE. Around the same time, IBM acquired SUSE's biggest rival Red Hat in a $34 billion deal.

That sale, combined with the likelihood of open source thriving during the coronavirus crisis, is increasing Di Donato's confidence in the company.

While that might sound counterintuitive (its fiercest competitor now has a larger war chest during an economic downturn), Di Donato feels that the opportunity has never been greater. She put it bluntly: "Our biggest competitor was taken out by IBM."

Meanwhile, downturns make open source software more attractive to businesses looking for cost savings, she argued.

"We provide a free alternative that alleviates the need to invest in proprietary software," she told Business Insider, "And [we] still maintain the high level of innovation delivered through our community."

The nearly 30-year-old SUSE builds open source Linux and cloud infrastructure products and then charges business customers for add-ons like support. Its offerings are similar to those of Red Hat, although SUSE launched a year before its biggest rival did.

Di Donato's belief that SUSE is particularly well-suited to weather the crisis has outside support, too: Experts say that open source software companies often grow stronger during economic downturns, and that open source software contributions may also rise as more people stay home.

"The opportunity is endless," Di Donato said.

Di Donato's assertion that Red Hat's sale to IBM is akin to being "taken out" comes perhaps from her company's own history. SUSE has been shuffled between different owners through a series of acquisitions over the years, from Novell to The Attachmate Group to Micro Focus. In 2019, Micro Focus sold SUSE to the private equity firm EQT for $2.5 billion, which made it an independent company once again.

Red Hat now has the resources and backing of its massive parent company but, at the same time, developers may feel less certain about its direction now that it's under IBM's umbrella. (Red Hat did not respond to a request for comment.)

If IBM hijacked Red Hat's services, it "would be doing their customers a huge disservice," Di Donato says.

Interestingly, IBM is one of SUSE's earliest and biggest partners, a relationship that hasn't shifted since IBM bought its competitor. The two companies still pay for each other's services.

"We're a big portion of their business that they're not willing to give up," she said. "While they do own Red Hat, the mantra with IBM always remains open and agnostic."

Beyond SUSE, Di Donato has decades of experience in enterprise tech. She was chief revenue officer of SAP's cloud portfolio and a veteran of Salesforce, IBM, PwC, and Oracle too. Notably, she's spent the bulk of her career in proprietary software instead of open source. Because of her experience, she said that she also knows how to communicate with giant customers and partners like SAP, Microsoft, and Dell.

"I come to this role with a very different perspective," Di Donato said. "The most important part of our business is our customers. Everything that we do pivots around customer business requirements."

While the seriousness of the coronavirus outbreak around the world started to really become apparent in late February and March, Di Donato says it's been on her mind since December because SUSE has a large workforce in China.

Ultimately, the company's corresponding adjustments haven't felt too dramatic because nearly 40% of its employees were working from home even prior to the pandemic. That's not uncommon for open source companies.

"We like to think open source is quite ahead of the curve in adopting the workforce of the future: We embrace this new way of working together," Di Donato said. "It's normal for us to be communicating this way online. We have no IP to hide within our company. It's openly shared across all our competitors in the open source Linux world that we collaborate on together."

Link:
SUSE CEO on how it will take on IBM's Red Hat, grow through recession - Business Insider

Open Networking Foundation Releases Reference Design for Converged Multi-Access and Core – The Fast Mode

The Open Networking Foundation on Thursday announced the release of the COMAC (Converged Multi-Access and Core) Reference Design (RD) specification.

In parallel, the COMAC exemplar platform (open source software) was released and deployed by T-Mobile Poland. The specification was authored by the network operators AT&T, China Unicom, Deutsche Telekom, Google, and Turk Telecom/Netsia.

COMAC is a platform for converging 5G and LTE mobile networks together with broadband access networks, and this RD joins ONF's portfolio of RD specifications for broadband (SEBA), optical (ODTN), SDN fabric (Trellis) and Next-Generation SDN. COMAC, like all ONF RDs, is authored by operators who have committed to deploying open source solutions based on the specification.

The exemplar platform open source software release of COMAC v1.0 was released in late 2019, and went into deployment by T-Mobile Poland in early 2020.

Oguz Sunay, VP R&D, Mobility, ONF: "With COMAC, ONF's operator partners are beginning to drive a sea change and substantial advancement towards combining and converging what were once wholly separate networks: the network edge, RAN and operator core."

Read the original post:
Open Networking Foundation Releases Reference Design for Converged Multi-Access and Core - The Fast Mode

Venafi flies off with UK-based Kubernetes startup – www.channelweb.co.uk

Cybersecurity vendor Venafi is set to acquire London-based Kubernetes security specialist Jetstack.

Jetstack was founded five years ago and specialises in open source machine identity protection software for Kubernetes and cloud native ecosystems.

The acquisition comes hot on the heels of VMware's acquisition of another Kubernetes security outfit, Octarine.

"In the race to virtualise everything, businesses need faster application innovation and better security; both are mandatory," said Jeff Hudson, Venafi CEO.

"Most people see these requirements as opposing forces, but we don't. We see a massive opportunity for innovation.

"This acquisition brings together two leaders who are already working together to accelerate the development process while simultaneously securing applications against attack, and there's a lot more to do. Our mutual customers are urgently asking for more help to solve this problem because they know that speed wins, as long as you don't crash."

Jetstack's most popular open source software is cert-manager, which allows developers to quickly create, connect and consume certificates with Kubernetes and cloud native tools.

The two firms have been working together for the past two years on creating next generation machine identity protection in Kubernetes, as well as in the multi-cloud, service mesh and microservices ecosystems.

Matt Bates, CTO and co-founder of Jetstack, said: "We developed cert-manager to make it easy for developers to scale Kubernetes with consistent, secure and declared-as-code machine identity protection.

"The project has been a huge hit with the community and has been adopted far beyond our expectations. Our team is thrilled to join Venafi so we can accelerate our plans to bring machine identity protection to the cloud native stack, grow the community and contribute to a wider range of projects across the ecosystem."

Read more:
Venafi flies off with UK-based Kubernetes startup - http://www.channelweb.co.uk

The Pandemic Is Turbo-charging Government Innovation: Will It Stick? – Knowledge@Wharton

As the U.S. government steps in to address the fallout from the COVID-19 pandemic, solutions can and should be designed that realize long-term benefits and capture the full potential of technology innovation in the years to come, write the authors of this opinion piece, John Goodman, Chief Executive Officer of Accenture Federal Services, and Ira Entis, Managing Director of Accenture Federal Services Growth and Strategy. The trick for making these solutions stick, they say, is transitioning from a focus on modernization to a culture of continuous innovation.

The world's getting more crowded, more interconnected, more complicated. Not surprisingly, the challenges the U.S. confronts today are increasingly complex, fast-shifting, and harder, if not impossible, to tackle with traditional tools and methods. Just consider persistent challenges like cybersecurity, healthcare, and the one we are all most concerned about right now: the COVID-19 pandemic and its many troubling repercussions. As government steps in to address these issues, solutions can and should be designed that realize long-term benefits and capture the full potential of technology innovation in the years to come.

Challenges like these really bring into focus the many critical roles that government agencies play in our lives. The current environment underscores how important it is that our government operate with the latest innovations and capabilities in hand. Government leaders need data-derived insights at their disposal and advanced technology tools that allow them to move rapidly and collaboratively as their mission-focused workloads require.

For policy makers, today's COVID-19 crisis is heightening the urgency for government modernization: The massive $2 trillion coronavirus aid package passed by Congress in March includes $340 billion in new government appropriations, much of which will go toward government telework, telehealth, cybersecurity, and network bandwidth initiatives.

Just as many companies are employing artificial intelligence, machine learning, and advanced data analytics to rapidly model potential COVID-19 vaccines and anticipate where the pandemic is likely to spread, so too is government turning to innovation for assistance. To cite just a few examples:

As government increasingly adopts and applies innovative technologies, whether in response to specific crises or to address other mission needs, we all reap benefits down the road, many of which are often unforeseen at first. Innovations that are ubiquitous today, including the Internet, GPS, touchscreen displays, smartphones, and voice-activated personal assistants, all stem from government investments.

Yet this impressive legacy of government-driven innovation often gets overshadowed by the outdated technology and archaic business processes that are still embedded across the government. Many federal agencies still rely on legacy IT systems and business processes, some of which are decades old, to run their day-to-day operations and businesses critical to national security and quality of life.

Until recently, government agencies have been relatively slow in adopting emerging technologies and commercial best practices, such as cloud computing, artificial intelligence, robotic process automation (RPA), human-centered design, and customer experience, that have been powering positive disruptions in the commercial sector for years.

In the government's defense, there have been legitimate reasons for that delay. Federal agencies have been slowed in their efforts to embrace modern technologies and practices by archaic procurement systems. They have tried to adapt commercial-off-the-shelf (COTS) software to rigid, complex, decades-old internal business processes that are often rooted in law and shaped by highly prescriptive compliance regulations. Given the highly sensitive nature of their data and missions, they have legitimate privacy and security concerns to sort out with many commercial technologies. The changing nature of work has also added workforce reskilling costs when onboarding new technologies and practices. And then consider the scale and complexity of many government operations and missions that would make the application of new technologies challenging under any circumstance.

It turns out that the government's delays in technology adoption over the last two decades may have worked in its favor. Commercial tech companies during that time have steadily improved the security, portability, scale, ease of use, and interoperability of their offerings. Emerging capabilities such as software-defined everything, virtualization, containerization, open source software, API connectivity, advanced encryption, advanced data visualization, robotic process automation (RPA), and machine learning have evolved in recent years to the point where commercial technologies today are exceedingly more adaptable to government needs and use cases.

And, to their credit, federal agencies throughout that period have been busy overhauling their outdated bureaucratic approaches, refreshing their modernization goals, and overcoming many of the tech adoption hurdles they previously faced. Federal leaders have persistently and effectively pressed for reforms, experimentation, and workarounds to smooth out the many points of friction that have slowed modernization progress.

Government Transformation Is Already Under Way

Consequently, government today is truly poised to achieve transformational change. In fact, we are already seeing progress toward this.

Most importantly, federal agencies are turning the corner in their adoption of commercial cloud services. Cloud computing has emerged as the centerpiece of federal IT modernization efforts, and agencies now fully recognize the power of the cloud to help them improve and expand their mission capabilities, increase agility and responsiveness, contain costs, and enhance security. In short, federal agencies correctly view cloud adoption as pivotal in their ability to leap-frog from being technology laggards to technology leaders.

As a result, we are seeing agencies make unprecedented investments in cloud services: Federal cloud spending, which totaled $2.4 billion in 2014, has steadily accelerated to a projected $7.1 billion for this fiscal year, according to Bloomberg Government. Agencies, such as the departments of Health and Human Services, Veterans Affairs, and the Air Force, are leading the charge on cloud, while others, such as the departments of Education, Treasury, State, and Agriculture, are embarking on enterprise-wide IT modernizations today.

Importantly, these cloud investments are now unlocking billions of dollars in spending on cloud-enabled digital services, artificial intelligence (AI), and other capabilities that can help agencies translate their vast, accumulating stores of data into mission-advancing insights and operational efficiencies. Civilian and defense agencies across government are steadily expanding their annual investments in AI and web and mobile-based government services from roughly $4 billion in 2016 to a conservative estimate of more than $6.6 billion projected for this fiscal year.

With cloud capabilities in place, agencies are positioned to use emerging technologies to tackle case management backlogs, improve citizen services, and deliver more holistic responses to today's emerging and complex challenges, such as multi-domain operations in defense, public health, cybersecurity, and the transitioning economy.

Government agencies are also beginning to apply private sector concepts like human-centered design, customer experience, and behavioral science in their service delivery. For example, more than two dozen agencies, ranging from the IRS and Transportation Security Administration (TSA) to the Office of Federal Student Aid, are designated as high-impact service providers, which means they must measure and improve the impact of the customer experiences they deliver to citizens. Likewise, agencies this year plan to accelerate their adoption of robotic process automation (RPA), or bots, to automate and modernize repeatable business processes so they can transition employees to higher-value work.

We are also now seeing steady rollouts of modernized citizen-facing digital experiences. For example, the U.S. Patent and Trademark Office is developing AI-enabled research tools to help its staff process patent and trademark applications more efficiently. Likewise, Health and Human Services plans to employ AI, intelligent automation, blockchain, micro-services, machine learning, natural language processing, and RPA to support services like medication adherence, decentralized clinical trials, evidence management, outcomes-based pricing, and pharmaceutical supply chain visibility.

These rollouts represent meaningful advances in the way government is stepping up to the challenge of modernization. But perhaps the hardest challenge lies ahead: Government agencies will need to think about and metabolize innovation in fundamentally different ways going forward. Cloud, AI, and automation cannot be thought of as a "one and done" modernization project, or even as a series of projects. Technologies will constantly advance, so agencies need a different mindset, one that views innovation as a non-stop journey of continuous evolution and adaptation. This means government agencies need to re-orient their strategic planning, budgeting, and cultures to think and plan in those terms.

Other new challenges will emerge as well. For example, the government needs to address how to maximize the use of its data and ensure that the data-driven capabilities it deploys are ethical, unbiased, and intuitive to use. In fact, we already see new tools in law enforcement, housing, commercial banking, and human resources that address the ethical use of AI. The Defense Department has also issued its own draft set of principles and recommendations on ethics and AI for department leaders to consider.

The Next Chapter in Government Innovation

So, what comes next in the government's innovation arc? As emerging technologies become more normalized within government environments, we expect to see agency leaders fundamentally shift their thinking about the role of technology in their agencies' mission success, just as we have seen occur in the private sector.

To illustrate this point, think of the most innovation-driven companies. Most, if not all, of them fundamentally view themselves as technology companies first, regardless of what business they are in. Amazon.com, for example, views itself as an information technology company that employs digital capabilities to produce a tremendous user experience and a highly synchronized logistics operation to support a large-scale retail business. Numerous other companies, including Federal Express, Disney and Capital One, to name just a few, invested in more robust digital capabilities so they could reinvent their own businesses and service-delivery models for a new era.

Government needs to do the same. But this can only happen when policy makers and agency leaders begin to view IT as a foundational investment in their future mission performance rather than a line-item business expense. Once this happens, this critical shift in mindset will set off a chain of new thinking among government employees and the public alike. As government begins experimenting and seeing successes as it invents new approaches to deliver services, conduct business, and meet its missions, we will see the culture shift as well.

For example, as modernization efforts advance, technology will remove much of the repetitive, manual work now done by government employees, freeing them up to take on higher-impact work where they will be more empowered, with access to data and modern tools, to contribute more independently and directly to their agencies' performance. However, to sustain this, government workplaces will need to build cultures that prioritize continuous, dynamic reskilling and training to keep pace with the velocity of change.

If managed well, we should see government begin to prioritize continuous innovation and use private sector best practices to produce a steady stream of future-focused pilot projects that can be iterated further and eventually scaled to wider applications. This can involve creating sandboxes and field labs that run innovation trials and pilot new technologies and approaches.

These efforts will lead to AI and data analytics becoming embedded in day-to-day enterprise operations and applied to massive amounts of structured and unstructured data. This will enable federal agencies to dramatically reduce the latency of information, produce greater insights, and better inform their policy and decision-making.

Eventually, government will move more aggressively beyond project-focused innovation efforts and transition to more scalable delivery frameworks and ecosystems. This means adopting open, standards-based technology platforms that enable agencies to be more future-ready, flexible, and capable of scaling innovation to far larger use cases. This is especially critical in an age where government agencies and the mission solutions they employ operate increasingly within a broader ecosystem of other government agencies, nonprofits, academia, start-ups, digital-savvy companies, and crowd-sourcing platforms.

Regardless of where the government's modernization journey leads from here, the important point is that the federal government is now poised to embrace the emerging technologies and innovative thinking driving much of today's advances, and to make dramatic leap-frog advances as a result.

Continue reading here:
The Pandemic Is Turbo-charging Government Innovation: Will It Stick? - Knowledge@Wharton

The coronavirus might have weak spots. Machine learning could help find them. – News@Northeastern

Chemically speaking, proteins might be the most sophisticated molecules out there. Millions of different kinds of them live within our cells and work together as a fine-tuned orchestra catalyzing the biochemical reactions that keep us alive.

Few things in the world would function without proteinsnot the cells within your body, and certainly not SARS-CoV-2, the coronavirus responsible for COVID-19.

The proteins in the coronavirus facilitate its remarkable ability to infect human cells without producing visible symptoms of COVID-19 for long periods of time. That's why researchers around the world have been investigating the roles of each of the 29 proteins packed inside SARS-CoV-2.

By learning more about each of those proteins at the molecular level, researchers want to pin down the exact parts of the SARS-CoV-2 proteins that enable the virus to bind to other proteins on the surface of human cells and to replicate. The idea is to inhibit those chemical reactions right from the start and render the coronavirus ineffective.

To analyze those protein interactions, Northeastern researchers are bringing another set of tools to study the coronavirus proteins down to their amino acids, the building blocks of all proteins.

Mary Jo Ondrechen, a professor of chemistry and chemical biology, wants to identify all of the amino acids responsible for the abilities of the coronavirus to infect and thrive at the expense of human cells. Together with Penny Beuning, a professor of chemistry and chemical biology, Ondrechen recently received a grant from the National Science Foundation to use machine learning algorithms and experimental lab work to do just that.

Proteins are long chains of molecules that function through cascading interactions with amino acids from other proteins. But those interactions don't always occur in the same place within the structure of a protein where the protein carries out its chemical reaction. Often, although the interactions happen outside of that site, they still control the reaction. A specific site within a protein can also control the action of different proteins, helping or hindering a specific chemical reaction.

Changes in protein behavior resulting from these networks of interactions, or from preventing interactions, are known as allosteric regulation. Ondrechen's algorithm predicts many of these and other types of interactions based on the specific molecular structures of proteins.

Research led by her and Beuning could help researchers gain a better understanding of the biochemistry of SARS-CoV-2, and serve as the basis for developing new drugs to inhibit its infectious abilities.

Researchers around the world have been rushing to develop new chemicals that show promise as compounds that could hinder the coronavirus by interacting with its main active proteins.

Still, scientists are just beginning to understand many of the coronavirus proteins. And, Ondrechen says, there might be sites within those poorly understood proteins that researchers are failing to notice.

The program, which Ondrechen's lab invented in 2009, analyzes the chemical properties of each of the individual amino acids within a protein. It could predict the roles of important but subtle interactions in SARS-CoV-2 involving amino acids that aren't directly linked to the main reaction sites, and which would be too difficult to analyze with conventional bioinformatic research.

"In the main protease, everybody knows where the catalytic site is; in the RNA transferase, everybody knows where the catalytic site is," Ondrechen says. "Our technology is special because we could predict exo-sites, allosteric sites, and other binding sites or interaction sites that can control."

The program will run those predictions against databases that include tens of thousands of compounds with anti-viral properties, as well as compounds found in food, all in a major attempt to find compounds that might hit the predicted sites of protein interaction.

Once the program runs the computational analysis to find candidate compounds to inhibit SARS-CoV-2, it will guide Beuning's experimental tests in her lab.

"We'll be looking at the protein level: Do the compounds actually bind those proteins, and do they modulate the activity of the protein?" Beuning says. "Ideally, they would inhibit the activity of the protein, and then impair the virus."

For the past 10 years, Ondrechen and Beuning have been combining their computational and experimental power to understand such questions as how proteins control the production of our DNA, and how proteins enable our bodies to carry out some of the most important metabolic functions.

Now, they are planning to move as fast as possible to identify important protein interactions in SARS-CoV-2, test them in the lab, and move on with further tests in live organisms.

"Our plans are to finish in six months," Ondrechen says. "If we come up with interesting compounds in vitro, hopefully we can find a collaborator that could do in vivo testing."

Read the original post:
The coronavirus might have weak spots. Machine learning could help find them. - News@Northeastern

Prosper Teams up With AWS Machine Learning Marketplace to Expand Access to China Consumer Targeting Models – Business Wire

WORTHINGTON, Ohio--(BUSINESS WIRE)--Prosper Insights & Analytics, in cooperation with the AWS Machine Learning Marketplace, has expanded their suite of China consumer marketing models. The models are created from data derived from the largest continuous survey of Chinese consumers, the Prosper China Quarterly. Prosper has been collecting the China Quarterly since 2006.

The propensity models are developed using AWS SageMaker advanced analytic tools and can be accessed through the AWS Machine Learning Marketplace. All Prosper models are 100% privacy compliant and never use any PII in any part of the process from collection through analysis.

All models are scored with metrics for accuracy, updated regularly and provide marketers with an enhanced targeting opportunity for the China market. Bespoke models available upon request. For more information, click here.

About Prosper Insights & Analytics

Prosper Insights & Analytics is a global leader in consumer intent data serving the financial services, marketing technology, retail and marketing industries. We provide global authoritative market information on US and China consumers via curated insights and analytics. By integrating Prosper's unique consumer data with a variety of other data, including behavioral, attitudinal and media data, Prosper helps companies accurately predict consumers' future behavior, optimize marketing efforts and improve the effectiveness of demand generation campaigns. http://www.ProsperModelFactory.com

More:
Prosper Teams up With AWS Machine Learning Marketplace to Expand Access to China Consumer Targeting Models - Business Wire

ValleyML is launching a Machine Learning and Deep Learning Boot Camp from July 14th to Sept 10th and AI Expo Series from Sept 21st to Nov 19th 2020….

Over the past year, in collaboration with IEEE and ACM, ValleyML has hosted numerous talks on contemporary topics in data science, machine learning, and artificial intelligence - bringing together technical experts and inquisitive audiences. During these times of unprecedented global lockdowns due to COVID-19 pandemic, now, more than ever, we need to bring people together. To that end, ValleyML will be expanding its outreach with its Virtual and Global events.

SANTA CLARA, Calif., May 14, 2020 /PRNewswire/ -- ValleyML (Valley Machine Learning and Artificial Intelligence) is the most active and important community of ML & AI Companies and Start-ups, Data Practitioners, Executives and Researchers. We have a global outreach to close to 200,000 professionals in AI and Machine Learning. The focus areas of our members are AI Robotics, AI in Enterprise and AI Hardware. We plan to cover the state-of-the-art advancements in AI technology. ValleyML sponsors include UL, MINDBODY Inc., Ambient Scientific Inc., SEMI, Intel, Western Digital, Texas Instruments, Google, Facebook, Cadence and Xilinx.

ValleyML Machine Learning and Deep Learning Boot Camp 2020: Build a solid foundation of Machine Learning / Deep Learning principles and apply the techniques to real-world problems. Get an IEEE PDH Certificate. Virtual live boot camp from July 14th to Sept 10th. Enroll and learn at the ValleyML Live Learning Platform (coupons: valleyml40 for 40% off if you register by June 1st; valleyml25 for 25% off if you register by July 1st).

Global Call for Presentations & Sponsors for ValleyML AI Expo 2020 conference series (Global & Virtual). A unified call for proposals from industry for ValleyML's AI Expo events focused on Hardware, Enterprise and Robotics is now open at ValleyML2020. Submit by June 1st to participate in a virtual and global series of 90-minute talks and discussions from Sept 21st to Nov 19th on Mondays-Thursdays. Sponsor AI Expo! Limited sponsorship opportunities available. These highly focused events welcome a community of CTOs, CEOs, Chief Data Scientists, product management executives and delegates from some of the world's top technology companies.

Committee for ValleyML AI Expo 2020:

Program Chair for AI Enterprise and AI Robotics series:

Mr. Marc Mar-Yohana, Vice President at UL.

Program Chair for AI Hardware series:

Mr. George Williams, Director of Data Science at GSI Technology.

General Chair:

Dr. Kiran Gunnam, Distinguished Engineer, Machine Learning and Computer Vision, Western Digital.

View original content:http://www.prnewswire.com/news-releases/valleyml-is-launching-a-machine-learning-and-deep-learning-boot-camp-from-july-14th-to-sept-10th-and-ai-expo-series-from-sept-21st-to-nov-19th-2020-virtual-and-global-live-events-301059663.html

SOURCE ValleyML

Read the original post:
ValleyML is launching a Machine Learning and Deep Learning Boot Camp from July 14th to Sept 10th and AI Expo Series from Sept 21st to Nov 19th 2020....

Waymo Develops a Machine Learning Model to Predict the Behavior of Other Road Users for its Self-Driving Vehicles – FutureCar

Author: Eric Walz

The emerging field of machine learning has important uses in a variety of fields, such as finance, healthcare and business. For self-driving cars, machine learning is an important tool that can be used to predict the behavior of other road users, such as other human drivers, pedestrians and bicyclists.

For Alphabet subsidiary Waymo, machine learning is one of the most important tools in its arsenal for building the world's best autonomous driving system, which it calls "Waymo Driver," one that performs better than a human driver.

However, unlike a computer, human drivers have the ability to anticipate and predict what others on the road might do and can learn from past experiences, something that's difficult to train a computer to do and that requires lots of processing power. So Waymo developed its own machine learning model that can do the same job with less compute.

For example, when approaching an intersection a human driver might anticipate that another driver traveling in the opposite direction will make a left turn in their path or a pedestrian may enter the roadway. By anticipating this behavior the driver can mentally prepare to brake if needed. Predicting these types of behaviors for a computer is challenging for engineers working on self-driving vehicles.

That's where machine learning comes into play, allowing Waymo's autonomous vehicles to make better decisions. Machine learning is commonly used to model and reduce some of this complexity, thereby enabling the self-driving system to learn new types of behavior.

Waymo collects real-world data from driving millions of miles, plus billions more miles in computer simulation built from data collected by its fleet. To navigate, Waymo's autonomous vehicles rely on highly complex, high-definition maps and vehicle sensor data. However, this data alone is not enough to make predictions, according to Waymo.

Simplifying a Complex Scene

The behavior of other road users is often complicated and difficult to capture with just map-derived traffic rules because driving patterns vary and human drivers often break the rules they're supposed to follow.

The most popular way to incorporate highly detailed, centimeter-level maps into behavior prediction models is by rendering the map into pixels and encoding all of the scene information, such as traffic signs, crosswalks, road lanes, and road boundaries, with a convolutional neural network (CNN).

However, this method requires a tremendous amount of processor power and takes time (latency), which is not ideal for a self-driving vehicle that needs to make decisions in a fraction of a second.

To address these issues and make better predictions, Waymo developed a new model it calls "VectorNet," which provides more accurate behavior predictions while using less compute than CNNs, the company says. VectorNet essentially takes a highly complex scene and simplifies it using vectors so it can be processed with less computing power.

Map features can be simplified into vectors, which are easier for machine learning models to process.

This complex scene can be broken down into vectors to make it easier to process.

For example, an intersection crosswalk can be represented as a polygon defined by several points, and a stop sign can be represented by a single point. Road curves can be approximately represented as polylines by "connecting the dots." These polylines are then further split into vector fragments.

In this way, Waymo's engineers are able to represent all the road features and the trajectories of the other objects as a set of simplified vectors instead of a highly complex scene, which is much more difficult to work with. With this simplified view, Waymo designed VectorNet to effectively process its vehicle sensor data and map inputs.
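As a rough, toy illustration of the idea (this is a simplified sketch, not Waymo's actual code), a road curve sampled as a handful of points can be turned into vector fragments like this:

# Toy example: a road curve sampled as made-up (x, y) points
points = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.8), (3.0, 1.8)]

# Each consecutive pair of points becomes one vector fragment (start, end)
vectors = [(points[i], points[i + 1]) for i in range(len(points) - 1)]

for start, end in vectors:
    print(start, "->", end)

# A stop sign would reduce to a single point and a crosswalk to a small closed
# polygon, so the whole scene becomes one flat collection of such vectors.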

The neural network is implemented to capture the relationships between various vectors. These relationships occur when, for example, a car enters an intersection or a pedestrian approaches a crosswalk. Through learning such interactions between road features and object trajectories, VectorNet's data-driven, machine learning-based approach allows Waymo to better predict other agents' behavior by learning from different behavior patterns.

Waymo proposed a novel hierarchical graph neural network. The first level is composed of polyline subgraphs, in which VectorNet gathers information within each polyline. In the second level, called the "global interaction graph," VectorNet exchanges information among polylines.

Here is the simplified intersection with the input vectors that are converted to polyline subgraphs.

To further boost VectorNet's capabilities and understanding of the real world, Waymo trained the system to learn from context clues to make inferences about what could happen next around the vehicle to make improved behavior predictions.

For example, important scene information can often be obscured while driving, such as a tree branch blocking a stop sign. When this happens to a human driver, they can draw upon past experiences about the possibility of the stop sign being there, although they cannot see it. Machine learning makes these types of predictions using inference.

To further improve the accuracy of VectorNet, Waymo randomly masks out map features during training, such as a stop sign at a four-way intersection, and requires the network to complete them.

In this way, VectorNet can further improve the Waymo Driver's understanding of the world around it and be better prepared for any unexpected situations.

The intersection is broken down to create a global interaction graph.

Waymo validated the performance of VectorNet with the task of trajectory prediction, an important task for a self-driving vehicle that interacts with human drivers on the road. Compared with ResNet-18, one of the most advanced and widely used CNNs, VectorNet achieves up to 18% better performance while using only 29% of the parameters and consuming just 20% of the computation when there are 50 agents (other vehicles, pedestrians) per scene, Waymo reported.

Also this week, Waymo announced its latest funding round of $750 million. With the new funding, Waymo has raised $3 billion since March. The company is working on self-driving cars, commercial robotaxis and self-driving trucks that will all be powered by its advanced AI software.

Visit link:
Waymo Develops a Machine Learning Model to Predict the Behavior of Other Road Users for its Self-Driving Vehicles - FutureCar

A Lightning-Fast Introduction to Deep Learning and TensorFlow 2.0 – Built In

From navigating to a new place to picking out new music, algorithms have laid the foundation for large parts of modern life. Similarly, artificial intelligence is booming because it automates and backs so many products and applications. Recently, I addressed some analytical applications for TensorFlow. In this article, I'm going to lay out a higher-level view of Google's TensorFlow deep learning framework, with the ultimate goal of helping you to understand and build deep learning algorithms from scratch.

Over the past couple of decades, deep learning has evolved rapidly, leading to massive disruption in a range of industries and organizations. The field traces its roots to 1943, when Warren McCulloch and Walter Pitts created a computer model based on the neural networks of the human brain, the first artificial neural network (ANN). Deep learning now denotes a branch of machine learning that deploys data-centric algorithms in real time.

Backpropagation is a popular algorithm that has had a huge impact on the field of deep learning. It allows ANNs to learn by themselves based on the errors they generate while learning. To further extend the scope of an ANN, architectures like Convolutional Neural Networks, Recurrent Neural Networks, and Generative Networks have come into the picture. Before we delve into them, let's first understand the basic components of a neural network.

Neurons and Artificial Neural Networks

An artificial neural network is a representational framework that extracts features from the data it's given. The basic computational unit of an ANN is the neuron. Neurons are connected using artificial layers through which the information passes. As the information flows through these layers, the neural network identifies patterns in the data. This type of processing makes ANNs useful for several applications, such as prediction and classification.

Now let's take a look at the basic structure of an ANN. It consists of three kinds of layers: the input layer and the output layer, whose sizes are always fixed, and the hidden layers. Inputs initially pass through the input layer, which always accepts a constant set of dimensions. For instance, if we wanted to train a classifier that differentiates between dogs and cats, the inputs (in this case, images) should be of the same size. The input then passes through the hidden layers, where the network updates the weights and recognizes patterns. In the final step, we classify the data at the output layer.

Weights and Biases

Every neuron inside a neural network is associated with two parameters: a weight and a bias. The weight is a number that controls the signal between any two neurons. If the output is desirable, meaning that it is close to the one we expected the network to produce, then the weights are ideal. If the network generates an erroneous output that's far from the actual one, it alters the weights to improve subsequent results.

Bias, the other parameter, is the algorithm's tendency to consistently learn the wrong thing by not taking into account all the information in the data. For the model to be accurate, bias needs to be low. If there are inconsistencies in the dataset, like missing values, too few data tuples, or erroneous input data, the bias will be high and the predicted values could be wrong.

Working of a Neural Network

Before we get started with TensorFlow, let's examine how a neural network produces an output from weights, biases, and inputs by taking a look at the first neural network, the Perceptron, which dates back to 1958. The Perceptron network is a simple binary classifier. Understanding how it works will allow us to comprehend the workings of a modern neuron.

The Perceptron network is a supervised machine learning technique that uses a binary classifier function by mapping a vector of binary variables to a single binary output. It works as follows:

Multiply the inputs (x1, x2, x3) of the network by their corresponding weights (w1, w2, w3).

Add the multiplied values together. This is called the weighted sum, denoted by x1*w1 + x2*w2 + x3*w3.

Apply the activation function: determine whether the weighted sum is greater than a threshold (say, 0.5); if yes, assign 1 as the output, otherwise assign 0. This is a simple step function.
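As a concrete sketch of those three steps (the example inputs, weights, and the 0.5 threshold below are arbitrary choices):

import numpy as np

def perceptron(x, w, threshold=0.5):
    # Steps 1 and 2: weighted sum of the inputs, x1*w1 + x2*w2 + x3*w3
    weighted_sum = np.dot(x, w)
    # Step 3: step activation, output 1 if the sum clears the threshold, else 0
    return 1 if weighted_sum > threshold else 0

x = np.array([1.0, 0.0, 1.0])  # example binary inputs
w = np.array([0.4, 0.3, 0.2])  # example weights
print(perceptron(x, w))        # 1, because 1.0*0.4 + 0.0*0.3 + 1.0*0.2 = 0.6 > 0.5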

Of course, the Perceptron is a simple neural network that doesn't wholly cover all the concepts necessary for an end-to-end neural network. Therefore, let's go over all the phases that a neural network has to go through to build a sophisticated ANN.

Input

A neural network has to be defined with the number of input dimensions, output features, and hidden units. All these metrics fall in a common basket called hyperparameters. Hyperparameters are numeric values that determine and define the neural network structure.

Weights and biases are set randomly for all neurons in the hidden layers.

Feed Forward

The data is sent into the input and hidden layers, where the weights get updated for every iteration. This creates a function that maps the input to the output data. Mathematically, it is defined as y = f(x), where y is the output, x is the input, and f is the activation function.

For every forward pass (when the data travels from the input to the output layer), the loss is calculated (the actual value minus the predicted value). The loss is then sent back through the network (backpropagation), and the network is retrained using a loss function.

Output Error

The loss is gradually reduced using gradient descent and the loss function.

The gradient descent can be calculated with respect to any weight and bias.

Backpropagation

We backpropagate the error that traverses through each and every layer using the backpropagation algorithm.

Output

By minimizing the loss, the network re-updates the weights for every iteration (One Forward Pass plus One Backward Pass) and increases its accuracy.

As we haven't yet talked about what an activation function is, I'll expand on that a bit in the next section.

Activation Functions

An activation function is a core component of any neural network. It learns a non-linear, complex functional mapping between the input and the response variables or output. Its main purpose is to convert the input signal of a node in an ANN into an output signal; that output signal then becomes the input to the subsequent layer in the stack. There are several types of activation functions available that can be used for different use cases. You can find a list of the most popular activation functions, along with their respective mathematical formulae, here.
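For reference, a few of the most common activation functions take only a line or two each; here is a small NumPy sketch:

import numpy as np

def step(x):
    # The Perceptron's original activation: 1 if the input is positive, else 0
    return np.where(x > 0, 1, 0)

def sigmoid(x):
    # Squashes any input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive values through and zeroes out negatives
    return np.maximum(0.0, x)

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # approximately [0.119 0.5 0.881]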

Now that we understand what a feed-forward pass looks like, let's also explore the backward propagation of errors.

Loss Function and Backpropagation

During training of a neural network, there are too many unknowns to be deciphered. As a result, calculating the ideal weights for all the nodes in a neural network is difficult. Therefore, we use an optimization function through which we can navigate the space of possible weights in order to make good predictions with a trained neural network.

We use a gradient descent optimization algorithm, wherein the weights are updated using the backpropagation of error. The term "gradient" in gradient descent refers to an error gradient: the model with a given set of weights is used to make predictions, and the error for those predictions is calculated. The gradient descent optimization algorithm calculates the partial derivatives of the loss function (the error) with respect to any weight w and bias b. In practice, this means that the error vectors are calculated starting from the final layer and then moving towards the input layer while updating the weights and biases, i.e., backpropagation. This is based on differentiation of the respective error terms along each layer. To make our lives easier, however, these loss functions and backpropagation algorithms are readily available in neural network frameworks such as TensorFlow and PyTorch.

Moreover, a hyperparameter called the learning rate controls how much the network's weights are adjusted with respect to the error gradient. The lower the learning rate, the slower we travel down the slope (towards the optimum, or so-called ideal case) while reducing the loss.
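To make the weight-update idea concrete, here is a tiny, self-contained gradient-descent loop that fits a single weight to the line y = 2x (the data, learning rate, and iteration count are arbitrary choices):

import numpy as np

# Toy data: y = 2x, so the ideal weight is 2.0
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

w = 0.0               # initial weight
learning_rate = 0.05  # the hyperparameter described above

for _ in range(200):
    y_pred = w * x                        # forward pass
    grad = np.mean(2 * (y_pred - y) * x)  # derivative of the mean squared error with respect to w
    w -= learning_rate * grad             # gradient-descent update

print(round(w, 3))  # approaches 2.0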

TensorFlow is a powerful neural network framework that can be used to deploy high-level machine learning models into production. It was open-sourced by Google in 2015. Since then, its popularity has increased, making it a common choice for building deep learning models. On October 1st, 2019, a new, stable version was released, called TensorFlow 2.0, with a few major changes:

Eager Execution by Default: Instead of creating a tf.Session(), we can directly execute code as usual Python code. In TensorFlow 1.x, we had to create a TensorFlow graph before computing any operation. In TensorFlow 2.0, however, we can build neural networks on the fly.

Keras Included: Keras is a high-level neural network API built on top of TensorFlow. It is now integrated into TensorFlow 2.0, and we can directly import it as tf.keras to define our neural networks.

TF Datasets: A lot of new datasets have been added to work and play with in a new module called tf.data.

1.x Support: All existing TensorFlow 1.x code can be executed using TensorFlow 2.0; we need not modify any of our previous code.

Major documentation and API cleanup changes have also been introduced.
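As a minimal sketch of what those changes look like in practice (the layer sizes and loss below are arbitrary), the following runs an operation eagerly and defines a small model through the integrated tf.keras API:

import tensorflow as tf

# Eager execution: operations run immediately and return concrete values
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.reduce_sum(x))  # tf.Tensor(10.0, shape=(), dtype=float32)

# Keras is built in: a small network can be defined directly via tf.keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()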

The TensorFlow library was built around computational graphs and a runtime for executing such graphs. Now, let's perform a simple operation in TensorFlow.
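A minimal version of that operation, matching the walkthrough below, might look like this (the numeric values are placeholders):

import tensorflow as tf

a = tf.constant(5.0)  # assumed example value
b = tf.constant(3.0)  # assumed example value

prod = a * b     # product of a and b
sum = a + b      # sum of a and b (note: this name shadows Python's built-in sum)
result = prod / sum

print(result)    # tf.Tensor(1.875, shape=(), dtype=float32)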

Here, we declared two variables, a and b. We calculated the product of those two variables using the multiplication operation in Python (*) and stored the result in a variable called prod. Next, we calculated the sum of a and b and stored it in a variable named sum. Lastly, we declared a result variable that divides the product by the sum and then printed it.

This explanation is just a Pythonic way of understanding the operation. In TensorFlow, each operation is considered part of a computational graph. This is a more abstract way of describing a computer program and its computations. It helps in understanding the primitive operations and the order in which they are executed. In this case, we first multiply a and b, and only when this expression is evaluated do we take their sum. Later, we take prod and sum and divide them to output the result.

TensorFlow Basics

To get started with TensorFlow, we should be aware of a few essentials related to computational graphs. Lets discuss them in brief:

Variables and Placeholders: TensorFlow uses the usual variables, which can be updated at any point in time, except that they need to be initialized before the graph is executed. Placeholders, on the other hand, are used to feed data into the graph from outside. Unlike variables, they don't need to be initialized. Consider a regression equation, y = mx + c, where x and y are placeholders and m and c are variables.

Constants and Operations: Constants are the numbers that cannot be updated. Operations represent nodes in the graph that perform computations on data.

The graph is the backbone that connects all the variables, placeholders, constants, and operators.

Prior to installing TensorFlow 2.0, it's essential that you have Python on your machine. Let's look at the installation procedure.

Python for Windows

You can download it here.

Click on the Latest Python 3 release - Python x.x.x. Select the option that suits your system (32-bit - Windows x86 executable installer, or 64-bit - Windows x86-64 executable installer). After downloading the installer, follow the instructions displayed in the setup wizard. Make sure to add Python to your PATH using environment variables.

Python for OSX

You can download it here.

Click on the Latest Python 3 release - Python x.x.x. Select the macOS 64-bit installer, and run the file.

Python on OSX can also be installed using Homebrew (package manager).

To do so, type the following commands:
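brew update
brew install python3  # typical Homebrew invocation; may vary by setup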

Python for Debian/Ubuntu

Invoke the following commands:
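sudo apt update
sudo apt install python3 python3-pip  # typical Debian/Ubuntu invocation; may vary by setup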

This installs the latest version of Python and pip in your system.

Python for Fedora

Invoke the following commands:
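sudo dnf install python3 python3-pip  # typical Fedora invocation; may vary by setup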

This installs the latest version of Python and pip in your system.

After youve got Python, its time to install TensorFlow in your workspace.

To fetch the latest version, pip3 needs to be updated. To do so, type the command
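pip3 install --upgrade pip  # typical invocation; may vary by setup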

Now, install TensorFlow 2.0.
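pip3 install --upgrade tensorflow  # typical command; installs or upgrades to the latest stable release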

This automatically installs the latest version of TensorFlow onto your system. The same command is also applicable to update the older version of TensorFlow.

The argument tensorflow in the above command could be any of these:

tensorflow: Latest stable release (2.x) for CPU only.

tensorflow-gpu: Latest stable release with GPU support (Ubuntu and Windows).

tf-nightly: Preview build (unstable). Ubuntu and Windows builds include GPU support.

tensorflow==1.15: The final version of TensorFlow 1.x.

To verify your install, execute the code:
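# a quick sanity check of the installed version
import tensorflow as tf
print(tf.__version__)  # should print a 2.x version string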

Now that you have TensorFlow on your local machine, Jupyter notebooks are a handy tool for setting up the coding space. Execute the following command to install Jupyter on your system:
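pip3 install jupyter  # typical invocation; may vary by setup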

Now that everything is set up, lets explore the basic fundamentals of TensorFlow.

Tensors have previously been used largely in math and physics. In math, a tensor is an algebraic object that obeys certain transformation rules. It defines a mapping between objects and is similar to a matrix, although a tensor has no specific limit to its possible number of indices. In physics, a tensor has the same definition as in math, and is used to formulate and solve problems in areas like fluid mechanics and elasticity.

Although tensors were not deeply used in computer science, after the machine learning and deep learning boom, they have become heavily involved in solving data crunching problems.

Scalars

The simplest tensor is a scalar, which is a single number and is denoted as a rank-0 tensor or a 0th order tensor. A scalar has magnitude but no direction.

Vectors

A vector is an array of numbers and is denoted as a rank-1 tensor or a 1st order tensor. Vectors can be represented as either column vectors or row vectors.

A vector has both magnitude and direction. Each value in the vector gives the coordinate along a different axis, thus establishing direction. It can be depicted as an arrow; the length of the arrow represents the magnitude, and the orientation represents the direction.

Matrices

A matrix is a 2D array of numbers where each element is identified by a set of two numbers, row and column. A matrix is denoted as a rank-2 tensor or a 2nd order tensor. In simple terms, a matrix is a table of numbers.

Tensors

A tensor is a multi-dimensional array with any number of indices. Imagine a 3D array of numbers, where the data is arranged as a cube: that's a tensor. When it's an nD array of numbers, that's a tensor as well. Tensors are usually used to represent complex data. When the data has many dimensions (>=3), a tensor is helpful in organizing it neatly. After initializing, a tensor of any number of dimensions can be processed to generate the desired outcomes.

TensorFlow represents tensors with ease using simple functionalities defined by the framework. Further, the mathematical operations that are usually carried out with numbers are implemented using the functions defined by TensorFlow.

Firstly, let's import TensorFlow into our workspace. To do so, invoke the following command:
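import tensorflow as tf  # alias TensorFlow as tf for the rest of the examples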

This enables us to use the variable tf thereafter.

Now, let's take a quick overview of the basic operations and math; you can simultaneously execute the code in the Jupyter playground for a better understanding of the concepts.

tf.Tensor

The primary object in TensorFlow that you play with is tf.Tensor. This is a tensor object that is associated with a value. It has two properties bound to it: data type and shape. The data type defines the type and size of data that will be consumed by a tensor. Possible types include float32, int32, string, et cetera. Shape defines the number of dimensions.
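For instance (the values here are arbitrary), tensors of different ranks and their two properties:

scalar = tf.constant(3)                       # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])         # rank 1, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])        # rank 2, shape (2, 2)
cube = tf.constant([[[1], [2]], [[3], [4]]])  # rank 3, shape (2, 2, 1)

print(vector.dtype, vector.shape)  # <dtype: 'float32'> (3,)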

tf.Variable()

The tf.Variable constructor requires an initial value, which can be a tensor of any shape and type. After creating the instance, the variable is added to the TensorFlow graph and can be modified using any of the assign methods. It is declared as follows:
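A minimal example, assuming an integer initial value of 10 (the line shown under "Output" below follows from that choice):

v = tf.Variable(10)  # create a variable initialized to 10
print(v)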

Output:
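<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=10>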

tf.constant()

The tensor is populated with a value, dtype, and, optionally, a shape. This value remains constant and cannot be modified further.
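For instance (the values here are arbitrary):

c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(c.dtype)  # <dtype: 'float32'>
print(c.shape)  # (2, 2)
print(c)        # prints the values along with shape=(2, 2) and dtype=float32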

See the rest here:
A Lightning-Fast Introduction to Deep Learning and TensorFlow 2.0 - Built In