Top Machine Learning Projects Launched By Google In 2020 (Till Date) – Analytics India Magazine

It may be that time of the year when New Year resolutions start to fizzle, but Google seems to be just getting started. The tech giant has been building tools and services to bring the benefits of artificial intelligence (AI) to its users, and it has begun upping its arsenal of AI-powered products with a string of new releases this month alone.

Here is a list of the top products launched by Google in January 2020.

First introduced in 2014, sequence-to-sequence (seq2seq) AI models have, in their latest iterations, strengthened key text-generation tasks including sentence formation and grammar correction. Google's LaserTagger, which the company has open-sourced, speeds up the text generation process and reduces the chances of errors.

Compared to traditional seq2seq methods, LaserTagger computes predictions up to 100 times faster, making it suitable for real-time applications. Furthermore, it can be plugged into an existing technology stack without adding any noticeable latency on the user side because of its high inference speed. These advantages become even more pronounced when applied at a large scale.
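The speed-up comes from recasting generation as tagging: instead of producing every output token from a large vocabulary, the model predicts a small edit tag per input token. Below is a minimal, illustrative sketch of that idea in Python; the tag scheme (`KEEP`, `DELETE`, `KEEP|phrase`) is a simplification for demonstration, not Google's actual implementation.

```python
# Sketch of tagging-based text editing, the idea behind LaserTagger.
# Each input token receives an edit tag: KEEP, DELETE, or KEEP|<phrase>
# (insert <phrase> before keeping the token). Because outputs heavily
# overlap inputs, predicting tiny tags is far cheaper than generating
# every token with a full seq2seq decoder.

def apply_edit_tags(tokens, tags):
    """Reconstruct the output text from per-token edit tags."""
    out = []
    for token, tag in zip(tokens, tags):
        if "|" in tag:                       # KEEP|phrase: insert phrase, then keep token
            out.append(tag.split("|", 1)[1])
            out.append(token)
        elif tag == "KEEP":
            out.append(token)
        # DELETE: drop the token entirely
    return " ".join(out)

# Grammar correction as editing: "he go to school" -> "he goes to school"
tokens = ["he", "go", "to", "school"]
tags = ["KEEP", "DELETE", "KEEP|goes", "KEEP"]
corrected = apply_edit_tags(tokens, tags)
```

A model only has to choose among a handful of tags per position, which is why inference can be orders of magnitude faster than generating free-form text.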

The company has expanded its Coral lineup by unveiling two new Coral AI products: the Coral Dev Board Mini and the Coral Accelerator Module. Announced ahead of the Consumer Electronics Show (CES) this year, the latest additions to the Coral family follow a successful beta run of the platform in October 2019.

The Coral Accelerator Module is a multi-chip package that encapsulates the company's custom-designed Edge Tensor Processing Unit (TPU). The chip inside the Coral Dev Board is designed to execute multiple computer vision models at 30 frames per second, or a single model at over 100fps. Users of this technology say it is easy to integrate into custom PCB designs.

The Coral Accelerator Module, a new multi-chip module with the Google Edge TPU.

Google has also released the Coral Dev Board Mini which provides a smaller form-factor, lower-power, and a cost-effective alternative to the Coral Dev Board.

The Coral Dev Board Mini is a cheaper, smaller, and lower-power version of the Coral Dev Board.

Officially announced in March 2019, the Coral products were intended to help developers work more efficiently by creating AI that works locally, reducing their reliance on connections to cloud-based systems.

Chatbots are one of the hottest trends in AI owing to their tremendous growth in applications, and Google has added to the mix with Meena, a human-like, multi-turn, open-domain chatbot. Meena has been trained end to end on data mined from public social media conversations, totalling more than 300GB of text. It is also massive in size, with a 2.6-billion-parameter neural network, and has been trained to minimize the perplexity of the next token.
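Perplexity, the training objective mentioned above, measures how "surprised" a model is by the tokens it actually observes. A minimal sketch of the standard computation (the probabilities below are invented for illustration, not Meena's):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood of the observed tokens).

    token_probs: the probability the model assigned to each token that
    actually occurred. Lower perplexity means better next-token prediction.
    """
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every observed token is exactly
# as "perplexed" as a uniform choice among 4 options, i.e. perplexity of 4.
uniform_four_way = perplexity([0.25, 0.25, 0.25, 0.25])
```

Minimizing this quantity pushes the model to spread less probability mass over implausible continuations, which is why Google uses it as a proxy for conversational quality.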

Furthermore, Google's human evaluation metric, called Sensibleness and Specificity Average (SSA), captures the key elements of a human-like multi-turn conversation, making this chatbot even more versatile. In a blog post, Google claimed that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots.

Billed as an important development of Google's Transformer, the novel neural network architecture for language understanding, Reformer is intended to handle context windows of up to 1 million words, all on a single AI accelerator using only 16GB of memory.

Google first mooted the idea of a new transformer model in a 2019 research paper written in collaboration with UC Berkeley. The core idea behind this model is self-attention: the ability to attend to different positions of an input sequence to compute a representation of that sequence, as elaborated in one of our articles.
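Reformer reaches such long contexts partly by approximating full self-attention with locality-sensitive hashing (LSH): tokens are hashed into buckets so that attention only needs to be computed among bucket-mates rather than over all pairs. A toy sketch of the hashing step, using fixed hyperplanes chosen for illustration (Reformer's real implementation uses random rotations inside the model):

```python
# Toy locality-sensitive hashing via random-hyperplane sign bits.
# Similar vectors land on the same side of each hyperplane, so they
# hash to the same bucket; attention can then be restricted to buckets,
# reducing cost from O(n^2) toward O(n log n).

def lsh_bucket(vec, hyperplanes):
    """Hash a vector to a bucket id from the signs of its dot products."""
    bits = 0
    for h in hyperplanes:
        dot = sum(v * w for v, w in zip(vec, h))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

# Fixed hyperplanes for a deterministic demo (illustrative values only).
hyperplanes = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, -1.0, 0.0],
    [0.5, 0.5, 0.5, 0.5],
]

a = [1.0, 0.2, -0.5, 0.3]     # two nearly identical "token" vectors...
b = [1.01, 0.19, -0.5, 0.31]  # ...should share a bucket
c = [-1.0, -0.2, 0.5, -0.3]   # a dissimilar vector lands elsewhere
```

Nearly identical vectors share all sign bits and therefore a bucket, which is the property that lets attention skip most token pairs.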

Today, Reformer can process whole books on a single device, exhibiting great potential.

Google has time and again reiterated its commitment to the development of AI. Seeing it as more profound than fire or electricity, the company firmly believes that this technology can eliminate many of the constraints we face today.

The company has also delved into research anchored around AI that is spread across a host of sectors, whether it be detecting breast cancer or protecting whales or other endangered species.


Speechmatics and Soho2 apply machine learning to analyse voice data – Finextra

Speechmatics and Soho2 have today announced a partnership to deliver consulting services to their customers, along with a new product offering, Speech2.

Soho2 has significant depth in delivering machine-learning-driven solutions to market. The new product from Soho2 will give companies in legal, compliance, and contact centers the invaluable ability to analyze voice data garnered from calls. Speech2 enables companies to bring new levels of flexibility to data analysis for high-volume, real-time, or recorded voice data through mission-critical, accurate speech recognition.

Using AI and machine learning, the solution will deliver an unparalleled ability to derive insight from voice data and also manage risk. The product can be deployed in any customer-managed environment to enable control over personal or sensitive data to be retained.

As part of the new product offering, Speechmatics, a UK leader in any-context speech recognition technology, will transcribe voice data into accurate, contextualized text for analysis. Speech2 will allow businesses to identify and address risks, as well as pinpoint missed sales opportunities. The product can also identify cases of fraud, while the legal industry can use it to identify risks in the data and even aid event reconstruction.

George Tziahanas, Managing Partner of Soho2, said: "Our experience demonstrates the potential for great innovation in machine learning, delivering huge commercial value to enterprises across industries. We teamed up with Speechmatics to ensure our latest services and product deliver the best speech recognition technology on the market. The partnership enables us to innovate with voice securely, which is crucial to our customers and industries."

Jeff Palmer, VP of Sales at Speechmatics, added: "Speech2 will deliver unparalleled insights and risk management abilities, using Speechmatics' any-context speech recognition engine. Soho2 also brings depth in services that deliver high-value machine learning solutions, which will benefit their customer base. We're excited to be working with Soho2 and seeing how their customers derive value from their voice data and view it with a renewed sense of curiosity."


Reinforcement Learning: An Introduction to the Technology – Yahoo Finance

NEW YORK, Feb. 3, 2020 /PRNewswire/ --

Report includes:
- A general framework for deep Reinforcement Learning (RL), also known as a semi-supervised learning model in the machine learning paradigm

Read the full report: https://www.reportlinker.com/p05843529/?utm_source=PRN

- Assessing the breadth and depth of RL applications in real-world domains, including increased data efficiency and stability as well as multi-tasking
- Understanding the RL algorithm from different aspects, and persuading decision-makers and researchers to put more effort into RL research

Reasons for doing this report: These days, machine learning (ML), a subset of computer science, is one of the most rapidly growing fields in the technology world. It is considered a core field for implementing artificial intelligence (AI) and data science.

The adoption of data-intensive machine learning methods like reinforcement learning is playing a major role in decision-making across various industries such as healthcare, education, manufacturing, policing, financial modelling, and marketing. The growing demand for more complex machine capabilities is driving demand for learning-based methods in the ML field.

Reinforcement learning also presents a unique opportunity to address the dynamic behavior of systems. This study was conducted to understand the current state of reinforcement learning and track its adoption across various verticals, and it seeks to put forth ways to fully exploit the benefits of this technology. It will serve as a guide and benchmark for technology vendors, manufacturers of the hardware that supports AI, and the end users who will ultimately use this technology.

Decision-makers will find the information useful in developing business strategies and in identifying areas for research and development.

Read the full report: https://www.reportlinker.com/p05843529/?utm_source=PRN

About Reportlinker: ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need, instantly, in one place.

Contact Clare: clare@reportlinker.com, US: (339)-368-6001, Intl: +1 339-368-6001


View original content: http://www.prnewswire.com/news-releases/reinforcement-learning-an-introduction-to-the-technology-300997487.html

SOURCE Reportlinker


The Role of AI and Machine Learning in Cybersecurity – Analytics Insight

AI and machine learning are the kind of buzzwords that generate a lot of interest; hence, they get thrown around all the time. But what do they actually mean? And are they as instrumental to the future of cybersecurity as many believe?

When a large set of data is involved, having to analyze it all by hand seems like a nightmare. It's the kind of work one would describe as boring and tedious, not to mention that it would take a lot of staring at the screen to find what you've set out to discover.

The great thing about machines and technology is that, unlike humans, they never get tired. They are also better geared toward noticing patterns. Machine learning is what you get when you teach your tools how to spot patterns; AI helps you interpret it all better and makes the solution self-sufficient.

Cybersecurity solutions (antivirus scanners in particular) are all about spotting a pattern and planning the right response. These scanners rely on heuristic modeling, which gives them the ability to recognize a piece of code as malicious even if no one has flagged it as such before. In essence, it comes down to teaching the software to recognize something out of the ordinary and alert you.

As soon as something oversteps the threshold of tolerance, it triggers an alarm. From there on out, the rest is up to the user. For instance, the user may instruct the antivirus software to move the infected file to quarantine. It can do so with or without human intervention.
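The threshold idea above can be sketched in a few lines. This is a toy heuristic scanner for illustration only: the trait names and weights are invented, not drawn from any real antivirus engine.

```python
# Toy heuristic scanner: score suspicious traits observed in a file and
# trigger an alarm once the total crosses a tolerance threshold.
# Trait names and weights are invented for illustration.

SUSPICIOUS_TRAITS = {
    "writes_to_system_dir": 3,
    "disables_updates": 4,
    "self_replicates": 5,
    "reads_user_files": 1,
}
THRESHOLD = 6  # the "threshold of tolerance" from the article

def scan(observed_traits):
    """Return an action ('quarantine' or 'allow') and the heuristic score."""
    score = sum(SUSPICIOUS_TRAITS.get(t, 0) for t in observed_traits)
    action = "quarantine" if score >= THRESHOLD else "allow"
    return action, score

# A file that self-replicates and disables updates trips the alarm even
# if its exact code has never been flagged before.
verdict = scan(["self_replicates", "disables_updates"])
```

Real engines weigh hundreds of behavioral and structural signals, but the decision structure, accumulate evidence and act past a threshold, is the same.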

Applying AI to cybersecurity solutions is taking things up a notch. Without it, the option of having the software learn on its own by observing would not be possible.

Imagine having an entity working in the background that knows you so well it can predict your every move. It might pick up on slight nuances: the way you move your mouse, the parts of the web you browse frequently, even the order of the applications you launch upon logging in.

Without having to introduce yourself, the AI would get to know you and your habits pretty well. Thus, it would form a digital fingerprint of you. It sounds scary, but it could come in handy. For instance, it could raise the alarm if an unauthorized individual ever gets access to your PC.

Of course, observing your behavior is not the end of what employment of AI and machine learning can do. Why not do the same thing for computer processes?

Imagine having to monitor which programs are running in the background yourself, tracking how many resources they consume all day, every day, by hand. It doesn't sound enjoyable now, does it? But it's the work AI excels at.

Without lifting a finger, you'd have a powerful watchdog that starts barking as soon as something is out of the ordinary. For instance, it could alert you about malicious operating system behaviors, so you would know right away about crypto-mining malware or other threats affecting your computer.

Smart malware designers make it so that your system's CPU usage goes off the charts only when you're not using the PC. There's no way to spot such a thing while you're away from the keyboard, unless you have an AI-powered cybersecurity solution tracking it all for you 24/7.
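That idle-time trick is straightforward to detect once you correlate two signals: CPU load and time since the last user input. A minimal sketch, with invented sample data and thresholds chosen purely for illustration:

```python
# Sketch of the idle-time heuristic: crypto-mining malware often spikes
# CPU only while the user is away, so flag samples where the CPU is hot
# AND the machine has been idle for a while. Sample data is invented.

def flag_idle_spikes(samples, cpu_threshold=80, idle_threshold_s=300):
    """samples: list of (cpu_percent, seconds_since_last_user_input).

    Returns the indices of samples that look like idle-time abuse.
    """
    return [
        i for i, (cpu, idle) in enumerate(samples)
        if cpu >= cpu_threshold and idle >= idle_threshold_s
    ]

# High CPU while the user is active (gaming, rendering) is normal;
# high CPU after 15-30 minutes of no input is the suspicious pattern.
samples = [(12, 5), (95, 10), (97, 900), (99, 1800), (8, 30)]
suspicious = flag_idle_spikes(samples)
```

A monitoring agent sampling these two values every few seconds can raise the alarm precisely during the window the malware is designed to hide in.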

Webmasters keep trying to fend off bot traffic and automated scripts. These are used for automatic data scraping and similar activities. For instance, someone could write a script to harvest every bit of contact details on the website. They can then send unsolicited offers to all those contacts. Even when they dont scrape contacts, no one wants bot traffic because it consumes valuable server resources and slows everything down for legitimate browsers. Thus, it harms the user experience.

The simple solution is to block a range of IP addresses. But by using a VPN server or a proxy, a script can get around the obstacle. Now let's introduce some AI into the equation. By observing every visitor's activity, it would be able to recognize repetitive behavior, associate it with the IP address that's currently browsing, and flag it. Sure, a script may discard an IP address and try a new one, but the fingerprint left by its activities would remain, since it's pattern-based. In the end, the new IP could be flagged much faster by automated observation.
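The key property is that the fingerprint is computed from the behavior, not the IP, so rotating addresses doesn't help the script. A toy sketch of such a behavioral fingerprint (the request paths and timing scheme are invented for illustration):

```python
import hashlib

# Sketch of IP-independent bot detection: fingerprint the *pattern* of
# requests (paths plus rounded inter-request delays) rather than the
# source IP. A script that rotates IPs but repeats the same crawl keeps
# producing the same fingerprint. Paths and delays here are invented.

def behavior_fingerprint(requests):
    """requests: list of (path, seconds_since_previous_request)."""
    pattern = "|".join(f"{path}@{round(delay, 1)}" for path, delay in requests)
    return hashlib.sha256(pattern.encode()).hexdigest()[:12]

# A scraper hammering the contacts pages at machine-regular intervals:
crawl = [
    ("/contacts?page=1", 0.0),
    ("/contacts?page=2", 0.5),
    ("/contacts?page=3", 0.5),
]
fingerprint = behavior_fingerprint(crawl)
```

When the same fingerprint reappears from a fresh IP, the new address can be flagged immediately instead of waiting for it to misbehave on its own.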

Since their inception, AI and machine learning have changed the world of cybersecurity forever, and as time goes on, they will keep getting more refined. It's only a question of when they will become your cybersecurity watchdog, tailored to your needs.


Jobs of the Future: The Hottest Areas of Tech Education Dallas Innovates – dallasinnovates.com

Dallas-Fort Worth has more than 40 higher learning institutions, each with programs that incorporate on-the-job training and monitor the pulse of our rapidly changing technological landscape.

While North Texas houses tons of major corporations, another factor that makes it so formidable is its hub of learning establishments, which have long called the area home. With so many local universities, North Texas has fostered a collection of pioneering programs in high-level facilities.

These programs create an employment pipeline that funnels the best and brightest to top companies across the state, and beyond. Here are a few of them.

At the University of Texas at Dallas, the Center for Applied AI and Machine Learning, within the Department of Computer Science, focuses on applying artificial intelligence and machine learning to create viable industry solutions and educate the next generation of scientists. "Upon graduation, students often go into finance, logistics, healthcare, or software security, and companies where they end up might include Amazon, Facebook, Google, Microsoft, AT&T, JPMorgan Chase, Samsung, Dell, and many others," says Sriraam Natarajan, associate professor and director of the Center for Machine Learning.

While the Center for Applied AI and Machine Learning functions as a service arm that provides opportunities for students, the Center for Machine Learning functions as a research arm, facilitating faculty members' development of algorithms while supporting educational activity and community outreach.

"We have a great faculty, and on top of that, we're expanding the program," Natarajan says. "We're looking at the most recent trends in machine learning and aligning our courses so that they match what the industry needs."

The Department of Information Science at UNT, [emailprotected], and the Department of Computer Science & Engineering at UTA also offer courses, specializations, and concentrations in machine learning and artificial intelligence that help train students for cutting-edge jobs including software engineering, data engineering, application development, and programming.

Software engineering is one field that opens doors in nearly every industry across the globe.

"Every company is a tech company," says Duane Dankesreiter, Senior Vice President of Research and Innovation for the Dallas Regional Chamber. "You may be a big bank, but you need data analytics and software engineering to succeed."

At the University of Texas at Arlington, software engineering students collaborate with local companies to hone their skills for the future while, at the University of North Texas, students test their mettle by solving practical problems. The Erik Jonsson School of Engineering and Computer Science at the University of Texas at Dallas is ranked eighth in the country for software engineering research, and boasts cutting-edge forays into semiconductor design, wireless networking, organic electronics, and medical imaging. The schools internship and cooperative education program places 12,000 students at local tech companies annually.

With the modern world inextricably linked to technology, information security will only become more important as the future races forward, and these concerns are top-of-mind for everyone from corporate leaders to everyday individuals. The Cyber Security Research and Education Institute at the University of Texas at Dallas is poised to address these concerns by conducting advanced cybersecurity research and providing inclusive education and training to enable the next generation of cybersecurity professionals to respond to the cyber threats of tomorrow. Additionally, faculty at Texas A&M University-Commerce are working to solve issues and concerns that revolve around resilience, risk awareness, and cyber-physical security.

Art and tech overlap more with each passing day. SMU's Simmons School of Education and Human Development utilizes augmented reality and virtual reality in its teaching projects. The University of Texas at Dallas houses one of the few motion capture and virtual reality laboratories in the country, with applications ranging from gaming to military training scenarios to education and medical research. At UTD's ATEC School, students incorporate AR and VR into their capstone projects, like the fashion photographer who created an interactive magazine: the photos were digitally coated, and readers could use their phones to access video footage of the fashion models discussing their #MeToo moments.

"Augmented reality is emerging as a new and popular media form," Balsamo says. "It involves new technologies, so our faculty and students are investigating how we can use AR in interesting ways that help us as human beings."

A pressing need for medical advancements in various disciplines is an expected side effect of a climbing population and across-the-board increases in life expectancy. The University of Texas Southwestern Medical Center conducts research across a variety of fields, including cancer, heart disease, and neuroscience. With an award-winning staff, more than 200 laboratories, and annual funding of around $470 million, the Center trains around 3,600 health professionals every year.

In the Biology Department at Texas Woman's University, researchers conduct pioneering investigations into pain management, and at the University of North Texas's BioDiscovery Institute, researchers work with bio-based materials to discover their potential applications in construction, transportation, and healthcare. The UNT Health Science Center contains six schools that tackle forward-looking disciplines, like forensic genetics and Alzheimer's research. Notable is UNTHSC's School of Medicine, a joint collaboration with Texas Christian University that aims to create empathetic, globally conscious medical leaders.

A version of this story was originally published in Dallas Innovates: The [Tech] Talent Issue.

Dallas Innovates: The [Tech] Talent Issue, a special edition of the Dallas Innovates Magazine, looks at how companies in Dallas-Fort Worth are attracting and retaining the best talent. Startups, corporates, nonprofits, and organizations work hard to create a strong culture, promote diversity, and implement training programs that can help achieve success.



Learning Automated Trading Can Give You a Major Investing Advantage – The Advocate


Technology has changed everything, including the way people invest. There is always risk inherent in investing, but fin-tech like quantitative and algorithmic trading can make life a little easier on investors who have the technical expertise to gain a competitive advantage. Whether you're a regular investor or just starting out, you owe it to yourself to learn some of the fin-tech that's changing the industry, and the QuantInsti: Quantitative Trading for Beginners Bundle can help.

In this seven-course bundle, you'll get a comprehensive fin-tech education. You'll start with an introduction to algorithmic trading, that is, programming a computer to take certain trading actions in response to market data. From there, you'll learn how to use machine learning tools in Python to automate your trading to limit your losses and maximize your gains. You'll even get access to an Interactive Brokers platform to practice automating your trading and learn momentum trading skills for forex markets. By the end of the training, you'll be fully ready to trade on your own or ace a quant interview to work for someone else.
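To make "trading actions in response to market data" concrete, here is a toy version of one of the most common starter strategies, a moving-average crossover. The prices and window sizes are invented for illustration; this is a sketch of the idea, not investment advice or course material.

```python
# Toy moving-average crossover rule: emit "buy" when the short-term
# average of recent prices rises above the long-term average (upward
# momentum), "sell" on the reverse cross, and "hold" otherwise.
# Window sizes and prices are illustrative only.

def sma(prices, window):
    """Simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short_w=3, long_w=5):
    """Return 'buy', 'sell', or 'hold' for the latest price history."""
    if len(prices) < long_w:
        return "hold"  # not enough history yet
    short_avg, long_avg = sma(prices, short_w), sma(prices, long_w)
    if short_avg > long_avg:
        return "buy"
    if short_avg < long_avg:
        return "sell"
    return "hold"

# Steadily rising prices: recent (short) average exceeds the long average.
action = signal([1, 2, 3, 4, 5])
```

In a real system this function would run on each new market tick and its output would be routed to a broker API, with position sizing and risk limits layered on top.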

Start investing like a modern genius. Sold separately, the courses of QuantInsti: Quantitative Trading for Beginners Bundle would go for over $500, but you can get them all for just $49 today.



Databricks opens major engineering centre in Toronto why that’s a big deal for Canada – IT World Canada

In today's world, where the competition for talent is fierce, it says a lot when your country is selected for a major engineering centre. It says even more when the company opening it is a global leader in bringing the power of AI and machine learning to the enterprise. As such, hearing about Databricks coming to Canada sparked my interest to learn more.

Databricks is leading the charge for organizations to derive value from AI and machine learning, and it is one of the fastest-growing SaaS companies in the world today. The next decade of innovation will combine the technology domains of cloud, data, and AI, and Databricks sits at the intersection of all three.

Databricks was founded in 2013 and has thousands of global customers, including Comcast, Shell, HP, Expedia, and Regeneron, among many others across virtually every industry. Databricks is currently valued at over $6B, with funding from leading investors like Andreessen Horowitz and NEA. To help bring the power of AI to the enterprise, Databricks also has hundreds of global partners, including Microsoft, Amazon, Tableau, Informatica, Capgemini, and Booz Allen Hamilton.

Interestingly, you could say that Canada is embedded in the DNA of Databricks, as co-founder and Chief Architect Reynold Xin is a University of Toronto alum, with a BASc in Engineering from U of T and a Ph.D. from the University of California, Berkeley. Additionally, co-founder and chief technologist Matei Zaharia grew up in Toronto, went to the University of Waterloo, and holds a Ph.D. in Computer Science from the University of California, Berkeley. I connected with Reynold to gain further insight into the company, the Canada decision, and what the technical vision of the future may hold for AI and machine learning enabling organizations to make data-driven decisions, from improved health outcomes to superior operational efficiency.

Brian Clendenin: For those that may not know, what is Databricks?

Reynold Xin: Databricks is a 6-year-old technology startup based in San Francisco. Our mission is to help data teams solve the world's toughest problems, from security threat detection to cancer drug development. We do this by building and running the world's best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.

The founding team were the original creators of Apache Spark. We worked on research problems in big data and machine learning at UC Berkeley. As part of that, we had a very close collaborative relationship with Silicon Valley and saw some of the earlier use cases and challenges with data. We created Databricks with the belief that data has the potential to help solve some of the world's toughest problems.

Fast forward six years, the company has evolved into a global organization with over 1000 employees and thousands of organizations entrust us with their most critical data infrastructure. Last year, we announced a $400 million Series F round of funding which valued the company at $6.2 billion USD.

Brian: Why select Canada to open a global engineering centre?

Reynold: Our secret sauce is the people at Databricks. We want to find the most talented and motivated people and create success collectively. We started in the San Francisco Bay Area, which has the highest concentration of software engineers. But the demand for our platform is so large that we need to grow the team substantially.

As part of our quest for talent, we opened our European Development Center in Amsterdam three years ago. The Amsterdam office has become an integral part of the Databricks innovation factory; they have shipped some of the highest-impact features that have made our customers' lives so much better.

Earlier this year, we decided it's time to repeat the success we had seen with Amsterdam, and set out to find our third engineering hub. This time, we started with the following criteria:

It wasn't that difficult to narrow it down to Toronto, especially considering two of the founders have ties to the city. Matei grew up in Toronto, and I went to college at U of T.

Brian: How do you envision Canadians will contribute to Databricks innovation and market leadership?

Reynold: Throughout modern history, Canadians have played a critical part in the invention of new technologies, from medicine to, more recently, information technology. But at the same time, there's also a large brain drain of Canadians going south to the United States, often for better pay or better work.

We want to create an awesome environment in Toronto so the most talented engineers can work on cutting-edge technologies that have massive real-life impacts. They should wake up every day eager to come into work, knowing that the technologies they are building have contributed to solving fundamental societal issues such as reducing traffic congestion or curing cancer.

It is what they will be building that will define the next decade for Databricks, as part of our goal to enable every organization to leverage data and solve the toughest problems.

In Amsterdam, in addition to hiring a lot locally, we've also attracted some of the best engineers from other parts of the world and convinced them to move to the Netherlands. I think we will be able to help Toronto attract this calibre of people as well.

Brian: You've mentioned that Databricks is at the intersection of cloud computing, big data, and machine learning. Will these technology domains be the big drivers of innovation over the next decade?

Reynold: Absolutely, and Databricks is uniquely positioned at the intersection of these 3 megatrends. When we first started the company, we decided we wanted to build a cloud data platform that has diverse capabilities including machine learning. Most companies back then, and even now, are focusing on on-prem shrinkwrap software and on data warehousing, without any capabilities to do machine learning. Many investors we talked to were very skeptical about our approach: although big data was already big, the concept of cloud computing and machine learning was nascent and the market was small.

In 2020, it's clear all of them took off and became megatrends. Cloud computing enables the rapid delivery of software as a service and compute resources on demand. This can create massive cost savings for IT infrastructure, but the real reason I'm super excited about it is that it could shorten the time-to-market for new applications our customers are developing from years to days.

As you know, the field of machine learning isn't new, but what's completely new is the abundance of data available at our fingertips to train and apply state-of-the-art models. These models in return can help considerably enhance customer experiences and products, and help drive positive business outcomes. However, without computing power and the ability to scale, processing big data or training machine learning models on big data becomes extremely challenging.

So it truly is the combination of cloud, big data, and machine learning technologies that will drive massive innovations over the next decade. And that's what we have been focusing on.

Brian: What is the promise of AI and machine learning in the enterprise?

Reynold: The promise of AI in the enterprise is massive. For the past three decades, data warehouses have been a standard component of any enterprise IT architecture. They allow enterprises to look into the past and understand how their businesses are doing. That's obviously tremendously important and is phase 1 of the revolution.

We are on the verge of starting phase 2 with AI: looking into (predicting) the future.

Why is this important? Imagine what enterprises could do if they had a crystal ball into the future. To give you some examples: we have been working with Bechtel to reinvent the construction industry, leveraging AI to sequence the complex dependency graph in billion-dollar construction projects. We've worked with Regeneron to accelerate drug discovery, and with Quby to help homeowners reduce energy consumption.

However, few organizations have succeeded so far due to challenges like infrastructure limitations, poor data quality, or difficulty hiring a qualified workforce in that space. We believe our technology can uniquely help solve many of the technical challenges, and we continue to add groundbreaking innovation to the platform based on customer needs. We partner with hundreds of ISVs and technology providers to allow customers to leverage their investments and, for example, connect their existing infrastructure to the Databricks platform. In addition, we have scaled and continue to scale as an organization, and our customer success and support organizations work very closely with thousands of customers worldwide to help their data teams innovate faster.

Brian: What type of software engineering talent is optimal for Databricks?

Reynold: We are hiring software engineers from all subareas of computer science: cloud infrastructure, databases, distributed systems, developer tooling, and machine learning. Our engineers are recognized by their peers outside Databricks as top engineers, but at the same time they are extremely collaborative and customer-obsessed. That means they tend to care far more about the impact of what they have created on our customers than about the creation process itself. We also emphasize "own it" as a cultural principle. People are here on a mission, and they are willing to do whatever it takes to drive projects end to end. When something is not going well, they don't spend energy blaming somebody else; they focus on finding a solution.

Brian: What do you find most exciting about the future for Databricks?

Reynold: Of course, one of the most exciting parts is the growth of the company. We have become one of the fastest-growing SaaS companies ever created, and it will be terrific to see the next phases of that growth.

What I find even more exciting than the growth itself is that I wake up every day learning about new use cases our platform has enabled for our customers. We already discussed some very interesting ones that have created a large impact, but I believe the best is yet to come. Perhaps one day we will receive an email from a major pharmaceutical company or a university research lab saying that data analysis and machine learning done on our platform led to a new drug that cures cancer. We are really lucky that we get to solve intellectually challenging technical problems every day, and that those solutions are helping create a better world.

Excerpt from:
Databricks opens major engineering centre in Toronto why that's a big deal for Canada - IT World Canada

Machine Vision is Key to Industry 4.0 and IoT – ReadWrite

Machine vision joins machine learning in a set of tools that gives consumer- and commercial-level hardware unprecedented abilities to observe and interpret its environment. In an industrial setting, these technologies, plus automation and higher-speed networking, add up to a new industrial revolution: Industry 4.0. They also offer brand-new ways to conduct low-waste, high-efficiency industrial activities.

Machine vision affects manufacturing, drilling, and mining. Further benefits are found in freight and supply chain management, quality assurance, material handling, security, and a variety of other processes and verticals.

Machine vision is going to be everywhere before long, adding a critical layer of intelligence to Internet of Things buildouts in the industrial world. Here's a look at how companies are already putting it to work.

Machine vision is a set of technologies that gives machines greater awareness of their surroundings. It facilitates higher-order image recognition and decision-making based on that awareness.

To take advantage of machine vision, a piece of industrial equipment uses high-fidelity cameras to capture digital images of its environment or of a workpiece. The images might be captured aboard an automated guided vehicle (AGV) or at a robotic inspection station. From there, machine vision applies sophisticated pattern recognition algorithms to make a judgment about the item's position, identity, or condition.
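To make the pattern-recognition step concrete, here is a deliberately toy sketch: it slides a small reference template over a grayscale "image" (a 2-D list of brightness values) and reports where the match is strongest. All names and data are hypothetical; real machine vision systems use far more sophisticated algorithms operating on camera input.

```python
def match_score(image, template, top, left):
    """Sum of squared differences between the template and an image patch."""
    return sum(
        (image[top + i][left + j] - template[i][j]) ** 2
        for i in range(len(template))
        for j in range(len(template[0]))
    )

def find_template(image, template):
    """Return the (row, col) of the best-matching patch position."""
    th, tw = len(template), len(template[0])
    positions = [
        (r, c)
        for r in range(len(image) - th + 1)
        for c in range(len(image[0]) - tw + 1)
    ]
    return min(positions, key=lambda p: match_score(image, template, *p))

# A 4x5 "image" with a bright 2x2 feature at row 1, column 1.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 9], [9, 9]]
print(find_template(image, template))  # -> (1, 1)
```

A production system would do this with optimized correlation routines or a trained neural network, but the underlying idea — score candidate positions and pick the best — is the same.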

Several lighting sources are common in machine vision applications, including LEDs, quartz halogen, metal halide, xenon, and traditional fluorescent lighting. Lighting matters: if part of a barcode or workpiece is shadowed, the system might report an error when there isn't one, or vice versa.

Machine vision combines sophisticated hardware and software to allow machines to observe and react to outside stimuli in new and beneficial ways.

The proliferation of Industrial Internet of Things (IIoT) devices marks an important moment in technological advancement. IIoT gives businesses unprecedented visibility of their operations from top to bottom. Networked sensors and cloud-based enterprise and resource planning hubs provide two-way data mobility between local and remote assets, as well as business partners.

The monitored asset can be something as small as a mechanical piston or bearing, or as large as a fleet of trucks; with the right IoT hardware and software, either can yield valuable operational data. Businesses can have their eyes everywhere, even when they're strapped for resources or labor.

Where does machine vision fit into all this? Machine vision makes existing IoT assets even more powerful and better able to deliver value and efficiency. We can expect it to create some brand-new opportunities.

Machine vision makes sensors throughout the IoT even more powerful and useful. Instead of providing raw data, sensors deliver a level of interpretation and abstraction that can be used in decision-making or further automation.

Machine vision may also help reduce the bandwidth requirements of large-scale IoT buildouts. Rather than capturing images and data at the source and sending everything to servers for analysis, machine vision typically performs its analysis at the source of the data. Modern industry generates millions of data points, but thanks to machine vision and edge computing, a great deal of it can yield actionable insights without ever being transmitted to a secondary location.
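The bandwidth saving comes from a simple pattern: analyze locally, transmit only what matters. The sketch below illustrates the idea with invented numbers — per-frame "defect confidence" scores from a hypothetical on-device model, with only likely defects forwarded upstream.

```python
# Hypothetical cutoff: frames scoring at or above this are worth sending.
ANOMALY_THRESHOLD = 0.8

def should_transmit(frame_score: float) -> bool:
    """Keep routine frames local; forward only likely defects."""
    return frame_score >= ANOMALY_THRESHOLD

# Simulated per-frame defect-confidence scores from an edge model.
scores = [0.05, 0.12, 0.91, 0.07, 0.85, 0.02]

to_send = [s for s in scores if should_transmit(s)]
print(f"transmitting {len(to_send)} of {len(scores)} frames")
# -> transmitting 2 of 6 frames
```

In this toy run, two-thirds fewer than a third of the frames leave the device; at industrial scale, that kind of edge-side filtering is what keeps network and storage costs manageable.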

Machine vision complements IoT automation technologies extremely well. Robotic inspection stations can work more quickly and accurately than human QA employees, and they immediately surface relevant data for decision-makers when defects and exceptions are detected.

Guidance systems built with machine vision give robots and cobots greater autonomy and pathfinding abilities, and help them work faster and more safely alongside human workers. In warehouses and other settings with a high risk of error, machine vision helps robotic order pickers improve response time and limit fulfillment defects that result in lost business.

Today's and tomorrow's economy requires companies and industries that operate while wasting far less time, material, and labor. Machine vision will continue to make drones, material handling equipment, unmanned vehicles and pallet trucks, manufacturing lines, and inspection stations better able to exchange detailed and valuable data with the rest of the network.

In a factory setting, it means machines and people working in better harmony with fewer bottlenecks, overruns, and other disruptions.

When you think about each of the steps involved in a typical industrial process, it's not hard to see each point where machine vision can improve operations.

To manufacture a single automotive part, humans and machines collaborate to source raw materials, appraise their quality, transport them to a plant for processing, and move the items through the facility at each manufacturing stage. Ultimately, they see it successfully through the QA process and then out the door again, where at least one last leg of its journey awaits. At some later time, the retailer or end-user receives it.

Whether the product is at rest, in transit, or not yet assembled, machine vision provides a way to automate its handling. It improves efficiency in every department, from assembly onward, and maintains higher and more consistent quality levels.

Some applications are as simple as placing a line on a warehouse floor for an unmanned vehicle to follow safely. Other machine vision tools are even more sophisticated, although even the simplest examples can be game-changers.

Some of the most exciting examples of machine vision in the industrial world involve tasks once thought difficult or impossible to outsource to robots. As mentioned, picking from bins in warehouses is a process that's inherently risky when it comes to errors. Mistakes in fulfillment cost goodwill and customers.

There are already nearly 100% autonomous order-picking robots available today, which can navigate safely, inspect parts and products in the bin, make the right pick using a manipulator arm, and transport the pick to a staging or packaging area.

Ultimately, this means companies are at far less risk of shipping damaged goods or incorrect SKUs that look similar to, but don't quite match, the one the customer ordered.

In some modern manufacturing settings, it can help employers automate and improve results from the QA process, even without sacrificing human jobs. Instead, automated inspection stations tackle this high-priority work while employees learn more cognitively demanding skills.

Cobots will likely achieve a 34% share of all robotics sales by 2025. This is due in large part to improvements in machine vision and the drive to eliminate as much inefficiency, inaccuracy, and waste from modern industry as possible.

Expect machine vision to continue to evolve in the coming years and contribute further to Industry 4.0, which many call the Fourth Industrial Revolution. Eyes are already trained on newer, lower-cost products featuring embedded and board-level image processing with machine vision capabilities.

Machine vision capabilities will lead to even more widespread adoption of the IoT, and to new ways for businesses to capitalize on digital intelligence.

Featured Image Credit: HAHN Group, CC BY-SA

Megan Ray Nichols is a freelance technical writer and blogger. She enjoys writing easy to understand science and technology articles on her blog, Schooled By Science. When she isn't writing, Megan enjoys watching movies and hiking with friends.

Go here to see the original:
Machine Vision is Key to Industry 4.0 and IoT - ReadWrite

Top Machine Learning Services in the Cloud – Datamation

Machine Learning services in the cloud are a critical area of the modern computing landscape, providing a way for organizations to better analyze data and derive new insights. Accessing these services via the cloud tends to be efficient in terms of cost and staff hours.

Machine Learning (often abbreviated as ML) is a subset of Artificial Intelligence (AI) and attempts to 'learn' from data sets in several different ways, including both supervised and unsupervised learning. There are many different technologies that can be used for machine learning, with a variety of commercial tools as well as open source frameworks.

While organizations can choose to deploy machine learning frameworks on premises, doing so is typically a complex and resource-intensive exercise. Machine Learning benefits from specialized hardware, including inference chips and optimized GPUs, and the frameworks themselves can be challenging to deploy and configure properly. This complexity has led to the rise of Machine Learning services in the cloud, which provide the right hardware and optimally configured software to let organizations get started with Machine Learning easily.

There are several key features that are part of most machine learning cloud services.

AutoML - The automated Machine Learning feature automatically helps to build the right model.

Machine Learning Studio - The studio concept provides a developer environment where machine learning models and data modelling scenarios can be built.

Open source framework support - The ability to support an existing framework such as TensorFlow, MXNet or Caffe is important, as it enables model portability.
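Conceptually, what an AutoML feature does is fit several candidate models and keep whichever performs best on held-out data. The sketch below illustrates that selection loop with two deliberately trivial stand-in models (a constant-mean predictor and a one-feature least-squares line); real AutoML services search far larger model and hyperparameter spaces.

```python
def mean_model(xs, ys):
    """Baseline: always predict the mean of the training targets."""
    m = sum(ys) / len(ys)
    return lambda x: m

def linear_model(xs, ys):
    """Least-squares slope and intercept for a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def auto_select(candidates, train, valid):
    """Fit each candidate on train data; return the one with the
    lowest squared error on the validation set."""
    xs, ys = train
    vx, vy = valid

    def val_error(fit):
        model = fit(xs, ys)
        return sum((model(x) - y) ** 2 for x, y in zip(vx, vy))

    return min(candidates, key=val_error)

train = ([1, 2, 3, 4], [2, 4, 6, 8])  # underlying relation: y = 2x
valid = ([5, 6], [10, 12])
best = auto_select([mean_model, linear_model], train, valid)
print(best.__name__)  # -> linear_model
```

The linear model generalizes to the validation points while the mean baseline does not, so the selection loop picks it — the same train/validate/compare logic that cloud AutoML services automate at scale.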

When evaluating the different options for machine learning services in the cloud, consider the following criteria:

In this Datamation top companies list, we spotlight the vendors that offer the top machine learning services in the cloud.

Value proposition for potential buyers: Alibaba is a great option for users that have machine learning needs where data sets reside around the world and especially in Asia, where Alibaba is a leading cloud service.

Value proposition for potential buyers: Amazon Web Services has the broadest array of machine learning services in the cloud today, leading with its SageMaker portfolio that includes capabilities for building, training and deploying models in the cloud.

Value proposition for potential buyers: Google's set of Machine Learning services are also expansive and growing, with both generic as well as purpose built services for specific use-cases.

Value proposition for potential buyers: IBM Watson Machine Learning enables users to run models on any cloud, or solely on the IBM Cloud.

Value proposition for potential buyers: For organizations that have already bought into the Microsoft Azure cloud, Azure Machine Learning is a good fit, providing a cloud environment to train, deploy, and manage machine learning models.

Value proposition for potential buyers: Oracle Machine Learning is a useful tool for organizations already using Oracle Cloud applications, helping them build data mining notebooks.

Value proposition for potential buyers: Salesforce Einstein is a purpose built machine learning platform that is tightly integrated with the Salesforce platform.

Read the original here:
Top Machine Learning Services in the Cloud - Datamation

In Coronavirus Response, AI is Becoming a Useful Tool in a Global Outbreak – Machine Learning Times – machine learning & data science news – The…

By: Casey Ross, National Technology Correspondent, StatNews.com

Surveillance data collected by healthmap.org show confirmed cases of the new coronavirus in China.

Artificial intelligence is not going to stop the new coronavirus or replace the role of expert epidemiologists. But for the first time in a global outbreak, it is becoming a useful tool in efforts to monitor and respond to the crisis, according to health data specialists.

In prior outbreaks, AI offered limited value, because of a shortage of data needed to provide updates quickly. But in recent days, millions of posts about coronavirus on social media and news sites are allowing algorithms to generate near-real-time information for public health officials tracking its spread.

"The field has evolved dramatically," said John Brownstein, a computational epidemiologist at Boston Children's Hospital who operates a public health surveillance site called healthmap.org that uses AI to analyze data from government reports, social media, news sites, and other sources.

"During SARS, there was not a huge amount of information coming out of China," he said, referring to a 2003 outbreak of an earlier coronavirus that emerged from China, infecting more than 8,000 people and killing nearly 800. "Now, we're constantly mining news and social media."

Brownstein stressed that his AI is not meant to replace the information-gathering work of public health leaders, but to supplement their efforts by compiling and filtering information to help them make decisions in rapidly changing situations.

"We use machine learning to scrape all the information, classify it, tag it, and filter it, and then that information gets pushed to our colleagues at WHO who are looking at this information all day and making assessments," Brownstein said. "There is still the challenge of parsing whether some of that information is meaningful or not."

These AI surveillance tools have been available in public health for more than a decade, but the recent advances in machine learning, combined with greater data availability, are making them much more powerful. They are also enabling uses that stretch beyond baseline surveillance, to help officials more accurately predict how far and how fast outbreaks will spread, and which types of people are most likely to be affected.

"Machine learning is very good at identifying patterns in the data, such as risk factors that might identify zip codes or cohorts of people that are connected to the virus," said Don Woodlock, a vice president at InterSystems, a global vendor of electronic health records that is helping providers in China analyze data on coronavirus patients.


View original post here:
In Coronavirus Response, AI is Becoming a Useful Tool in a Global Outbreak - Machine Learning Times - machine learning & data science news - The...