AI Weekly: Animal Crossing, ICLR, and the future of research conferences online – VentureBeat

This week, the world's machine learning community got a good look at what digital research conferences will look like in a post-coronavirus future, as ICLR kicked off what's believed to be the first major AI research conference held entirely online. The conference was initially scheduled to be held in person in Addis Ababa, Ethiopia. The Computer Vision and Pattern Recognition (CVPR) conference takes place next month and will be partially or entirely digital, while ICML, one of the biggest annual AI research conferences in the world, will be held entirely online in July.

Also this week: An NLP researcher announced plans to host the first-ever AI workshop inside Nintendo's Animal Crossing: New Horizons this July. Artie lead scientist Josh Eisenberg told VentureBeat that more than 200 people have already signed up to watch the day-long event. Animal Crossing: New Horizons launched March 20 and is now the best-selling game in the U.S. and the third-best launch of any game in Nintendo history.

One major area of focus for each event is figuring out how to create social connections. That's part of what motivated an Animal Crossing AI workshop.

"I was talking to my fiancée about social interactions and quarantine, and some of our deepest interactions with other people over the past couple of months have been via video games, specifically with our friends in Animal Crossing. So I wanted to apply that to work and research to see if we can combine an academic-style workshop with the social interactions of Animal Crossing," Eisenberg said.

By contrast, the International Conference on Learning Representations (ICLR) may be the largest AI conference to take place entirely online. Each of the 680 papers was presented by its authors via a prerecorded 5- or 15-minute talk. Every video was accompanied by recommendations for similar papers, something one attendee suggested should become the standard for all machine learning research conferences. There was also a paper search bar and a visualization showing how the papers relate to one another, grouping similar works and making it easier to spot major areas of interest.

Many people are pondering the best ways to host digital events. It's something VentureBeat and other media brands are thinking about a lot internally as well. This week, VB ran its annual GamesBeat Summit event entirely online, and Transform, VB's annual AI conference, will take place online in July.

Three members of the VentureBeat AI team attended ICLR to check out innovative research in GANs like U-GAT-IT, NLP like Reformer, and neurosymbolic AI like Clevrer, as well as workshops on topics like climate change, affordable health care, and machine translation for African languages.

The entire week's worth of ICLR keynote addresses and workshops was prerecorded and available on the first day of the conference, so attendees could binge-watch them or peruse them throughout the week. The advantage of tuning in to a given talk or workshop at its appointed time on the schedule was getting access to live Q&A with speakers and participating in the live chats that accompanied each session.

It appears there's no simple way to recreate the busy floor of a conference poster session today. Instead, each author attended live Q&A sessions with colleagues. Much of the online feedback was positive, but one ICLR attendee told VentureBeat they found participation in some poster sessions lacking. Since ICLR organizers only chose to convert to an all-digital conference last month, poster session attendance may improve once attendees know in advance that they can schedule time to talk about novel work or meet a favorite research author.

By going entirely digital, ICLR more than doubled participation, from 2,700 attendees in 2019 to 5,600 people from nearly 90 countries. There were more than 1,400 speakers and one million page views, and videos were watched more than 100,000 times, organizing committee general chair Sasha Rush wrote in a tweet.

Cutting the expense is another major plus. The Animal Crossing conference will run you the cost of Animal Crossing: New Horizons ($60) and a Nintendo Switch ($299). An ICLR ticket cost $100, down from the $500 price of in-person admission. Add in the cost of a plane ticket to Ethiopia, lodging, and meals, and the physical conference could have cost upwards of $2,000.

But a challenge for digital events going forward, it seems, is finding ways to connect people.

To try to re-create in-person mingling, ICLR held social gatherings inside Zoom to discuss topics ranging from deep generative models to open source tools and risky research. Affinity groups like Latinx in AI and Queer in AI also held digital get-togethers.

A Medium post by a researcher who described these social challenges includes comments from Rush, who said creating a flow at the conference akin to the in-person feel of moving between posters and hallway chats was a challenge. "I don't think we have totally figured that out yet, but it was fun trying to recreate virtual versions of these interactions," he said.

In a blog post in February, ICLR organizer Yoshua Bengio, one of the most cited researchers in the world, urged the machine learning community to begin thinking about and experimenting with how to make conferences work in digital environments. Bengio, who gave a keynote address at ICLR this week and did a live Q&A with Turing Award winner Yann LeCun, first insisted on more digital offerings as a way to cut down on the machine learning community's carbon footprint.

Experiments should begin now, Bengio said, in part to address the challenge of recreating social experiences on par with meeting in person, or at least striking a balance. Some events could combine in-person attendance with digital, while conferences rapidly growing in popularity, like NeurIPS, may host more regional events.

Invited speakers at the Animal Crossing workshop will present their work on one Animal Crossing island, but to inject impromptu meetings into the process, in between presentations attendees will get Dodo codes to visit coffee break islands and chat with about five people. Each island will have a theme so that people with similar interests, like computer vision or NLP, can discuss workshop subject matter or whatever else comes to mind. If 200 people show up, Eisenberg expects to need around 50 islands. He said keeping track of attendee Dodo codes for interest-specific islands may be one of the biggest technical challenges to pulling off an Animal Crossing workshop.

Zoom, Twitch, Google Meet, or another service will be used to provide sound for speakers, as well as a way for people to watch presentations inside Animal Crossing whether or not they have the game.

Austin Parker organized the Deserted Island DevOps workshop, a similar event that took place in Animal Crossing Thursday and attracted thousands of viewers on Twitch. He told VentureBeat that forming a community to do things like organize hallway-track side conversations was easy, but tasks like finding moderators in an extremely open community were a challenge.

Another potential challenge: Widespread Nintendo Switch shortages during the global COVID-19 pandemic.

Eisenberg said the Animal Crossing AI workshop (ACAI) is designed to be accessible to people who don't normally attend conferences and aims to be as fun as it is educational.

"The draw for these conferences is the networking and the chance social interactions, not the raw knowledge. Like, a 20-minute presentation isn't going to teach you everything about an AI paper that someone devoted a year of their life to; that's not the point. You're not there to go back to school, you're there to meet people and to talk and to share ideas in a social manner, so I think the social aspect of these things is the most important," he said.

To be clear: The Animal Crossing AI workshop and ICLR are very different kinds of gatherings. ICLR is organized by a group of machine learning community leaders. The Animal Crossing AI workshop is organized by one guy. But both demonstrate what online AI conferences, and indeed gatherings of other scientific research communities, may look like.

The format for research conferences online is important because the work of scientific communities in machine learning and fields like health care and life sciences can influence which ideas and methods gain traction.

What ICLR and ACAI have in common is that they reflect a collective need to connect and learn.

Nothing replaces being there. The ICLR conference, for example, was originally scheduled to take place in Ethiopia this year, and though African perspectives were put front and center in some conference content, there's no replacing being in that place. Being there also means you can stay in the moment. Staying locked in for a week of online content can be tough.

It may be a long time before researchers have the pleasure of cramming into hotel conference rooms shoulder to shoulder again, but even if a cure for coronavirus arrives tomorrow, experiments in digital conference events, particularly their social aspects, should continue as a way to increase access, lower barriers, and bring more people into the process.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

Read more from the original source:
AI Weekly: Animal Crossing, ICLR, and the future of research conferences online - VentureBeat

Machine Learning Artificial intelligence Market 2020, Thrives the Growth at Impressive CAGR Over Forecast Period 2027 COVID-19 Impact on Global…

The Machine Learning Artificial Intelligence Market Research Report offers an extensive analysis of key drivers, leading market players, key segments, and regions. Besides this, the experts have deeply studied different geographical areas and presented a competitive scenario to assist new entrants, leading market players, and investors in evaluating emerging economies. The insights offered in the report would help market players formulate strategies for the future and gain a strong position in the global market.

Request For Sample Copy of Machine Learning Artificial intelligence Market Research Report

**The sample pages of this report are immediately accessible on demand.**

Our Sample Report Covers:

The report covers present status and future prospects.

The report analyses market trends, size, and forecasts across different geographies.

The report provides an overview of market competition among the top companies.

The report provides a complete analysis of the current and emerging market trends and opportunities.

Top market players profiled in this research study are: AIBrain, Amazon, Anki, CloudMinds, Deepmind, Google, Facebook, IBM, Iris AI, Apple, Luminoso, and Qualcomm.

Furthermore, the report also categorizes the Machine Learning Artificial Intelligence market on the basis of types of products or services, application segments, end-users, regions, and others. Each segment's growth is assessed along with its growth estimation over the forecast period. The report also provides a meticulous study of sales volume, industry size, shares, demand and supply analysis, and value analysis of several firms, together with segmental analysis relating to important geographies.

The report begins with a brief introduction and market overview of the Machine Learning Artificial Intelligence industry, followed by its market scope and size. Next, the report provides an overview of market segmentation such as type, application, and region. The drivers, limitations, and opportunities for the market are also listed, along with current trends and policies in the industry.

The report provides a detailed study of the growth rate of every segment with the help of charts and tables. Furthermore, various regions related to the growth of the market are analyzed in the report. These regions include the USA, Europe, Japan, China, India, South East Asia, Central and South America, the Middle East and Africa, and other regions. Besides this, the research demonstrates the growth trends and upcoming opportunities in every region.

Analysts have revealed that the Machine Learning Artificial Intelligence market has shown several significant developments over the past few years. The report offers sound predictions on market value and volume that can be beneficial for market players, investors, stakeholders, and new entrants seeking detailed insights and a leading position in the market. Additionally, the report offers an in-depth analysis of key market players functioning in the global Machine Learning Artificial Intelligence industry.

The research presents the performance of each player active in the Machine Learning Artificial Intelligence market. It also offers a summary and highlights the current advancements of each player in the market. This piece of data is a great source of study material for investors and stakeholders interested in the market. In addition, the report offers insights on suppliers, buyers, and merchants in the market. Along with this, a comprehensive analysis of consumption, market share, and growth rate of each application is offered for the historical period.

The report clearly shows that the Machine Learning Artificial Intelligence industry has achieved remarkable progress in recent years, with numerous significant developments boosting the growth of the market. This report is prepared based on a detailed assessment of the industry by experts. To conclude, stakeholders, investors, product managers, marketing executives, and other experts in search of factual data on supply, demand, and future predictions would find the report valuable.

Some important key factors included in the report:

A summary of the Machine Learning Artificial Intelligence market's major players, with key counts in terms of end-user demand, restraining elements, revenue, sales, share, and size.

Characteristics of the Machine Learning Artificial Intelligence market, including industry growth and restraining factors, technological advancements, new upcoming growth opportunities, and emerging segments.

Other factors such as Machine Learning Artificial Intelligence market price, demand, supply, profit/loss, and growth factors are broadly discussed in the report.

Machine Learning Artificial Intelligence market size, share, and growth factor analysis for regional and country-level segments.

Market Trends, Drivers, Constraints, Growth Opportunities, Threats, Challenges, Investment Opportunities, and recommendations.

Questions Answered by the Machine Learning Artificial Intelligence Market Report:

Who are the key manufacturers, raw material suppliers, equipment suppliers, end-users, traders, and distributors in the market?

What growth factors are influencing the market's growth?

What are the production processes, major issues, and solutions to mitigate development risk?

What is the contribution from regional manufacturers?

What are the key market segments, market potential, influential trends, and the challenges that the market is facing?

Explore Complete TOC/ Ask for Customization@https://www.reportsandmarkets.com/reports/global-machine-learning-artificial-intelligence-market-2019-by-company-regions-type-and-application-forecast-to-2024?utm_source=latestherald&utm_medium=15

**This report can be customized to meet the client's requirements. Please connect with our sales team to customize your report according to your research needs.**

A Few Points From the Table of Contents:

1 Study Coverage

2 Executive Summary

3 Breakdown Data by Manufacturers

4 Breakdown Data by Type

5 Breakdown Data by Application

6 North America

7 Europe

8 Asia Pacific

9 Central & South America

10 Middle East and Africa

11 Company Profiles

12 Future Forecast

13 Market Opportunities, Challenges, Risks and Influence Factors Analysis

14 Value Chain and Sales Channels Analysis

15 Research Findings and Conclusion

16 Appendix

About us

Market research is the new buzzword in the industry, helping companies understand the market potential of their products, the market players, and growth forecasts. This is where market research companies come into the picture. Reports And Markets is not just another company in this domain; it is part of a veteran group called Algoro Research Consultants Pvt. Ltd. It offers premium progressive statistical surveying, market research reports, and analysis and forecast data for a wide range of sectors, for both government and private agencies all across the world.

Contact Person

Sanjay Jain

Manager Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)

View original post here:
Machine Learning Artificial intelligence Market 2020, Thrives the Growth at Impressive CAGR Over Forecast Period 2027 COVID-19 Impact on Global...

Machine Learning Tutorial for Beginners

What is Machine Learning?

Machine Learning is a system that can learn from examples through self-improvement, without being explicitly coded by a programmer. The breakthrough comes with the idea that a machine can learn from data (i.e., examples) on its own to produce accurate results.

Machine learning combines data with statistical tools to predict an output. This output is then used by businesses to generate actionable insights. Machine learning is closely related to data mining and Bayesian predictive modeling. The machine receives data as input and uses an algorithm to formulate answers.

A typical machine learning task is to provide a recommendation. For those who have a Netflix account, all recommendations of movies or series are based on the user's historical data. Tech companies are using unsupervised learning to improve the user experience with personalized recommendations.

Machine learning is also used for a variety of tasks like fraud detection, predictive maintenance, portfolio optimization, task automation, and so on.

In this basic tutorial, you will learn-

Traditional programming differs significantly from machine learning. In traditional programming, a programmer codes all the rules in consultation with an expert in the industry for which the software is being developed. Each rule is based on a logical foundation; the machine executes an output following the logical statement. When the system grows complex, more rules need to be written, and it can quickly become unsustainable to maintain. A hand-coded rule might look like the sketch below.
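To see the contrast, here is a minimal sketch of the traditional, rule-based approach; the spam-filter framing and the banned phrases are illustrative assumptions, not an example from the tutorial.

```python
# Traditional programming: a human writes every rule by hand.
def is_spam(subject: str) -> bool:
    # Each new kind of spam requires another explicit rule here,
    # which is what quickly becomes unsustainable to maintain.
    banned = ["free money", "act now", "winner"]
    return any(phrase in subject.lower() for phrase in banned)

print(is_spam("You are a WINNER, act now!"))  # True
```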

Machine learning is supposed to overcome this issue. The machine learns how the input and output data are correlated, and it writes a rule. The programmers do not need to write new rules each time there is new data. The algorithms adapt in response to new data and experiences to improve efficacy over time.

Machine learning is the brain where all the learning takes place. The way the machine learns is similar to the human being. Humans learn from experience. The more we know, the more easily we can predict. By analogy, when we face an unknown situation, the likelihood of success is lower than in a known situation. Machines are trained the same way. To make an accurate prediction, the machine sees examples. When we give the machine a similar example, it can figure out the outcome. However, like a human, if it's fed a previously unseen example, the machine has difficulty predicting.

The core objectives of machine learning are learning and inference. First of all, the machine learns through the discovery of patterns. This discovery is made thanks to the data. One crucial part of the data scientist's job is to choose carefully which data to provide to the machine. The list of attributes used to solve a problem is called a feature vector. You can think of a feature vector as a subset of the data that is used to tackle a problem.

The machine uses some fancy algorithms to simplify reality and transform this discovery into a model. The learning stage is therefore used to describe the data and summarize it into a model.

For instance, say the machine is trying to understand the relationship between the wage of an individual and the likelihood of going to a fancy restaurant. It turns out the machine finds a positive relationship between wage and going to a high-end restaurant: this is the model.

When the model is built, it is possible to test how powerful it is on never-seen-before data. The new data are transformed into a feature vector, passed through the model, and turned into a prediction. This is the beautiful part of machine learning: there is no need to update the rules or retrain the model. You can use the previously trained model to make inferences on new data.
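To make this train-then-infer flow concrete, here is a minimal sketch in Python using scikit-learn, applied to the wage-and-restaurant example above; the wages and labels are synthetic assumptions, not a real dataset.

```python
# A minimal sketch of the wage -> fancy-restaurant model described above.
# The data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Learning stage: wages (a one-feature feature vector, in $1,000s) and
# whether that person goes to high-end restaurants (1) or not (0).
wages = np.array([[20], [35], [50], [65], [80], [95], [110], [125]])
goes_out = np.array([0, 0, 0, 1, 1, 1, 1, 1])
model = LogisticRegression().fit(wages, goes_out)

# Inference stage: never-seen-before data, no retraining required.
new_wages = np.array([[40], [100]])
print(model.predict_proba(new_wages)[:, 1])  # probability of going out
```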

The life of Machine Learning programs is straightforward and can be summarized in the following points:

Once the algorithm gets good at drawing the right conclusions, it applies that knowledge to new sets of data.

Machine learning can be grouped into two broad learning tasks: supervised and unsupervised, though there are many other algorithms and variants.

In supervised learning, an algorithm uses training data and feedback from humans to learn the relationship of given inputs to a given output. For instance, a practitioner can use marketing expense and weather forecasts as input data to predict the sales of cans.

You can use supervised learning when the output data is known. The algorithm will predict new data.

There are two categories of supervised learning:

Imagine you want to predict the gender of a customer for a commercial. You would start gathering data on the height, weight, job, salary, purchasing basket, etc. from your customer database. You know the gender of each of your customers; it can only be male or female. The objective of the classifier is to assign a probability of being a male or a female (i.e., the label) based on the information (i.e., the features you have collected). Once the model has learned to recognize male or female, you can use new data to make a prediction. For instance, you just got new information from an unknown customer, and you want to know whether it is a male or female. If the classifier predicts male = 70%, the algorithm is 70% sure that this customer is a male and 30% sure it is a female.

The label can have two or more classes. The above example has only two classes, but if a classifier needs to predict objects, it can have dozens of classes (e.g., glass, table, shoes; each object represents a class).

When the output is a continuous value, the task is a regression. For instance, a financial analyst may need to forecast the value of a stock based on a range of features like equity, previous stock performance, and macroeconomic indexes. The system will be trained to estimate the price of the stocks with the lowest possible error.
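As a contrast with classification, below is a minimal regression sketch; the three features stand in for the equity, past-performance, and macroeconomic inputs mentioned above, and the synthetic data is an assumption for illustration.

```python
# A minimal regression sketch: predict a continuous value (a stock price)
# from numeric features. The synthetic data is purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))  # [equity, past performance, macro index]
y = X @ np.array([3.0, 1.5, -2.0]) + 50 + rng.normal(scale=0.5, size=200)

model = LinearRegression().fit(X[:150], y[:150])   # train on 150 samples
predictions = model.predict(X[150:])               # predict on held-out data
print("mean absolute error:", mean_absolute_error(y[150:], predictions))
```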

In unsupervised learning, an algorithm explores input data without being given an explicit output variable (e.g., explores customer demographic data to identify patterns)

You can use it when you do not know how to classify the data and you want the algorithm to find patterns and classify the data for you.

The main unsupervised learning algorithms, what they do, and their type:

- K-means clustering: Puts data into some number of groups (k), each containing data with similar characteristics (as determined by the model, not in advance by humans). Type: Clustering.

- Gaussian mixture model: A generalization of k-means clustering that provides more flexibility in the size and shape of groups (clusters). Type: Clustering.

- Hierarchical clustering: Splits clusters along a hierarchical tree to form a classification system. Can be used to cluster loyalty-card customers. Type: Clustering.

- Recommender system: Helps to define the relevant data for making a recommendation. Type: Clustering.

- PCA/t-SNE: Mostly used to decrease the dimensionality of the data. The algorithms reduce the number of features to the 3 or 4 vectors with the highest variances. Type: Dimension Reduction.
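To take one concrete row from the list above, here is a minimal k-means sketch in scikit-learn; the synthetic two-dimensional blobs are an assumption for illustration.

```python
# A minimal k-means clustering sketch: group unlabeled 2-D points into k=3
# clusters. The synthetic blobs are purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # unlabeled data
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.cluster_centers_)                 # the learned group centers
print(kmeans.predict(np.array([[0.0, 0.0]])))  # assign a new point to a group
```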

There are plenty of machine learning algorithms. The choice of the algorithm is based on the objective.

In one example, the task is to predict the type of flower among three varieties. The predictions are based on the length and the width of the petal. The picture depicts the results of ten different algorithms. The picture on the top left is the dataset. The data is classified into three categories: red, light blue, and dark blue. There are some groupings. For instance, in the second image, everything in the upper left belongs to the red category, the middle part shows a mixture of uncertainty and light blue, and the bottom corresponds to the dark category. The other images show how different algorithms try to classify the data.

The primary challenge of machine learning is a lack of data or a lack of diversity in the dataset. A machine cannot learn if there is no data available, and a dataset lacking diversity gives the machine a hard time. A machine needs heterogeneity to learn meaningful insights. It is rare for an algorithm to extract information when there are no or few variations. It is recommended to have at least 20 observations per group to help the machine learn. This constraint leads to poor evaluation and prediction.

Augmentation:

Automation:

Finance Industry

Government organization

Healthcare industry

Marketing

Example of application of Machine Learning in Supply Chain

Machine learning gives terrific results for visual pattern recognition, opening up many potential applications in physical inspection and maintenance across the entire supply chain network.

Unsupervised learning can quickly search for comparable patterns in a diverse dataset. In turn, the machine can perform quality inspection throughout the logistics hub, flagging shipments with damage and wear.

For instance, IBM's Watson platform can determine shipping container damage. Watson combines visual and systems-based data to track, report and make recommendations in real-time.

In past years, stock managers relied extensively on primary methods to evaluate and forecast inventory. By combining big data and machine learning, better forecasting techniques have been implemented (an improvement of 20 to 30% over traditional forecasting tools). In terms of sales, this means an increase of 2 to 3% due to the potential reduction in inventory costs.

Example of Machine Learning: Google Car

For example, everybody knows the Google car. The car is covered with lasers on the roof, which tell it where it is relative to the surrounding area. It has radar in the front, which informs the car of the speed and motion of all the cars around it. It uses all of that data to figure out not only how to drive the car but also to predict what potential drivers around the car are going to do. What's impressive is that the car is processing almost a gigabyte of data a second.

Machine learning is the best tool so far to analyze, understand, and identify patterns in data. One of the main ideas behind machine learning is that the computer can be trained to automate tasks that would be exhaustive or impossible for a human being. The clear break from traditional analysis is that machine learning can take decisions with minimal human intervention.

Take the following example: a real estate agent can estimate the price of a house based on his own experience and his knowledge of the market.

A machine can be trained to translate the knowledge of an expert into features. The features are all the characteristics of a house, the neighborhood, the economic environment, etc. that make the price difference. It probably took the expert some years to master the art of estimating the price of a house, and his expertise improves with each sale.

For the machine, it takes millions of data points (i.e., examples) to master this art. At the very beginning of its learning, the machine makes mistakes, somewhat like a junior salesman. Once the machine has seen all the examples, it has enough knowledge to make its estimations, and with incredible accuracy. The machine is also able to correct its mistakes accordingly.

Most big companies have understood the value of machine learning and of holding data. McKinsey has estimated that the value of analytics ranges from $9.5 trillion to $15.4 trillion, of which $5 trillion to $7 trillion can be attributed to the most advanced AI techniques.

Read the original here:
Machine Learning Tutorial for Beginners

Machine Learning on AWS

Amazon SageMaker enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. It removes the complexity that gets in the way of successfully implementing machine learning across use cases and industries, from running models for real-time fraud detection, to virtually analyzing biological impacts of potential drugs, to predicting stolen-base success in baseball.
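As a rough illustration of that build-train-deploy flow, here is a hedged sketch using the SageMaker Python SDK; the script name, IAM role, S3 path, and instance types are placeholder assumptions, not values from the article.

```python
# A minimal sketch of training and deploying a model with the SageMaker
# Python SDK. All names below (script, role, bucket) are hypothetical.
from sagemaker.sklearn.estimator import SKLearn

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical IAM role

estimator = SKLearn(
    entry_point="train.py",        # hypothetical scikit-learn training script
    role=role,
    instance_type="ml.m5.large",
    framework_version="0.23-1",
)
estimator.fit({"train": "s3://my-bucket/train"})  # hypothetical S3 channel

# Deploy the trained model behind a managed real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[0.5, 1.2, 3.4]]))
predictor.delete_endpoint()  # tear down to stop incurring cost
```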

Amazon SageMaker Studio: Experience the first fully integrated development environment (IDE) for machine learning with Amazon SageMaker Studio, where you can perform all ML development steps. You can quickly upload data, create and share new notebooks, train and tune ML models, move back and forth between steps to adjust experiments, debug and compare results, and deploy and monitor ML models, all in a single visual interface, making you much more productive.

Amazon SageMaker Autopilot: Automatically build, train, and tune models with full visibility and control using Amazon SageMaker Autopilot. It is the industry's first automated machine learning capability that gives you complete control over and visibility into how your models were created and what logic was used in creating them.

See more here:
Machine Learning on AWS

How To Verify The Memory Loss Of A Machine Learning Model – Analytics India Magazine

It is a known fact that deep learning models get better with diversity in the data they are fed. For instance, a healthcare use case will take data from several providers, such as patient data, medical histories, workflows of professionals, insurance providers, etc., to ensure such data diversity.

These data points, collected through various interactions of people, are fed into a machine learning model, which sits remotely in a data center spewing predictions without tiring.

However, consider a scenario where one of the providers ceases to offer data to the healthcare project and later requests to delete the provided information. In such a case, does the model remember or forget its learnings from this data?

To explore this, a team from the University of Edinburgh and the Alan Turing Institute started from the assumption that a model has forgotten some data and asked what can be done to verify that claim. In the process, they investigated the challenges and also offered solutions.

The authors of this work wrote that this initiative is the first of its kind and that the only work that comes close is the Membership Inference Attack (MIA), which was also an inspiration for this work.

To verify whether a model has forgotten specific data, the authors propose a Kolmogorov-Smirnov (K-S) distance-based method, which is used to infer whether a model was trained with the query dataset.

Based on this algorithm, the researchers used benchmark datasets such as MNIST, SVHN, and CIFAR-10 to verify the effectiveness of the new method. Later, the method was also tested on the ACDC dataset using the pathology detection component of that challenge.

The MNIST dataset contains 60,000 images of 10 digits with image size 28 x 28. Similar to MNIST, the SVHN dataset has over 600,000 digit images obtained from house numbers in Google Street View images. The image size of SVHN is 32 x 32. Since both datasets are for the task of digit recognition/classification, they were considered to belong to the same domain. CIFAR-10 is used as a dataset to validate the method. CIFAR-10 has 60,000 images (size 32 x 32) of 10 classes of objects, including aeroplane, bird, etc. To train models with the same design, the images of all three datasets are preprocessed to grey-scale and rescaled to size 28 x 28.
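A minimal sketch of that preprocessing step, converting an image to grey-scale and rescaling it to 28 x 28; the [0, 1] normalization at the end is an added assumption, not a detail from the paper.

```python
# A minimal sketch of the preprocessing described above: grey-scale the
# image and rescale it to 28 x 28 so all three datasets share one format.
from PIL import Image
import numpy as np

def preprocess(path: str) -> np.ndarray:
    img = Image.open(path).convert("L")   # "L" mode = 8-bit grey-scale
    img = img.resize((28, 28))
    return np.asarray(img, dtype=np.float32) / 255.0  # assumed normalization
```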

The K-S distance statistics for the output distribution of a target model, the authors said, can be obtained without knowing the weights of the model. Since the model's training data are unknown, a few new models, called shadow models, were trained with the query dataset and another calibration dataset.

Then by comparing the K-S values, one can conclude if the training data contains information from the query dataset or not.
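For intuition only, here is a hedged sketch of that comparison using SciPy's two-sample K-S test; the synthetic outputs and the decision rule are illustrative assumptions, not the paper's exact algorithm.

```python
# A sketch of comparing K-S distances between a black-box target model's
# outputs and outputs of shadow models trained with/without the query data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
target_out = rng.normal(0.8, 0.1, 1000)      # target model outputs (synthetic)
shadow_with = rng.normal(0.8, 0.1, 1000)     # shadow model trained WITH query data
shadow_without = rng.normal(0.5, 0.2, 1000)  # shadow model trained WITHOUT it

d_with = ks_2samp(target_out, shadow_with).statistic
d_without = ks_2samp(target_out, shadow_without).statistic

# A smaller K-S distance to the "trained with" shadow model suggests the
# target model saw (and has not forgotten) the query dataset.
print("likely trained with query data" if d_with < d_without
      else "likely not trained with query data")
```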

Experiments have been done before to check the ownership one has over data in the world of the internet. One such attempt was made by researchers at Stanford, who investigated the algorithmic principles behind efficient data deletion in machine learning.

They found that for many standard ML models, the only way to completely remove an individual's data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. A trade-off between efficiency and privacy arises because algorithms that support efficient deletion need not be private, and algorithms that are private do not have to support efficient deletion.

The aforementioned experiments are attempts to probe and raise new questions in the never-ending debate about the use of AI and privacy. The objective of these works is to investigate how much authority an individual has over specific data, while also helping expose the vulnerabilities within a model when certain data is removed.

Check more about this work here.


Read more:
How To Verify The Memory Loss Of A Machine Learning Model - Analytics India Magazine

Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company – insideBIGDATA

Tecton.ai emerged from stealth and formally launched with its data platform for machine learning. Tecton enables data scientists to turn raw data into production-ready features, the predictive signals that feed machine learning models. Tecton is in private beta with paying customers, including a Fortune 50 company.

Tecton.ai also announced $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia. Both Martin Casado, general partner at Andreessen Horowitz, and Matt Miller, partner at Sequoia, have joined the board.

Tecton.ai founders Mike Del Balso (CEO), Kevin Stumpf (CTO) and Jeremy Hermann (VP of Engineering) worked together at Uber when the company was struggling to build and deploy new machine learning models, so they created Uber's Michelangelo machine learning platform. Michelangelo was instrumental in scaling Uber's operations to thousands of production models serving millions of transactions per second in just a few years, and today it supports a myriad of use cases ranging from generating marketplace forecasts to calculating ETAs and automating fraud detection.

Del Balso, Stumpf and Hermann went on to found Tecton.ai to solve the data challenges that are the biggest impediment to deploying machine learning in the enterprise today. Enterprises are already generating vast amounts of data, but the problem is how to harness and refine this data into predictive signals that power machine learning models. Engineering teams end up spending the majority of their time building bespoke data pipelines for each new project. These custom pipelines are complex, brittle, expensive and often redundant. The end result is that 78% of new projects never get deployed, and 96% of projects encounter challenges with data quality and quantity(1).

"Data problems all too often cause last-mile delivery issues for machine learning projects," said Mike Del Balso, Tecton.ai co-founder and CEO. "With Tecton, there is no last mile. We created Tecton to empower data science teams to take control of their data and focus on building models, not pipelines. With Tecton, organizations can deliver impact with machine learning quickly, reliably and at scale."

Tecton.ai has assembled a world-class engineering team with deep experience building machine learning infrastructure for industry leaders such as Google, Facebook, Airbnb and Uber. Tecton is the industry's first data platform designed specifically to support the requirements of operational machine learning. It empowers data scientists to build great features, serve them to production quickly and reliably, and do it at scale.

Tecton makes the delivery of machine learning data predictable for every company.

"The ability to manage data and extract insights from it is catalyzing the next wave of business transformation," said Martin Casado, general partner at Andreessen Horowitz. "The Tecton team has been on the forefront of this change, with a long history of machine learning/AI and data at Google, Facebook and Airbnb and building the machine learning platform at Uber. We're very excited to be partnering with Mike, Kevin, Jeremy and the Tecton team to bring this expertise to the rest of the industry."

"The founders of Tecton built a platform within Uber that took machine learning from a bespoke research effort to the core of how the company operated day-to-day," said Matt Miller, partner at Sequoia. "They started Tecton to democratize machine learning across the enterprise. We believe their platform for machine learning will drive a Cambrian explosion within their customers, empowering them to drive their business operations with this powerful technology paradigm, unlocking countless opportunities. We were thrilled to partner with Tecton along with a16z at the seed and now again at the Series A. We believe Tecton has the potential to be one of the most transformational enterprise companies of this decade."

Sign up for the free insideBIGDATA newsletter.

See more here:
Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company - insideBIGDATA

Artificial Intelligence Arrives at the Edge – Electronic Design

Research into artificial intelligence (AI) has made some mind-blowing strides, expanding the usefulness of computers. Machines can do certain tasks faster and more accurately than humans. A great example is the ILSVRC image classification contest using a type of AI called machine learning (ML). Back in 2012, AlexNet won this contest, being the first to use deep neural nets and GPUs for training. By 2015, ResNet-152 beat humans at classifying images (Fig. 1).

1. Machine learning has improved over the years, eventually exceeding the performance of people when it comes to image classification.

Other examples where computers are better than humans include games. Figure 2 summarizes a few examples where machines beat human champions along with a non-gaming case where humans are still better.

2. Machine learning isn't up to autonomous driving yet. (Game performance, 1997-2019)

Clearly, machine learning is providing some amazing new capabilities that are essential for applications such as the smart home, smart retail, the smart factory, and the smart city, but that can also be leveraged by a wide range of businesses today. This can be seen in the dramatic growth of ML cloud services available from providers such as Amazon AWS SageMaker, Microsoft Azure ML, and Google Cloud ML Engine.

Push for the Edge

Until recently, the focus for ML has centered on the cloud, with huge centralized compute centers offering vast compute and storage resources. This is shifting rapidly to the edge for a number of reasons, including:

All of these factors together make the edge the obvious place to put ML processing for many applications. That's why NXP announced the i.MX 8M Plus applications processor, claimed as the first i.MX applications processor with a dedicated machine-learning accelerator.

The i.MX 8M Plus uses 14-nm FinFET process node technology for low power with high performance, and it has a number of new features, including dual-camera image signal processors (ISPs) that support either two low-cost HD camera sensors or one 4K camera sensor for face-, object-, and gesture-recognition ML tasks. It also integrates an independent 800-MHz Cortex-M7 for real-time tasks and low-power support, video encode and decode of H.265 and H.264, an 800-MHz HiFi4 DSP, and eight pulse-density-modulation (PDM) microphone inputs for voice recognition. Industrial IoT features include Gigabit Ethernet with time-sensitive networking (TSN), two CAN-FD interfaces, and ECC.

Helping to accelerate the machine-learning-at-the-edge trend, data scientists are optimizing specific algorithms for resource-constrained devices being deployed at the edge. MobileNet is an image classification algorithm developed by Google with a focus on high accuracy while significantly reducing the amount of compute resources needed.

Figure 3 shows the dramatic reduction in processing. Going from the VGG-16 model to the MobileNet v2 model reduces the amount of compute needed at the edge by 50X. This enables a resource-constrained hardware solution at the edge to do sophisticated ML processing.

3. Neural-network algorithms can be optimized for edge computing.
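For a sense of how compact these optimized models are in practice, here is a minimal Keras sketch that loads a pretrained MobileNetV2 and runs one local, edge-style inference; the random input is a stand-in for a real camera frame.

```python
# A minimal sketch: load a pretrained MobileNetV2 (the compact classifier
# discussed above) and run one local inference on a fake camera frame.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
print(f"{model.count_params() / 1e6:.1f}M parameters")  # ~3.5M vs ~138M for VGG-16

frame = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0  # fake frame
x = tf.keras.applications.mobilenet_v2.preprocess_input(frame)
preds = model.predict(x)
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])
```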

In comparison, running MobileNet v1 at the edge on a mobile phone is significantly faster than running it in the cloud. The difference comes from eliminating cloud network latency, which can easily add between 200 ms and over 1.4 seconds, significantly delaying the response. The goal is a response under 100 ms, which appears instantaneous to the user (Fig. 4).

4. Moving machine learning to the edge delivers a faster and better user experience.

Figure 5 illustrates some of the many applications enabled by running machine learning at the edge.

5. The possibilities are almost limitless when taking advantage of machine learning at the edge.

As shown in Figure 6, each of these use cases needs a certain level of performance that determines what level of hardware is needed to run it.

6. Here are some machine-learning use cases and their performance based on the platform they run on.

It makes sense to run ML applications at the edge for the reasons already mentioned. However, a few more requirements must be met to have a successful deployment:

Ecosystem for ML Development: eIQ

Breakthrough ML applications require a design and development ecosystem that's up to the task. Along those lines, NXP created the eIQ (Edge Intelligence) tools environment, providing the tools customers need to deploy their ML technology. The eIQ ML software-development environment (Fig. 7) includes inference engines and libraries that leverage advances in open-source machine-learning technologies.

7. The eIQ Machine Learning Development Environment supports all major machine-learning frameworks.

Deployed today across a broad range of advanced AI development applications, NXP's eIQ software brings together inference engines, neural-network compilers, and optimized libraries for easier, holistic system-level application development and machine-learning algorithm enablement on NXP processors. eIQ supports a variety of processing elements for ML, including Arm Cortex-A and Cortex-M processors, GPUs (graphics processors), DSPs, and ML accelerators.

NXP has deployed and optimized these technologies, such as TensorFlow Lite, OpenCV, CMSIS-NN, Glow, and Arm NN, for popular RT and i.MX applications processors. These are accessed through the company's development environments for MCUXpresso and Yocto (Linux) to help provide seamless support for application development. eIQ software is accompanied by sample applications in object detection and voice recognition to provide a starting point for machine learning at the edge.
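Since TensorFlow Lite is among the inference engines listed, here is a generic sketch of converting a Keras model to the TensorFlow Lite format commonly used for such edge deployments; the tiny placeholder model is an assumption, and this shows stock TensorFlow tooling rather than NXP's eIQ workflow itself.

```python
# A minimal sketch of preparing a model for an edge runtime with TensorFlow
# Lite. The tiny Keras model is a placeholder for a real trained network.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g., weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # flatbuffer ready for an edge device runtime
```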

The eIQ Auto toolkit is a specialty component of the eIQ machine-learning software-development environment, providing an Automotive SPICE-compliant deep-learning toolkit for NXP's S32V2 processor family and advanced driver-assistance system (ADAS) development. This technology offers functional safety, supporting ISO 26262 up to ASIL-D, IEC 61508, and DO-178.

Edge Security: EdgeLock

Security at the edge is critical. Needed capabilities include a secure-boot trust anchor, on-chip cryptography, secure provisioning, mutual device authentication, secure device management, over-the-air (OTA) secure updates, and lifecycle management.

To support this, NXP created the EdgeLock portfolio, delivering secure elements, secure authenticator, and embedded security to application processors and microcontrollers. EdgeLock brings integrity, authenticity, and privacy to the edge node and provides security from the edge to the gateway and the cloud.

Affordable Edge AI

eIQ brings ML to NXP's existing line of SoCs, leveraging the CPU, GPU, and DSP. However, even the fastest CPUs are inefficient at executing highly complex neural networks. Going forward, the company is creating a new family of hybrid AI SoCs that combine a state-of-the-art embedded SoC with the latest AI/ML hardware neural-processing-unit (NPU) technology for both application processors and microcontrollers. The result leverages existing SoC applications and adds the parallel compute power of an ML accelerator.

Future

The pace of change in the AI landscape is accelerating. Figure 8, from the AI Index 2018, shows the growth in deep-learning job openings, and Figure 9 illustrates the mentions of AI and machine learning on company earnings calls.

8. Job openings that require AI skills have grown significantly (2015-2017). (Sources: Monster.com, AI Index 2018)

9. IT company earnings calls that mention AI terms have risen significantly (2007-2017). (Sources: Prattle, AI Index 2018)

AI and machine learning are creating a seismic shift in the computer industry that will empower and improve our lives. Taking it to the edge will accelerate our path to a better tomorrow.

See the article here:
Artificial Intelligence Arrives at the Edge - Electronic Design

This 17-year-old boy created a machine learning model to suggest potential drugs for Covid-19 – India Today

In keeping with its tradition of high excellence and achievements, Greenwood High International School's student Tarun Paparaju of Class 12 has achieved the 'Grand Master' level in kernels, the highest accreditation in Kaggle, holding a rank of 13 out of 118,024 Kagglers worldwide. Kaggle is the world's largest online community for Data Science and Artificial Intelligence.

There are only 20 Kernel Grandmasters out of the three million users on Kaggle worldwide, and Tarun, aged 17 years, is honored to be one of the 20 Kernel Grandmasters now. Users of Kaggle are placed at different levels based on the quality and accuracy of their solutions to real-world artificial intelligence problems. The five levels in ascending order are Novice, Contributor, Expert, Master, and Grandmaster.

Kaggle hosts several data science competitions and contestants are challenged to find solutions to these real-world challenges. Kernels are a medium through which Kagglers share their code and insights on how to solve the problem.

These kernels include in-depth data analysis, visualisation, and machine learning, usually written in the Python or R programming language. Other Kagglers can upvote a kernel if they believe it provides useful insights or solves the problem. The 'Kernels Grandmaster' title at Kaggle requires 15 kernels awarded gold medals.

Tarun's passion for calculus, mathematical modeling, and data science from a very young age got him interested in participating in and contributing to the Kaggle community.

He loves solving real-world data science problems, especially in areas based on deep learning, such as natural language processing and signal processing. Tarun is an open-source contributor to Keras, a deep learning framework.

He has proposed and added Capsule NN layer support to the Keras framework. He also writes blogs about his adventures and learnings in data science.

Now, he works closely with the Kaggle community and aspires to be a scholar in the area of natural language processing. Outside of data science and academics, he loves playing cricket and football; sports are a large part of his life.


Original post:
This 17-year-old boy created a machine learning model to suggest potential drugs for Covid-19 - India Today

Rise in the demand for Machine Learning & AI skills in the post-COVID world – Times of India

The world has seen an unprecedented challenge and is battling this invisible enemy with all its might. The novel coronavirus has left global economies hanging by a thread, businesses impacted, and most people locked down. But while the physical world has come to a drastic halt or slowdown, the digital world is blooming. And in addition to understanding the possibilities of home workspaces, companies are finally understanding the scope of Machine Learning and Artificial Intelligence. A trend that was already garnering attention in recent years, ML & AI have taken centre stage as more and more brands realise the possibilities of these tools. According to a research report released in February, demand for data engineers was up 50% and demand for data scientists was up 32% in 2019 compared to the prior year. Not only is machine learning being used by researchers to tackle this global pandemic, it is also being seen as an essential tool in building the post-COVID world.

This pandemic is being fought on the basis of numbers and data, which is the key reason it has driven people's interest in Machine Learning, a field that helps us collect, analyse and understand vast quantities of data. Combined with the power of Artificial Intelligence, Machine Learning can help with an early understanding of problems and quick resolutions. In recent times, ML & AI have been used by doctors and medical personnel to track the virus, identify potential patients and even analyse the possible cures available. Even in the current economic crisis, jobs in data science and machine learning have been least affected. All these factors indicate that machine learning and artificial intelligence are here to stay, and this is the key reason that data science is an area you can particularly focus on in this lockdown.

The capabilities of Machine Learning and Data Sciences

One of the key reasons that a number of people have been able to shift to working from home without much hassle is the use of ML & AI by businesses. This shift has also motivated many businesses, both small-scale and large-scale, to re-evaluate their functioning. With companies already announcing plans to look at more robust working mechanisms, involving less office space and more detailed and structured online working systems, the focus on Machine Learning is bound to increase considerably.

The Current Possibilities

The world of data science has come out stronger during this lockdown, and the interest in and importance given to the subject are on the rise. AI-powered mechanics and operations have already made it easier to manage various spaces with lower risks, and this trend of turning to AI is bound to increase in the coming years. This is why being educated in this field can improve your skills in this segment. If you are someone who has always been intrigued by data science and machine learning, or are already working in this field and looking for ways to accelerate your career, there are various courses you can turn to. With the increased free time that staying at home has given us, you can begin an additional degree to pad up your resume, learn some cutting-edge concepts and gain access to industry experts.

Start learning more about Machine Learning & AI

If you are wondering where to begin this journey of learning, a leading online education service provider, upGrad, has curated programs that would suit you. From Data Science to in-depth learning in AI, there are multiple programs on their website covering various domains. The PG Diploma in Machine Learning and AI, in particular, has a brilliant curriculum that will help you progress in the field of Machine Learning and Artificial Intelligence. A carefully crafted program from IIIT Bangalore offering 450+ hours of learning and more than 10 practical hands-on capstone projects, it has been designed to help people get a deeper understanding of real-life problems in the field.

Understanding the PG Diploma in Machine Learning & AI

This 1-year program at upGrad has been designed especially for working professionals looking for a career push. The curriculum consists of 30+ case studies and assignments and 25+ industry mentorship sessions, which help you understand everything you need to know about the field. The program strikes a balance between the practical exposure required to instil better management and problem-solving skills and the theoretical knowledge that will sharpen your skills in this category. It also gives learners IIIT Bangalore alumni status and job placement assistance with top firms on successful completion.

Read the original here:
Rise in the demand for Machine Learning & AI skills in the post-COVID world - Times of India

Determined AI makes its machine learning infrastructure free and open source – TechCrunch

Machine learning has quickly gone from niche field to crucial component of innumerable software stacks, but that doesn't mean it's easy. The tools needed to create and manage it are enterprise-grade and often enterprise-only, but Determined AI aims to make them more accessible than ever by open-sourcing its entire AI infrastructure product.

The company created its Determined Training Platform for developing AI in an organized, reliable way, the kind of thing that large companies have created (and kept) for themselves, the team explained when they raised an $11 million Series A last year.

"Machine learning is going to be a big part of how software is developed going forward. But in order for companies like Google and Amazon to be productive, they had to build all this software infrastructure," said CEO Evan Sparks. "One company we worked for had 70 people building their internal tools for AI. There just aren't that many companies on the planet that can withstand an effort like that."

At smaller companies, ML is being experimented with by small teams using tools intended for academic work and individual research. To scale that up to dozens of engineers developing a real product, there aren't a lot of options.

"They're using things like TensorFlow and PyTorch," said Chief Scientist Ameet Talwalkar. "A lot of the way that work is done is just conventions: How do the models get trained? Where do I write down which data is best? How do I transform data to a good format? All of these are bread-and-butter tasks. There's tech to do it, but it's really the Wild West. And given the amount of work you have to do to get it set up, there's a reason big tech companies build out these internal infrastructures."

Determined AI, whose founders started out at UC Berkeley's AmpLab (home of Apache Spark), has been developing its platform for a few years, with feedback and validation from some paying customers. Now, they say, it's ready for its open source debut, with an Apache 2.0 license, of course.

"We have confidence people can pick it up and use it on their own without a lot of hand-holding," said Sparks.

You can spin up your own self-hosted installation of the platform using local or cloud hardware, but the easiest way to go about it is probably the cloud-managed version, which automatically provisions resources from AWS or wherever you prefer and tears them down when they're no longer needed.

The hope is that the Determined AI platform becomes something of a base layer that lots of small companies can agree on, providing portability of results and standards so you're not starting from scratch at every company or project.

With machine learning development expected to expand by orders of magnitude in the coming years, even a small piece of the pie is worth claiming, but with luck, Determined AI may grow to be the new de facto standard for AI development in small and medium businesses.

You can check out the platform on GitHub or at Determined AIs developer site.

Visit link:
Determined AI makes its machine learning infrastructure free and open source - TechCrunch