MIT researchers warn that deep learning is approaching computational limits – VentureBeat

We're approaching the computational limits of deep learning. That's according to researchers at the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia, who found in a recent study that progress in deep learning has been strongly reliant on increases in compute. It's their assertion that continued progress will require dramatically more computationally efficient deep learning methods, either through changes to existing techniques or via new, as-yet-undiscovered methods.

"We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive," the coauthors wrote. "Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."

Deep learning is the subfield of machine learning concerned with algorithms inspired by the structure and function of the brain. These algorithms, called artificial neural networks, consist of functions (neurons) arranged in layers that transmit signals to other neurons. The signals, which are the product of input data fed into the network, travel from layer to layer and slowly tune the network, in effect adjusting the synaptic strength (weights) of each connection. The network eventually learns to make predictions by extracting features from the data set and identifying cross-sample trends.
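
To make the terminology above concrete, here is a minimal sketch in plain NumPy of a two-layer network trained by gradient descent on made-up data; the layer sizes, data, and learning rate are all illustrative assumptions, not anything drawn from the study.

```python
import numpy as np

# Toy data: 100 samples with 4 features and a binary label (made up for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two layers of weights ("synaptic strengths"), initialized randomly.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(1000):
    # Forward pass: signals travel from layer to layer.
    hidden = sigmoid(X @ W1)
    pred = sigmoid(hidden @ W2)

    # Backward pass: gradients of the cross-entropy loss slowly tune the weights.
    grad_out = (pred - y) / len(X)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    W1 -= lr * X.T @ grad_hidden

print("training accuracy:", float(((pred > 0.5) == y.astype(bool)).mean()))
```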

The researchers analyzed 1,058 papers from the preprint server Arxiv.org as well as other benchmark sources to understand the connection between deep learning performance and computation, paying particular mind to domains including image classification, object detection, question answering, named entity recognition, and machine translation. They performed two separate analyses of computational requirements, reflecting the two types of information available: the computation in a single network pass and the hardware burden of training a model.

The coauthors report highly statistically significant slopes and strong explanatory power for all benchmarks except machine translation from English to German, where there was little variation in the computing power used. Object detection, named-entity recognition, and machine translation in particular showed large increases in hardware burden with relatively small improvements in outcomes, with computational power explaining 43% of the variance in image classification accuracy on the popular open source ImageNet benchmark.
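
The kind of analysis being described, regressing benchmark performance against (log) computing power and reading off the explained variance, can be sketched in a few lines; the numbers below are invented for illustration and are not the paper's data or code.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical benchmark entries (made up for illustration, not the study's data):
# training compute in FLOPs and the achieved error rate on some task.
compute = np.array([1e16, 3e16, 1e17, 5e17, 2e18, 1e19, 8e19])
error = np.array([0.28, 0.26, 0.22, 0.19, 0.15, 0.12, 0.09])

# Regress log-error against log-compute; the squared correlation (R^2) is the
# share of variance in outcomes explained by computing power alone.
fit = linregress(np.log10(compute), np.log10(error))
print(f"slope = {fit.slope:.3f}, R^2 = {fit.rvalue ** 2:.2f}")
```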

The researchers estimate that three years of algorithmic improvement is equivalent to a 10 times increase in computing power. "Collectively, our results make it clear that, across many areas of deep learning, progress in training models has depended on large increases in the amount of computing power being used," they wrote. "Another possibility is that getting algorithmic improvement may itself require complementary increases in computing power."

In the course of their work, the researchers also extrapolated the projections to understand the computational power needed to hit various theoretical benchmarks, along with the associated economic and environmental costs. According to even the most optimistic of calculations, reducing the image classification error rate on ImageNet would require 10^5 times more computing.

To their point, a Synced report estimated that the University of Washington's Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

In a separate report last June, researchers at the University of Massachusetts at Amherst concluded that the amount of power required for training and searching a certain model involves the emission of roughly 626,000 pounds of carbon dioxide. That's equivalent to nearly five times the lifetime emissions of the average U.S. car.

"We do not anticipate that the computational requirements implied by the targets ... the hardware, environmental, and monetary costs would be prohibitive," the researchers wrote. "Hitting this in an economical way will require more efficient hardware, more efficient algorithms, or other improvements such that the net impact is this large a gain."

The researchers note there's historical precedent for deep learning improvements at the algorithmic level. They point to the emergence of hardware accelerators like Google's tensor processing units, field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), as well as attempts to reduce computational complexity through network compression and acceleration techniques. They also cite neural architecture search and meta-learning, which use optimization to find architectures that retain good performance on a class of problems, as avenues toward computationally efficient methods of improvement.

Indeed, an OpenAI study suggests that the amount of compute needed to train an AI model to the same performance on classifying images in ImageNet has been decreasing by a factor of 2 every 16 months since 2012. Google's Transformer architecture surpassed seq2seq, a previous state-of-the-art model also developed by Google, with 61 times less compute three years after seq2seq's introduction. And DeepMind's AlphaZero, a system that taught itself from scratch how to master the games of chess, shogi, and Go, took eight times less compute to match an improved version of its predecessor, AlphaGo Zero, one year later.

"The explosion in computing power used for deep learning models has ended the AI winter and set new benchmarks for computer performance on a wide range of tasks. However, deep learning's prodigious appetite for computing power imposes a limit on how far it can improve performance in its current form, particularly in an era when improvements in hardware performance are slowing," the researchers wrote. "The likely impact of these computational limits is forcing machine learning towards techniques that are more computationally efficient than deep learning."


Three Vietnamese papers accepted at the International Conference on Machine Learning – Nhan Dan Online

This is the first time a Vietnamese company has been ranked in the Top 30 contributing institutions at ICML 2020, shoulder to shoulder with the leading research centers of Apple, NEC and NTT.

The three accepted papers from VinAI Research focus on important issues in current AI research, including an optimal computational method for comparing distributions from large data; deep learning of key representations from image and video data for optimal control problems; and effective inference methods for complex nonlinear dynamic neural systems.

The research on comparing distributions from large data is the basis for many machine learning algorithms, contributing to progress in unsupervised machine learning, one of the most relevant issues in artificial intelligence, computer vision, and natural language processing and understanding. Meanwhile, the research on data representation and nonlinear dynamic systems forms the basis of breakthroughs in the automation of robots and self-driving cars.
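
The papers themselves are not reproduced here, but a distribution-comparison computation of the general kind described, measuring how far apart two large empirical samples are, might look like the following sketch using a one-dimensional Wasserstein distance; this is a generic illustration, not VinAI's method.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)

# Two large one-dimensional samples standing in for "distributions from large data".
sample_a = rng.normal(loc=0.0, scale=1.0, size=100_000)
sample_b = rng.normal(loc=0.5, scale=1.2, size=100_000)

# The 1-D Wasserstein (earth mover's) distance measures how far apart the two
# empirical distributions are; many unsupervised-learning objectives rest on
# exactly this kind of distribution comparison.
print("Wasserstein distance:", wasserstein_distance(sample_a, sample_b))
```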

This is the first time a Vietnamese company has been featured in the Top 30 contributing institutions at the International Conference on Machine Learning (ICML), a conference traditionally dominated by technologically developed nations such as the USA, UK, China, and Canada.

The event not only marks VinAI's position in the global technology community, but also shows its transformation into a leading technology corporation, gradually integrating with and reaching toward the global technology frontier.

"The world has gradually become aware of Vietnam's AI research thanks to VinAI's efforts. We will continue to cooperate with leading research institutes and universities from across the world in order to build up a network of exchange and research to gradually bring the world's artificial intelligence closer to Vietnam," said Dr. Bui Hai Hung, Director of the VinAI Research Institute.

Previously, in December 2019, VinAI announced its first two scientific research papers at NeurIPS (Neural Information Processing Systems), the annual international conference on neural information processing systems. Besides intensive research, VinAI Research's engineering team is making every effort to develop high-quality AI core applications and technologies.

In May 2020, VinAI became one of the first companies in the world to successfully develop face recognition technology that works on people wearing masks.

The International Conference on Machine Learning (ICML) 2020 took place beginning July 12, 2020. The event was attended virtually by the world's leading experts in artificial intelligence and machine learning. With 40 years of organizational experience, ICML provides and publishes advanced research on all aspects of machine learning. ICML, along with NeurIPS, is one of the leading international academic conferences on artificial intelligence.


Altair introduces new version of its machine learning and predictive analytics solution – ETAuto.com

New Delhi: Global technology company Altair on Thursday released a new version of Altair Knowledge Studio that is claimed to bring enhanced flexibility and transparency to data modeling and predictive analytics.

As per the release, the updated version of Knowledge Studio now employs automated machine learning (AutoML) to optimize the modeling process.

Available via Altair's units-based licensing model, the new version streamlines the entire workflow. At the outset, data is improved automatically by replacing missing values and dealing with outliers.

AutoML then builds and compares many different models to identify the best available option, said the company.
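
Altair has not published Knowledge Studio's internals, but the generic AutoML workflow the release describes, impute missing values and then build and compare several candidate models, can be sketched roughly as follows with scikit-learn; the dataset, model choices, and settings here are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy dataset with ~5% of values knocked out to simulate missing data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Impute missing values (median is fairly robust to outliers), then train and
# compare each candidate with cross-validation, keeping the best performer.
scores = {}
for name, model in candidates.items():
    pipeline = make_pipeline(SimpleImputer(strategy="median"), model)
    scores[name] = cross_val_score(pipeline, X, y, cv=5).mean()

best = max(scores, key=scores.get)
print(scores, "-> best model:", best)
```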

Altair said Knowledge Studio does not adopt a black box approach that shuts out users. Although models are developed automatically, explainable AI helps users in understanding, interpreting and evaluating the process.

Commenting on the new version, Sam Mahalingam, Altair's chief technology officer, said, "As a powerful solution that can be used by data scientists and business analysts alike, Knowledge Studio continues to lead the data science and machine learning market."

He further added, "Without requiring a single line of code, Knowledge Studio visualizes data fast and quickly generates explainable results."


These drones won't fly into one another, thanks to machine learning – DroneDJ

Engineers at Caltech have successfully designed a new method to control the movement of drones within a swarm to stop them from flying into one another. The new method relies on data to control the movement of the drones through cluttered, unmapped spaces.

The team, led by Soon-Jo Chung and Yisong Yue with help from Caltech graduates Benjamin Rivière, Wolfgang Hönig, and Guanya Shi, needed to take on two major challenges that arise when multiple drones fly together.

The first is having the drones fly into a new environment for the first time and needing to make split-second decisions to ensure they don't hit each other or the obstacles surrounding them. The second is having multiple drones: the more drones flying, the less space available for each of them to maneuver around obstacles and one another.

The team developed GLAS, aka Global-to-Local Safe Autonomy Synthesis, which means the drones don't need a picture of their surroundings before they commence flight. Rather, these drones generate their trajectories on the fly. The GLAS algorithm is used alongside Neural-Swarm, which learns the complex aerodynamic interactions in close-proximity flight.
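
The published GLAS policy is a learned neural network, so the following is only a conceptual toy: a hand-written local rule that steers toward a goal while repelling from nearby neighbors, meant to illustrate what generating a trajectory on the fly from local observations means rather than to reproduce Caltech's algorithm.

```python
import numpy as np

def local_step(position, goal, neighbor_positions, max_speed=1.0, safe_radius=1.5):
    """Toy local planner: steer toward the goal while being pushed away from any
    neighbor closer than safe_radius. (GLAS instead *learns* its local policy by
    imitating a global planner; this hand-written rule only illustrates acting
    from local observations, on the fly, without a global map.)"""
    direction = goal - position
    command = direction / (np.linalg.norm(direction) + 1e-9)
    for neighbor in neighbor_positions:
        offset = position - neighbor
        distance = np.linalg.norm(offset)
        if distance < safe_radius:
            # Repulsion grows as the neighbor gets closer.
            command += (safe_radius - distance) / safe_radius * offset / (distance + 1e-9)
    speed = np.linalg.norm(command)
    return command / (speed + 1e-9) * min(speed, max_speed)

# One drone heading right with another drone sitting almost directly in its path.
print(local_step(np.array([0.0, 0.0]), np.array([5.0, 0.0]), [np.array([1.0, 0.2])]))
```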

Here's Soon-Jo Chung, Bren Professor of Aerospace at Caltech:

"Our work shows some promising results to overcome the safety, robustness, and scalability issues of conventional black-box artificial intelligence (AI) approaches for swarm motion planning with GLAS and close-proximity control for multiple drones using Neural-Swarm."


The team tested GLAS and Neural-Swarm with 16 drones by flying them in an open arena at Caltech's Center for Autonomous Systems and Technologies (CAST). The tests found that GLAS was able to outperform current algorithms by 20%, while Neural-Swarm outperformed current controllers, reducing tracking errors by up to a factor of four.

Yisong Yue, professor of computing and mathematical sciences at Caltech, also commented on the GLAS system.

"These projects demonstrate the potential of integrating modern machine-learning methods into multi-agent planning and control, and also reveal exciting new directions for machine-learning research."

What do you think about this method that ensures the drones don't crash into one another mid-air? Let us know your thoughts in the comments below.




Why supervised learning is more common than reinforcement learning – VentureBeat


Supervised learning is a more commonly used form of machine learning than reinforcement learning, in part because it's a faster, cheaper form of machine learning. With data sets, a supervised learning model can map inputs to outputs to create image recognition or machine translation models. A reinforcement learning algorithm, on the other hand, must observe, and that can take time, said UC Berkeley professor Ion Stoica.

Stoica works on robotics and reinforcement learning at UC Berkeley's RISELab, and if you're a developer working today, then you've likely used or come across some of his work, which has built part of the modern infrastructure for machine learning. He spoke today as part of Transform 2020, an annual AI event hosted by VentureBeat that this year takes place online.

"With reinforcement learning, you have to learn almost like a program because reinforcement learning is actually about a sequence of decisions to get a desired result to maximize a desired reward, so I think these are some of the reasons for greater adoption," he said. "The reason we saw a lot of successes in gaming is because with gaming, it's easy to simulate them, so you can do these trials very fast. But when you think about the robot which is navigating in the real world, the interactions are much slower. It can lead to some physical damage to the robot if you make the wrong decisions. So yeah, it's more expensive and slower, and that's why it takes much longer and is more typical."

Reinforcement learning is a subfield of machine learning, drawing on multiple disciplines, that began to coalesce in the 1980s. It involves an AI agent whose goal is to interact with an environment to learn a policy that maximizes reward on a task. Achieving the task's reward reinforces which actions, or policy, the agent should follow.
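
A minimal, textbook-style example of that loop is tabular Q-learning on a toy corridor environment; everything below (the environment, rewards, and hyperparameters) is an illustrative assumption and has nothing to do with the systems mentioned in the article.

```python
import numpy as np

# A 5-state corridor: the agent starts in state 0 and earns a reward of +1
# for reaching state 4. Actions: 0 = move left, 1 = move right.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.5
rng = np.random.default_rng(0)

for episode in range(300):
    state = 0
    for _ in range(200):  # cap episode length
        # Epsilon-greedy policy: mostly exploit what was learned, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(Q[state].argmax())
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else 0.0
        # Reinforce: move Q toward the reward plus the discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state
        if state == 4:  # goal reached, episode over
            break

# State 4 is terminal, so its entry is never updated.
print("learned policy (0=left, 1=right):", Q.argmax(axis=1))
```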

Popular reinforcement learning examples include game-playing AI like DeepMind's AlphaGo and AlphaStar, which plays StarCraft 2. Engineers and researchers have also used reinforcement learning to train agents to learn how to walk, work together, and consider concepts like cooperation. Reinforcement learning is also applied in sectors like manufacturing, to help design language models, or even to generate tax policy.

While at RISELab's predecessor, AMPLab, Stoica helped develop Apache Spark, an open source big data and machine learning framework that can operate in a distributed fashion. He is also a creator of the Ray framework for distributed reinforcement learning.

"We started Ray because we wanted to scale up some machine learning algorithms. So when we started Ray initially with distributed learning, we started to focus on reinforcement learning because it's not only very promising, but it's very demanding, a very difficult workload," he said.

In addition to AI research as a professor, Stoica also cofounded a number of companies, including Databricks, which he founded with other Apache Spark creators. Following a funding round last fall, Databricks received a $6.2 billion valuation. Other prominent AI startups cofounded by UC Berkeley professors include Ambidextrous Robotics, Covariant, and DeepScale.

Last month, Stoica joined colleagues in publishing a paper about Dex-Net AR at the International Conference on Robotics and Automation (ICRA). The latest iteration of the Dex-Net robotics project from RISELab uses Apple's ARKit and a smartphone to scan objects; that data is then used to train a robotic arm to pick up an object.


This Machine Learning-Focused VC Firm Just Added A Third Woman Investment Partner – Forbes

Basis Set Ventures' investment partners Chang Xu, Lan Xuezhao and Sheila Vashee are looking to run a different kind of venture capital firm.

Basis Set Ventures doesn't want to be your typical venture capital firm. First, there's the fledgling VC firm's focus on a technical area that has seen some disillusionment in recent years: machine learning and artificial intelligence. Sure, AI has become something out of startup bingo, tacked on in pitches and often stretched beyond meaning. Basis Set founder Lan Xuezhao is confident she and her team can figure out what's real and what's not. "We want to transform the way people work," she says.

Basis Set is different in another meaningful way, too: a woman-led VC firm, it has recently operated with two women investment partners in Xuezhao and Chang Xu, a partner who joined the firm from Upfront Ventures last year. Now, Basis Set has added its third woman investment partner in Sheila Vashee, giving the firm three women at the top of its investment committee.

Vashee joins Basis Set from Opendoor, where she led the unicorn's growth team, including marketing, partnerships, operations and some of its product. Before her two-and-a-half-year stint at Opendoor, Vashee was an early employee at Dropbox, where she helped oversee marketing and the launch of its business product. At Dropbox she sat close to Xuezhao, who joined in 2013 and led corporate development before departing to found Basis Set in 2017.

In an interview, Vashee says she decided to join Basis Set in part because of its thesis, and in part because of a culture that operates differently from the typical venture shop. "I believe that there's going to be a new wave of work tools that really revolutionizes every industry on every level, and I want to build that future," Vashee says.

Given its self-imposed focus on companies utilizing machine learning and AI, Basis Set has to be selective in which companies it pursues. Like other investors that use data to attempt to find better deals, Basis Set's data science team studies companies' leadership and launches to move fast when attractive fundraises are coming together, hoping that its speed and accessibility will allow it to join rounds pursued by the best-known VC firms. A network of technical advisors, meanwhile, is intended to evaluate which startups are really using machine learning and AI in their core software.

"We want to be superhuman in the sense that our data science team builds the armor that makes us see better, see further, run faster and process a lot more deals and high-quality investments," says Xuezhao.

With 50,000 founders in its database, the partners at Basis Set hope they can evaluate more startups in more places, including those that might fall into the blind spots of traditional VC because the founders don't have the typical background or don't base their company in Silicon Valley. That includes the partners training the firm's algorithms hands-on. "Every morning, if I'm not doing anything, I'm in my inbox, saying whether a company is good for us and labeling data myself. That makes the system better," Xuezhao says.

So far, Basis Set's approach has led to investments such as Workstream, a hiring platform; Rasa, which provides conversational AI tools to big businesses; and Ike, which offers automation tools to the trucking industry. (It also includes Lime, a business that may use data but is better known for its rental scooters.) Vashee brings perspective from collaboration software and real estate software given her background, but the partners say their focuses within Basis Set are flexible.

Basis Set's partners hope they stand out because of their young firm's culture, too. Vashee says that her experience and Xuezhao's as professional moms with kids at home during the current Covid-19 work-from-home environment helps them relate to some founders who might not connect as well with more traditional VCs, but who are going through many of the same things: toddlers getting sick, the need to take early evening breaks and then get business done late into the night after the family's asleep.

"My kids are right outside the door screaming now, and that wouldn't work in a normal VC," Vashee says. "But I think integrating every part of our lives makes us better at everything we do, and actually makes founders relate to us better."


How Machine Learning Will Impact the Future of Software Development and Testing – ReadWrite

Machine learning (ML) and artificial intelligence (AI) are frequently imagined to be the gateways to a futuristic world in which robots interact with us like people and computers can become smarter than humans in every way. But of course, machine learning is already being employed in millions of applications around the world, and it's already starting to shape how we live and work, often in ways that go unseen. And while these technologies have been likened to destructive bots or blamed for artificial panic-induction, they are helping in vast ways, from software to biotech.

Some of the sexier applications of machine learning are in emerging technologies like self-driving cars; thanks to ML, automated driving software can not only self-improve through millions of simulations, it can also adapt on the fly if faced with new circumstances while driving. But ML is possibly even more important in fields like software testing, which is universally employed and used for millions of other technologies.

So how exactly does machine learning affect the world of software development and testing, and what does the future of these interactions look like?

A Briefer on Machine Learning and Artificial Intelligence

First, let's explain the difference between ML and AI, since these technologies are related but often confused with each other. Machine learning refers to a system of algorithms designed to help a computer improve automatically through the course of experience. In other words, through machine learning, a function (like facial recognition, or driving, or speech-to-text) can get better and better through ongoing testing and refinement; to the outside observer, the system looks like it's learning.

AI is considered an intelligence demonstrated by a machine, and it often uses ML as its foundation. It's possible to have an ML system without demonstrating AI, but it's hard to have AI without ML.

The Importance of Software Testing

Now, let's take a look at software testing, a crucial element of the software development process and, arguably, the most important. Software testing is designed to make sure the product is functioning as intended, and in most cases, it's a process that plays out many times over the course of development, before the product is actually finished.

Through software testing, you can proactively identify bugs and other flaws before they become a real problem, and correct them. You can also evaluate a product's capacity, using tests to evaluate its speed and performance under a variety of different situations. Ultimately, this results in a better, more reliable product, and lower maintenance costs over the product's lifetime.
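
In practice, the cheapest form of that proactive checking is an ordinary automated test suite. The sketch below uses Python's built-in unittest module with a hypothetical apply_discount function to show how a defect (here, an invalid discount) is caught before release.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(50.0, 10), 45.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        # Catching this during development is far cheaper than after release.
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()
```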

Attempting to deliver a software product without complete testing would be akin to building a large structure devoid of a true foundation. In fact, it is estimated that the cost of fixing issues after software delivery can be 4-5x the overall cost of the project itself when proper testing has not been fully implemented. When it comes to software development, failing to test is failing to plan.

How Machine Learning Is Reshaping Software Testing

Here, we can combine the two. How is machine learning reshaping the world of software development and testing for the better?

The simple answer is that ML is already being used by software testers to automate and improve the testing process. It's typically used in combination with the agile methodology, which puts an emphasis on continuous delivery and incremental, iterative development, rather than building an entire product all at once. It's one of the reasons I have argued that the future of agile and scrum methodologies involves a great deal of machine learning and artificial intelligence.

Machine learning can improve software testing in many ways.

While cognitive computing holds the promise of further automating a mundane but hugely important process, difficulties remain. We are nowhere near the level of process automation acuity required for full-blown automation. Even in today's best software testing environments, machine learning aids in batch-processing bundled code-sets, allowing for testing and resolving issues with large data without the need to decouple, except in instances when errors occur. And even when errors do occur, the structured ML will alert the user, who can mark the issue for future machine or human amendments and continue its automated testing processes.

Already, ML-based software testing is improving consistency, reducing errors, saving time, and all the while, lowering costs. As it becomes more advanced, it's going to reshape the field of software testing in new and even more innovative ways. But the critical phrase there is "going to." While we are not yet there, we expect the next decade will continue to improve how software developers iterate toward a finished product in record time. It's only one reason the future of software development will not be nearly as custom as it once was.

Nate Nead is the CEO of SEO.co, a full-service SEO company, and DEV.co, a custom web and software development business. For over a decade Nate has provided strategic guidance on technology and marketing solutions for some of the most well-known online brands. He and his team advise Fortune 500 and SMB clients on software, development and online marketing. Nate and his team are based in Seattle, Washington and West Palm Beach, Florida.


The State of Machine Learning Adoption in the Enterprise – CIO Applications

Machine learning libraries with well-defined interfaces and documentation are becoming more accessible, thereby facilitating the technology's adoption.

We rely on a cross-functional team setup (developer advocates), a focus on transformational or disruptive solutions, customer pain points/solutions, and communication/marketing.

How do you see the evolution of machine learning within the next few years with regard to some of its potential disruptions and transformations?

The industry is still navigating through the ML hype. There are a few ML-based applications that have been successfully deployed and added to applications found in the marketplace. For example, voice and image recognition, service brokering and matchmaking, consumer forecasting, etc. have found their place in domestic use, but these are still far from truly becoming disruptions in the industrial space.

Critical aspects in the success of ML evolution are:

The reduction in complexity of mapping domain expertise to ML-based solutions. Today there is no straightforward path for transferring domain knowledge to the data scientists, on whom there is still a high dependency.

As we continue to mature and descend from the ML hype, we will soon realize that not all industrial processes are suited for ML. This aspect still needs to be settled.

Provide greater access to ML automation.

Legacy systems (server/IaaS based) are decelerators in the ML evolution. These tools need to undergo structural upgrades to be able to cope with the new wave of data and analytics requirements (scalability, volume, speed, multitenancy, etc.). New data and compute frameworks are going to be needed to reduce complexity while increasing automation.

Agile change management cycles.

ML-model management soon to become a critical-path need.

A workforce skillset aligned with the know-how to map ML to domain expertise is precious.

What would be the single piece of advice that you could impart to a fellow or aspiring professional in your field embarking on a similar venture or professional journey along the lines of your service and area of expertise?

Think outside the box. Protect a portion of your resources allocated to transformation.

Use open-source technologies and university partnership and internship programs to pilot solutions and prove out ROI. Companies have been restricting development to their internally conceived software solutions. However, it is now understood that no single player will be able to provide all the pieces of the overall solution. Therefore, there is value in looking for potential partnerships that would increase the chances of success.

Make IP solutions accessible to the industry and let other ideas into the internal design process. This implies the need for a cultural transformation. Look at effective business and pricing models. Perhaps one can achieve a more effective business by partnering accordingly. And lastly, using resources to create an all-encompassing solution hinders the ability of a company to rapidly adapt to a fast-paced technology evolution. So develop while you're small and then grow.


Machine Learning: The Future from the Perspective of Model Building – CIO Applications

A good example of this is using specific gestures to raise or lower the volume or to change tracks, instead of pushing buttons to navigate your car's entertainment system. Some companies, such as Arcturus Networks, are building software modules for surveillance cameras and then selling them to camera manufacturers for integration into their end products. These are just a few examples of the types of companies that are popping up with specialties related to application functions.

Everybody can talk about a neural network, but it is essential to understand what it really means and the value it brings to finding other ways of solving problems

The main driver for us is figuring out how to make open-source technologies easier for our customers to use. The NXP eIQ Machine Learning Software Development Environment is continuously expanding to include model conversion for a wide range of NN frameworks and inference engines, such as TensorFlow Lite and Glow (the PyTorch compiler). There are also open-source technologies from Arm, such as Arm NN, that will enable higher-performance machine learning on Arm Cortex-A processors. We are even using open-source inference engines to enable machine learning accelerators in our devices. A case in point is our new device, the i.MX 8M Plus. This is our first applications processor featuring an integrated machine learning accelerator that delivers two to three times more performance than NXP devices without it. And integrating higher-performance machine learning capability with acceleration is one of the emerging trends in the industry.
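
As one concrete example of the model-conversion step mentioned above, a TensorFlow model is typically shrunk for edge deployment with the standard TensorFlow Lite converter; this sketch shows only that generic conversion (with default optimizations such as quantization), not NXP's eIQ tooling itself, and the small model here is a placeholder.

```python
import tensorflow as tf

# A small Keras model standing in for one you would deploy at the edge.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite with default optimizations (e.g. quantization),
# which shrinks the model and speeds up inference on embedded targets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```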

What's Next?

The problem is that machine learning, or AI in general, is such a fast-growing area. The good and bad is that there have been far too many different technologies to keep up with and for us to support. Moving into the future, the technologies around today will either be merged or we'll start to see more de facto standards. For instance, TensorFlow is something that's not going to go away and represents a significant share of machine learning developers. On the other hand, PyTorch has quickly been gaining in popularity, especially in the academic community. Other similar technologies created with a specific purpose in mind may be useful, but industry adoption is low. These outliers may merge or disappear in the future. This is perhaps one of the main trends that I see moving forward.

A few years down the road, machine learning will become a de facto standard, and you'll see it implemented in a majority of devices because people will realize that it's not magic, and the good tools that are already available to make it work are getting better. And you don't have to be a data scientist or an expert in neural network technology to integrate machine learning into your platform. That's one area where we also spend a lot of time at NXP: how do we make it easier for customers to deploy their machine learning models on our devices? We see both performance improvements and memory size reductions as the technology becomes more optimized, so that's going to be a significant way forward.

Piece of Advice

As previously mentioned, we have developed a technology called eIQ for edge intelligence. I encourage people to check it out, try walking through some of the application examples, and experience machine learning in action. Like most of us, if you're trying to learn more about this technology, there are many good YouTube videos and an abundance of articles; you just have to spend the time filtering through them. But you can learn a lot from what people have posted online: everything from the basics of what a neural network is, how to train a neural network, how to make it more performance-efficient and more accurate, and so on. There's plenty of information available for people who are starting out. One exciting thing about machine learning, which applies to other technologies as well, is that the more you learn about it, the more you realize you don't know. Everybody can talk about a neural network, but understanding what it really means and its value in solving problems is essential to unlocking machine learning's extraordinary potential.


Maintaining the Human Element in Machine Learning – Gigaom



Join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guest Nicolas Omont from Dataiku, a leader across the entire AI lifecycle.

In this 1-hour webinar, you will discover:

Machine learning (ML) and ML operations platforms are becoming increasingly popular and sophisticated. That's a good thing, as it transforms AI initiatives from science projects into rigorous engineering efforts. But with such platforms comes the temptation of automation: scripting the whole ML process, not just optimizing models but also monitoring their drift in accuracy and retraining them. While some automation is good, humans play a critical role.

Elements of fairness are contextual and involve tradeoffs. Changes in data may require retraining or restructuring a models features, depending on circumstances and current events. All of this requires human judgment, carefully integrated with automated management and algorithmic learning. Humans have to be part of the workflow, included in the feedback loop, and involved in the process.
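
One small illustration of keeping a human in that feedback loop is a drift monitor that tracks recent accuracy and, rather than retraining automatically, flags a person when accuracy degrades; the window size, threshold, and data stream below are arbitrary assumptions, not anything from Dataiku's platform.

```python
from collections import deque

class DriftMonitor:
    """Minimal sketch: track recent prediction accuracy and flag a human
    review when it falls below a threshold, instead of retraining blindly."""
    def __init__(self, window=200, threshold=0.85):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def needs_review(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = DriftMonitor()
# In production this loop would consume live predictions and ground-truth labels.
for prediction, actual in [(1, 1), (0, 1), (1, 1)] * 100:
    monitor.record(prediction, actual)
    if monitor.needs_review():
        print("Accuracy dropped; route to a human before retraining.")
        break
```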
