Data scientists are pushing the boundaries of analytics and making a fortune. This is how you can join them. – The Next Web

TLDR: From data analysis to machine learning to artificial intelligence, the 2020 All-in-One Data Scientist Mega Bundle training explains it all.

In the event you aren't a numbers person, it's entirely possible that the science of breaking down data, putting it into a new configuration and thereby recontextualizing its meaning may feel like geek gobbledygook.

But data science isn't about making information impenetrably complex. It's about making huge data sets more relatable to the average person. Take new mom and data scientist Caitlin Hudon, who used her professional skills to visually map the changes her new baby had on Mom's daily routine. And in case you couldn't guess, the change was seismic.

While this is just a look at one woman's schedule, it's a perfect example of how one glance at the right visualization can bring big data to an infinitely personal level. And since the average data scientist is making over $110,000 a year, it's definitely a skill worth knowing. You can understand all the principles with the expansive 2020 All-in-One Data Scientist Mega Bundle of training, now just $39.99, over 90 percent off, from TNW Deals.

This ginormous collection of 12 courses and more than 140 hours of instruction may look intimidating, but it's all geared to helping the uninitiated grasp how data analytics actually works, from understanding how to store, manage and sort data to using the tools professional data managers use to find hidden meaning in all those numbers.

Whether it's Hadoop's networking power, the analytics engine Apache Spark, or the database manager MongoDB, this training unlocks the right apps, making analysis that would once have been nearly impossible a lot more manageable.

Once you've formatted your data, you'll also have a background in Tableau 10, the world's most popular data visualization software, for logging and displaying your conclusions in a whole new way. Of course, if you're a Microsoft Excel diehard, there's even instruction here in how that app warhorse can play a key role in 2020 data analysis.

Meanwhile, once you've tackled courses in coding languages like Python and R, you'll be ready to apply those languages in the greatest new frontier in data science: machine learning and artificial intelligence. An additional three courses explore this fascinating and expanding new field, teaching computers to process data, understand results and change behaviors all on their own.

Usually $6,000, this university-level training with all its resources is now available for only $39.99 with this limited-time deal.

Prices are subject to change.


3 important trends in AI/ML you might be missing – VentureBeat

According to a Gartner survey, 48% of global CIOs will deploy AI by the end of 2020. However, despite all the optimism around AI and ML, I continue to be a little skeptical. In the near future, I don't foresee any real inventions that will lead to seismic shifts in productivity and the standard of living. Businesses waiting for major disruption in the AI/ML landscape will miss the smaller developments.

Here are some trends that may be going unnoticed at the moment but will have big long-term impacts:

Gone are the days when on-premises versus cloud was a hot topic of debate for enterprises. Today, even conservative organizations are talking cloud and open source. No wonder cloud platforms are revamping their offerings to include AI/ML services.

With ML solutions becoming more demanding, adding CPUs and RAM is no longer the only way to speed up or scale. More algorithms are being optimized for specific hardware than ever before, be it GPUs, TPUs, or Wafer Scale Engines. This shift toward more specialized hardware to solve AI/ML problems will accelerate. Organizations will limit their use of CPUs to solving only the most basic problems. The risk of becoming obsolete will render generic compute infrastructure for ML/AI unviable. That's reason enough for organizations to switch to cloud platforms.

The increase in specialized chips and hardware will also lead to incremental algorithm improvements that leverage the hardware. While new hardware/chips may allow the use of AI/ML solutions that were earlier considered slow or impossible, a lot of the open-source tooling that currently powers generic hardware needs to be rewritten to benefit from the newer chips. Recent examples of algorithm improvements include Sideways, which speeds up DL training by parallelizing the training steps, and Reformer, which optimizes the use of memory and compute power.

I also foresee a gradual shift in focus from data privacy toward the privacy implications of ML models. A lot of emphasis has been placed on how and what data we gather and how we use it. But ML models are not true black boxes; it is possible to infer the model's inputs from its outputs over time, which leads to privacy leakage. Challenges in data and model privacy will force organizations to embrace federated learning solutions. Last year, Google released TensorFlow Privacy, a framework that works on the principle of differential privacy and the addition of noise to obscure inputs. With federated learning, a user's data never leaves their device/machine. These machine learning models are smart enough, and have a small enough memory footprint, to run on smartphones and learn from the data locally.
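As a rough illustration of the differential-privacy principle mentioned above (clipping each contribution and adding calibrated noise so no single input can be reverse-engineered), here is a minimal NumPy sketch of the mechanism behind DP-SGD. This is not TensorFlow Privacy's actual API; the function name and parameter values are hypothetical.

```python
# Illustrative sketch only: clip a per-example gradient, then add Gaussian noise.
# This shows the differential-privacy principle in plain NumPy, not TF Privacy's API.
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Bound one example's influence, then add calibrated noise to obscure it."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))   # clip to max L2 norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise                                   # noisy gradient used for the update

# Example: a raw gradient from one training step
raw_grad = np.array([0.8, -2.3, 0.1])
print(privatize_gradient(raw_grad))
```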

Usually, the basis for asking for a user's data was to personalize their individual experience. For example, Google Mail uses the individual user's typing behavior to provide autosuggestions. What about data or models that will help improve the experience not just for that individual but for a wider group of people? Would people be willing to share their trained model (not their data) to benefit others? There is an interesting business opportunity here: paying users for model parameters that come from training on the data on their local device, and using their local computing power to train models (for example, on their phone when it is relatively idle).

Currently, organizations are struggling to productionize models for scalability and reliability. The people who write the models are not necessarily experts in how to deploy them with model safety, security, and performance in mind. Once machine learning models become an integral part of mainstream and critical applications, this will inevitably lead to attacks on models similar to the denial-of-service attacks mainstream apps currently face. We've already seen some low-tech examples of what this could look like: making a Tesla speed up instead of slowing down, switch lanes, stop abruptly, or turn on its wipers without the proper triggers. Imagine the impact such attacks could have on financial systems, healthcare equipment, and other systems that rely heavily on AI/ML.

Currently, adversarial attacks are limited to academia, where they are used to better understand the implications of models. But in the not-too-distant future, attacks on models will be for profit, driven by competitors who want to show they are somehow better, or by malicious hackers who may hold you to ransom. For example, new cybersecurity tools today rely on AI/ML to identify threats like network intrusions and viruses. What if I am able to trigger fake threats? What would be the costs associated with distinguishing real from fake alerts?
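To make the fake-alert scenario concrete, here is a self-contained toy sketch of a gradient-sign ("FGSM-style") perturbation that flips the decision of a tiny linear "threat detector." The model, weights, and numbers are invented purely for illustration; real attacks target far larger models, but the mechanism is the same.

```python
# Toy adversarial example: nudge an input in the direction of the score gradient
# until a benign sample is misclassified as a threat. Not any vendor's real system.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # weights of a hypothetical logistic-regression detector
b = 0.1

def score(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # probability the input is flagged as a threat

x = np.array([0.2, 0.4, -0.3])               # benign input, scored below 0.5
grad_wrt_x = w * score(x) * (1 - score(x))   # gradient of the score w.r.t. the input
x_adv = x + 0.5 * np.sign(grad_wrt_x)        # small signed step that pushes the score up

print(score(x), score(x_adv))   # the perturbed input now triggers a fake "threat" alert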

To counter such threats, organizations need to put more emphasis on model verification to ensure robustness. Some organizations are already using adversarial networks to test deep neural networks. Today, we hire external experts to audit network security, physical security, and so on. Similarly, we will see the emergence of a new market for model testing and model security experts, who will test, certify, and maybe take on some liability for model failure.

Organizations aspiring to drive value through their AI investments need to revisit the implications for their data pipelines. The trends I've outlined above underscore the need for organizations to implement strong governance around their AI/ML solutions in production. It's too risky to assume your AI/ML models are robust, especially when they're left to the mercy of platform providers. Therefore, the need of the hour is to have in-house experts who understand why models work or don't work. And that's one trend that's here to stay.

Sudharsan Rangarajan is Vice President of Engineering at Publicis Sapient.


How is AI and machine learning benefiting the healthcare industry? – Health Europa

In order to help build increasingly effective care pathways in healthcare, modern artificial intelligence technologies must be adopted and embraced. Events such as the AI & Machine Learning Convention are essential in providing medical experts around the UK access to the latest technologies, products and services that are revolutionising the future of care pathways in the healthcare industry.

AI has the potential to save the lives of current and future patients and is something that is starting to be seen throughout healthcare services across the UK. Looking at diagnostics alone, there have been large-scale developments in rapid image recognition, symptom checking and risk stratification.

AI can also be used to personalise health screening and treatments for cancer, benefiting not only the patient but clinicians too, enabling them to make the best use of their skills, informing decisions and saving time.

The potential impact AI will have on the NHS is clear, so much so that NHS England is setting up a national artificial intelligence laboratory to enhance the care of patients and research.

The Health Secretary, Matt Hancock, commented that AI had enormous power to improve care, save lives and ensure that doctors had more time to spend with patients, so he pledged £250m to boost the role of AI within the health service.

The AI and Machine Learning Convention is part of Mediweek, the largest healthcare event in the UK. As a new feature of the Medical Imaging Convention and the Oncology Convention, the AI and Machine Learning expo offers an effective CPD-accredited education programme.

Hosting over 50 professionally led seminars, the lineup includes leading artificial intelligence and machine learning experts such as NHS England's Dr Minai Bakhai, Professor Jeremy Wyatt of the Faculty of Clinical Informatics, and Professor Claudia Pagliari from the University of Edinburgh.

Other speakers in the seminar programme come from leading organisations such as the University of Oxford, King's College London, and the School of Medicine at the University of Nottingham.

The event takes place at the National Exhibition Centre, Birmingham, on the 17th and 18th of March 2020. Tickets to the AI and Machine Learning Convention are free and gain you access to the other seven shows within MediWeek.

Health Europa is proud to partner with the AI and Machine Learning Convention; click here to get your tickets.


An implant uses machine learning to give amputees control over prosthetic hands – MIT Technology Review

Researchers have been working to make mind-controlled prosthetics a reality for at least a decade. In theory, an artificial hand that amputees could control with their mind could restore their ability to carry out all sorts of daily tasks, and dramatically improve their standard of living.

However, until now scientists have faced a major barrier: they haven't been able to access nerve signals that are strong or stable enough to send to the bionic limb. Although it's possible to get this sort of signal using a brain-machine interface, the procedure to implant one is invasive and costly. And the nerve signals carried by the peripheral nerves that fan out from the brain and spinal cord are too small.

A new implant gets around this problem by using machine learning to amplify these signals. A study, published in Science Translational Medicine today, found that it worked for four amputees for almost a year. It gave them fine control of their prosthetic hands and let them pick up miniature play bricks, grasp items like soda cans, and play Rock, Paper, Scissors.

It's the first time researchers have recorded millivolt signals from a nerve, far stronger than in any previous study.

The strength of this signal allowed the researchers to train algorithms to translate the signals into movements. "The first time we switched it on, it worked immediately," says Paul Cederna, a biomechanics professor at the University of Michigan, who co-led the study. "There was no gap between thought and movement."

The procedure for the implant requires one of the amputee's peripheral nerves to be cut and stitched up to the muscle. The site heals, developing nerves and blood vessels over three months. Electrodes are then implanted into these sites, allowing a nerve signal to be recorded and passed on to a prosthetic hand in real time. The signals are turned into movements using machine-learning algorithms (the same types that are used for brain-machine interfaces).
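For readers curious how recorded signals might be mapped to movements, here is a hedged sketch on synthetic data using an off-the-shelf ridge regression as a stand-in decoder. The study's actual algorithms, channel counts and features are not described here, so every detail in this snippet is an assumption made for illustration.

```python
# Hedged sketch: decode synthetic "nerve-signal features" into finger positions
# with a simple linear model. Not the study authors' actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))                        # 16 electrode-channel features per window
true_map = rng.normal(size=(16, 5))
y = X @ true_map + 0.1 * rng.normal(size=(2000, 5))    # 5 finger positions (synthetic)

decoder = Ridge(alpha=1.0).fit(X[:1500], y[:1500])     # brief calibration on a few minutes of data
print("decoding R^2:", decoder.score(X[1500:], y[1500:]))
```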

Amputees wearing the prosthetic hand were able to control each individual finger and swivel their thumbs, regardless of how recently they had lost their limb. Their nerve signals were recorded for a few minutes to calibrate the algorithms to their individual signals, but after that each implant worked straight away, without any need to recalibrate during the 300 days of testing, according to study co-leader Cynthia Chestek, an associate professor in biomedical engineering at the University of Michigan.

It's just a proof-of-concept study, so it requires further testing to validate the results. The researchers are recruiting amputees for an ongoing clinical trial, funded by DARPA and the National Institutes of Health.


Tying everything together Solving a Machine Learning problem in the Cloud (Part 4 of 4) – Microsoft – Channel 9

This is the final installment, part 4, of a four-part series that breaks up a talk I gave at the Toronto AI Meetup. Part 1, Part 2 and Part 3 were all about the foundations of machine learning, optimization, models, and even machine learning in the cloud. In this video I show an actual machine learning problem (see the GitHub repo for the code) that does the important job of distinguishing between tacos and burritos (an important problem, to be sure). The primary concept covered is MLOps, on both the machine learning side and the delivery side, in Azure Machine Learning and Azure DevOps respectively.

Hope you enjoy the final video of the series, Part 4! As always, feel free to send any feedback or add comments below if you have any questions. If you would like to see more of this style of content, let me know!


Department of Energy Announces $30 Million for New Research on Fusion Energy | Department – Energy.gov

Research Will Include Artificial Intelligence and Machine Learning Approaches as well as Fundamental Theory & Simulation

WASHINGTON, D.C. – Today, the U.S. Department of Energy (DOE) announced a plan to provide $30 million for new research on fusion energy.

This funding will provide $17 million for research focused specifically on artificial intelligence (AI) and machine learning (ML) approaches for prediction of key plasma phenomena, management of facility operations, and accelerated discovery through data science, among other topics.

An additional $13 million under a separate funding opportunity will be devoted to fundamental fusion theory research, including computer modeling and simulation, focused on factors affecting the behavior of hot plasmas confined by magnetic fields in fusion reactors.

"Recent advancements in Artificial Intelligence and Machine Learning technologies can bring new, transformative approaches to tackling fusion energy theories and challenges," said Secretary of Energy Dan Brouillette. "The research funded under these initiatives will be integral to overcoming important barriers to the development of fusion as a practical energy source."

"By allocating $30 million towards fusion energy, the Department of Energy is continuing its commitment to advance scientific research and U.S. global competitiveness," said Under Secretary for Science Paul Dabbar. "This funding only emphasizes our support for artificial intelligence and machine learning capabilities."

Applications for the AI/ML funding are open to national laboratories, universities, nonprofits, and private sector companies, working either singly or with multiple institutional partners. Total funding planned for the program is $17 million for projects of two to three years in duration, with $7 million available in FY 2020 and outyear funding contingent on congressional appropriations.

Applications for the theory funding are open to universities, nonprofits, and private sector companies. Funding is expected to be in the form of three-year grants. Total planned funding will be up to $13 million over three years, with up to $7 million available in FY 2020 and outyear funding contingent on congressional appropriations.

The two separate DOE Funding Opportunity Announcements, along with a companion national laboratory call for the AI/ML research, are to be found on the funding opportunities page of the Office of Fusion Energy Sciences within the Department's Office of Science.

###


Improving your Accounts Payable Process with Machine Learning in D365 FO and AX – MSDynamicsWorld.com

Everywhere you look there's another article written about machine learning and automation. You understand the concepts but aren't sure how they apply to your day-to-day job.

If you work with Dynamics 365 Finance and Operations or AX in a Finance or Accounts Payable role, you probably say to yourself, "There's gotta be a better way to do this." But with your limited time and resources, the prospect of modernizing your AP processes seems unrealistic right now.

If this describes you, then don't sweat! We've done all the legwork to bring machine learning to AP, specifically for companies using Dynamics 365 or AX.

To learn about our findings, join us on Wednesday March 25th at any of three times for our "Improving your Accounts Payable Process with Machine Learning" webinar.


Machine learning and the power of big data can help achieve stronger investment decisions – BNNBloomberg.ca

Will machines rise against us?

Sarah Ryerson, President of TMX Datalinx, is certain we don't need to worry about that. And it's safe to say we can trust her opinion: data is her specialty, and she spent five years at Google before joining TMX.

She applies her experience on Bay Street by helping traders, investors and analysts mine the avalanche of data that pours out of TMX every day.

If information is power, what will we be doing with data in the future?

Ryerson has the answer, explaining that we will be mining data for patterns and signals that will help us draw new insights and allow us to make better investment decisions.

Ryerson is bringing real-time, historical and alternative data together for TMX clients. It's all about picking up the signals and patterns that the combined data set will deliver.

She also affirms that she is aiming to make this information more accessible. This will be done through platforms where investors can do their own analysis, via easy-to-use distribution channels where they can get the data they want through customized queries. Ryerson notes, "Machine learning came into its own because we now have the computing power and available data for that iterate-and-learn opportunity."

Ryerson knows that for savvy investors to get ahead with algorithms, machine learning or artificial intelligence (AI), they need more than buy-and-sell data. This could be weather data, pricing data, sentiment data from social media or other alternative data. "When you combine techniques with the vast amounts of data we have, that's where we can derive new insights from combinations of data we haven't been able to analyze before."
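As a toy illustration of combining conventional and alternative data, the sketch below joins a made-up price table with a made-up sentiment table and fits a simple regression. The column names and figures are invented, and the approach is deliberately far simpler than anything a real data platform would deploy.

```python
# Illustrative sketch: merge price history with an alternative dataset
# (social-media sentiment), then test whether the new signal carries information.
import pandas as pd
from sklearn.linear_model import LinearRegression

prices = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=5, freq="D"),
    "close": [10.0, 10.2, 10.1, 10.6, 10.4],
})
sentiment = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=5, freq="D"),
    "sentiment_score": [0.1, 0.3, -0.2, 0.5, 0.0],
})

combined = prices.merge(sentiment, on="date")                    # one table, multiple sources
combined["next_return"] = combined["close"].pct_change().shift(-1)
features = combined.dropna()

model = LinearRegression().fit(features[["sentiment_score"]], features["next_return"])
print(model.coef_)   # a first look at whether the alternative signal predicts returns
```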

One of the most important things data scientists realize about AI is that algorithms can't be black boxes. The analysts and investors using them need transparency to understand why an algorithm is advising them to buy, sell or hold.

Looking further into the future, Ryerson believes, "We will be seeing more data and better investment decisions because of the insights we're getting from a combined set of data."

That's a lot of data to dissect!


Machine Learning at the Push of a Button – EE Journal

Physician, heal thyself – Luke 4:23

My Thermos bottle keeps hot drinks hot and cold drinks cold. How does it know?

An electrical engineer would probably design a Thermos with a toggle switch (HOT and COLD), or a big temperature dial, or – if you work in Cupertino – an LCD display, touchpad, RTOS, and proprietary cable interface. Thankfully, real vacuum flasks take care of themselves with no user input at all. They just work.

It would sure be nice if new AI-enabled IoT devices could do the same thing. Instead of learning all about AI and ML (and the differences between the two), and learning how to code neural nets, and how to train them, and what type of data they require, and how to provision the hardware, etc., it'd be great if they just somehow knew what to do. Now that would be real machine learning.

Guess what? A small French company thinks it has developed that very trick. It uses machine learning to teach machine learning. To machines. Without a lot of user input. It takes the mystery, mastery, and mythology out of ML, while allowing engineers and programmers to create smart devices with little or no training.

The company is Cartesiam and the product is called NanoEdge AI Studio. It's a software-only tool that cranks out learning and inference code for ARM Cortex-M-based devices, sort of like an IDE for ML. The user interface is pretty to look at and has only a few virtual knobs and dials that you get to twist. All the rest is automatic. Under the right circumstances, it's even free.

Cartesiam's thesis is that ML is hard, and that developing embedded AI requires special skills that most of us don't have. You could hire a qualified data scientist to analyze your system and develop a good model, but such specialists are hard to find and expensive when they're available. Plus, your new hire will probably need a year or so to complete their analysis, and that's before you start coding or even know what sort of hardware you'll need.

Instead, Cartesiam figures that most smart IoT devices have certain things in common and don't need their own full-time, dedicated data scientist to figure things out, just like you don't need a compiler expert to write C code or a physicist to draw a schematic. Let the tool do the work.

The company uses preventive motor maintenance as an example. Say you want to predict when a motor will wear out and fail. You could simply schedule replacement every few thousand hours (the equivalent of a regular 5000-mile oil change in your car), or you could be smart and instrument the motor and try to sense impending failures. But what sensors would you use, and how exactly would they detect a failure? What does a motor failure look like, anyway?

With NanoEdge AI Studio, you give it some samples of good data and some samples of bad data, and let it learn the difference. It then builds a model based on your criteria and emits code that you link into your system. Done.

You get to tweak the knobs for MCU type, RAM size, and type of sensor. In this case, a vibration sensor/accelerometer would be appropriate, and the data samples can be gathered in real time or canned; it doesn't matter. You can also dial in the level of accuracy and the level of confidence in the model. These last two trade off precision for memory footprint.
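For a sense of what "learning the difference between good and bad samples" can look like in principle, here is a generic Python sketch on synthetic vibration windows. It is not Cartesiam's library, its API, or its generated code; the data, feature layout, and classifier choice are assumptions made purely for illustration.

```python
# Conceptual sketch of the underlying idea: learn to separate "good" and "bad"
# vibration samples from labeled examples, then score a fresh window from the motor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
good = rng.normal(0.0, 1.0, size=(200, 64))        # accelerometer windows from a healthy motor
bad = rng.normal(0.0, 1.0, size=(200, 64)) + 0.8   # windows with an abnormal vibration signature

X = np.vstack([good, bad])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_window = rng.normal(0.8, 1.0, size=(1, 64))    # fresh sample from the running motor
print("failure risk:", clf.predict_proba(new_window)[0, 1])
```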

NanoEdge Studio includes a software simulator, so you can test out your code without burning any ROMs or downloading to a prototype board. That should make it quicker to test out various inference models to get the right balance. Cartesiam says it can produce more than 500 million different ML libraries, so it's not simply a cut-and-paste tool.

As another example, Cartesiam described one customer designing a safety alarm for swimming pools. They spent days tossing small children into variously shaped pools to collect data, and then several months analyzing the data to tease out the distinguishing characteristics of a good splash versus one that should trigger the alarm. NanoEdge AI Studio accomplished the latter task in minutes and was just as accurate. Yet another customer uses it to detect when a vacuum cleaner bag needs emptying. Such is the world of smart device design.

The overarching theme here is that users dont have to know much of anything about machine learning, neural nets, inference, and other arcana. Just throw data at it and let the tool figure it out. Like any EDA tool, it trades abstraction for productivity.

In today's environment, that's a good tradeoff. Experienced data scientists are few and far between. Moreover, you probably won't need their talents long-term. When the project is complete and you've got your detailed model, what then?

NanoEdge AI Studio is free to try, but deploying actual code in production costs money. Cartesiam describes the royalty as tens of cents to a few dollars, depending on volume. Sounds cheaper than hiring an ML specialist.


Is Machine Learning Always The Right Choice? – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

By: Mark Krupnik, PhD, Founder and CEO, Retalon

Since this article will probably come out during income tax season, let me start with the following example: Suppose we would like to build a program that calculates income tax for people. According to US federal income tax rules: For single filers, all income less than $9,875 is subject to a 10% tax rate. Therefore, if you have $9,900 in taxable income, the first $9,875 is subject to the 10% rate and the remaining $25 is subject to the tax rate of the next bracket (12%).

This is an example of rules or an algorithm (set of instructions) for a computer.
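That rule translates directly into a few lines of code. The sketch below encodes only the first two 2020 single-filer brackets; note that the 12% bracket's upper bound ($40,125) comes from the 2020 schedule rather than from the text above.

```python
# A direct coding of the bracket rule described above (first two 2020 single-filer brackets only).
def income_tax(taxable_income):
    brackets = [(9_875, 0.10), (40_125, 0.12)]   # (upper bound, rate)
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable_income > lower:
            tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

print(income_tax(9_900))   # 9,875 * 10% + 25 * 12% = 987.50 + 3.00 = 990.50
```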

Let's look at this from a formal, pragmatic point of view. A computer equipped with this program can achieve the goal (calculate tax) without human help. So technically, this can be classified as Artificial Intelligence.

But is it cool enough? No. It's not. That is why many people would not consider it part of AI. They may say that if we already know how to do a certain thing, then the process cannot be considered real intelligence. This is a phenomenon that has become known as the AI Effect. One of the first references is known as Tesler's Theorem, which says: "AI is whatever hasn't been done yet."

In the eyes of some people, the cool part of AI is associated with machine learning, and more specifically with deep learning, which requires no instructions and utilizes Neural Nets to learn everything by itself, like a human brain.

The reality is that human development is a combination of multiple processes, including both instructions and Neural Net training, as well as many other things.

Let's take another simple example: if you work in a workshop on a complex project, you may need several tools, for instance a hammer, a screwdriver, pliers, etc. Of course, you can make up a task that can be solved by only using a hammer or only a screwdriver, but for most real-life projects you will likely need to use various tools in combination to a certain extent.

In the same manner, AI also consists of several tools (such as algorithms, supervised and unsupervised machine learning, etc.). Solving a real-life problem requires a combination of these tools, and depending on the task, they can be used in different proportions or not used at all.

There are and there will always be situations where each of these methods will be preferred over others.

For example, the tax calculation task described at the beginning of this article will probably not be delegated to machine learning. There are good reasons for this, for example:

- the solution of this problem does not depend on data
- the process should be controllable, observable, and 100% accurate (you can't just be 80% accurate on your income taxes)

However, the task of assessing income tax submissions to identify potential fraud is a perfect application for ML technologies.

Equipped with a number of well-labelled data inputs (age, gender, address, education, National Occupational Classification code, job title, salary, deductions, calculated tax, last year's tax, and many others) and using the same type of information available from millions of other people, ML models can quickly identify outliers.

What happens next? The outliers in the data are not necessarily all fraud. Data scientists will analyse anomalies and try to understand why these individuals were flagged. It is quite possible that they will find some additional factors that had to be considered (feature engineering), for example a split between tax on salary and tax on capital gains from investments. In this case, they would probably add an instruction to the computer to split this data set based on income type. At this very moment, we are not dealing with a pure ML model anymore (as the scientists just added an instruction), but rather with a combination of multiple AI tools.
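As a rough sketch of the screening workflow described above, the snippet below runs an unsupervised outlier detector over synthetic filings and adds one engineered feature splitting income by type. The column names, distributions, and detector choice are all assumptions for illustration, not a description of any tax authority's system.

```python
# Hedged sketch: flag unusual tax filings with an unsupervised outlier detector,
# including one engineered feature (share of income from capital gains).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
filings = pd.DataFrame({
    "salary_income": rng.normal(60_000, 15_000, 1_000),
    "capital_gains": rng.exponential(2_000, 1_000),
    "deductions": rng.normal(8_000, 2_000, 1_000),
    "calculated_tax": rng.normal(9_000, 2_500, 1_000),
})
# Feature engineering: the share of income that comes from capital gains
filings["capital_gain_share"] = filings["capital_gains"] / (
    filings["salary_income"] + filings["capital_gains"]
)

detector = IsolationForest(contamination=0.01, random_state=0).fit(filings)
filings["flagged"] = detector.predict(filings) == -1   # -1 marks an outlier worth reviewing
print(filings["flagged"].sum(), "filings flagged for analyst review")
```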

ML is a great technology that can already solve many specific tasks. It will certainly expand to many areas, due to its ability to adapt to change without major effort on the human side.

At the same time, those segments that can be solved using specific instructions and require a predictable outcome (financial calculations), or those involving high risk (human life, health, very expensive and risky projects), require more control, and if the algorithmic approach can provide it, it will still be used.

For practical reasons, solving any specific complex problem requires the right combination of tools and methods of both types.

About the Author:

Mark Krupnik, PhD, is the founder and CEO of Retalon, an award-winning provider of retail AI and predictive analytics solutions for planning, inventory optimization, merchandising, pricing and promotions. Mark is a leading expert on building and delivering state-of-the-art solutions for retailers.
