Machine Learning Engineer Interview Questions: What You Need to Know – Dice Insights

Along with artificial intelligence (A.I.), machine learning is regarded as one of the most in-demand areas for tech employment at the moment. Machine learning engineers develop algorithms and models that can adapt and learn from data. As a result, those who thrive in this discipline are generally skilled not only in computer science and programming, but also in statistics, data science, deep learning, and problem-solving.

According to Burning Glass, which collects and analyzes millions of job postings from across the country, the prospects for machine learning as an employer-desirable skill are quite good, with jobs projected to rise 36.5 percent over the next decade. Moreover, even those with relatively little machine-learning experience can pull down quite a solid median salary.

Dice Insights spoke with Oliver Sulley, director of Edge Tech Headhunters, to figure out how you should prepare, what you'll be asked during an interview, and what you should say to grab the gig.

"You're going to be faced potentially by bosses who don't necessarily know what it is that you're doing, or don't understand ML and have just been [told] they need to get it in the business," Sulley said. "They're being told by the transformation guys that they need to bring it on board."

As he explained, that means one of the key challenges facing machine learning engineers is determining what technology would be most beneficial to the employer, and being able to work as a cohesive team that may have been put together on very short notice.

"What a lot of companies are looking to do is take data they've collected and stored, and try and get them to build some sort of model that helps them predict what they can be doing in the future," Sulley said. "For example, how to make their stock leaner, or predicting trends that could come up over the year that would change their need for services that they offer."

Sulley notes that machine learning engineers are in rarefied air at the moment: it's a high-demand position, and lots of companies are eager to show they've brought machine learning specialists onboard.

"If they're confident in their skills, then a lot of the time they have to make sure the role is right for them," Sulley said. "It's more about the soft skills that are going to be important."

Many machine learning engineers are strong on the technical side, but they often have to interact with teams such as operations; as such, they need to be able to translate technical specifics into layman's terms and express how this data is going to benefit other areas of the company.

"Building those soft skills, and making sure people understand how you will work in a team, is just as important at this moment in time," Sulley added.

There are quite a few different roles for machine learning engineers, and so it's likely that all these questions could come up, but it will depend on the position. "We find questions with more practical experience are more common, and therefore will ask questions related to past work and the individual contributions engineers have made," Sulley said.


A lot of data engineering and machine learning roles involve working with different tech stacks, so it's hard to nail down a hard-and-fast set of skills; much depends on the company you're interviewing with. (If you're just starting out with machine learning, here are some resources that could prove useful.)

"For example, if it's a cloud-based role, a machine learning engineer is going to want to have experience with AWS and Azure; and for languages alone, Python and R are the most important, because that's what we see more and more in machine learning engineering," Sulley said. "For deployment, I'd say Docker, but it really depends on the person's background and what they're looking to get into."

Sulley said ideal machine learning candidates possess a really analytical mind, as well as a passion for thinking about the world in terms of statistics.

"Someone who can connect the dots and has a statistical mind, someone who has a head for numbers and who is interested in that outside of work, rather than someone who just considers it their job and what they do," he said.

As Burning Glass data shows, quite a few jobs now ask for machine-learning skills; if not essential, they're often a "nice to have" for many employers that are thinking ahead.

Sulley suggests the questions you ask should be all about the technology: it's about understanding what the companies are looking to build, what their vision is (and your potential contribution to it), and looking to see where your career will grow within that company.

"You want to figure out whether you'll have a clear progression forward," he said. "From that, you will understand how much work they're going to do with you. Find out what they're really excited about, and that will help you figure out whether you'll be a valued member of the team. It's a really exciting space, and they should be excited by the opportunities that come with bringing you onboard."

View original post here:
Machine Learning Engineer Interview Questions: What You Need to Know - Dice Insights

Will COVID-19 Create a Big Moment for AI and Machine Learning? – Dice Insights

COVID-19 will change how the majority of us live and work, at least in the short term. It's also creating a challenge for tech companies such as Facebook, Twitter, and Google that ordinarily rely on lots and lots of human labor to moderate content. Are A.I. and machine learning advanced enough to help these firms handle the disruption?

First, it's worth noting that, although Facebook has instituted a sweeping work-from-home policy in order to protect its workers (along with Google and a rising number of other firms), it initially required its contractors who moderate content to continue to come into the office. That situation only changed after protests, according to The Intercept.

Now, Facebook is paying those contractors while they sit at home, since the nature of their work (scanning people's posts for content that violates Facebook's terms of service) is extremely privacy-sensitive. Here's Facebook's statement:

For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons. We have taken precautions to protect our workers by cutting down the number of people in any given office, implementing recommended work from home globally, physically spreading people out at any given office and doing additional cleaning. Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams and will be working with our partners over the course of this week to send all contract workers who perform content review home, until further notice. Well ensure that all workers are paid during this time.

Facebook, Twitter, Reddit, and other companies are in the same proverbial boat: there's an increasing need to police their respective platforms, if only to eliminate fake news about COVID-19, but the workers who handle such tasks can't necessarily do so from home, especially on their personal laptops. The potential solution? Artificial intelligence (A.I.) and machine-learning algorithms meant to scan questionable content and make a decision about whether to eliminate it.
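To make the idea concrete, here is a minimal sketch of the kind of text classifier that underpins such systems, using scikit-learn. The sample posts, labels, and threshold are invented for illustration; production moderation models are vastly larger and more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = violates policy, 0 = fine
posts = [
    "Miracle cure stops the virus overnight, share now!",
    "Local clinic extends weekend testing hours",
    "Government hiding the truth, drink this to be immune",
    "Health officials publish updated handwashing guidance",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a classic baseline text classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; posts above a review threshold would be removed or escalated
prob = model.predict_proba(["This secret cure works overnight"])[0, 1]
print(f"Estimated violation probability: {prob:.2f}")
```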

Here's Google's statement on the matter, via its YouTube Creator Blog:

Our Community Guidelines enforcement today is based on a combination of people and technology: Machine learning helps detect potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we're taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.

To be fair, the tech industry has been heading in this direction for some time. Relying on armies of human beings to read through every piece of content on the web is expensive, time-consuming, and prone to error. But A.I. and machine learning are still nascent, despite the hype. Google itself, in the aforementioned blog posting, pointed out how its automated systems may flag the wrong videos. Facebook is also receiving criticism that its automated anti-spam system is whacking the wrong posts, including those that offer vital information on the spread of COVID-19.

If the COVID-19 crisis drags on, though, more companies will no doubt turn to automation as a potential solution to disruptions in their workflow and other processes. That will force a steep learning curve; again and again, the rollout of A.I. platforms has demonstrated that, while the potential of the technology is there, implementation is often a rough and expensive process. Just look at Google Duplex.


Nonetheless, an aggressive embrace of A.I. will also create more opportunities for those technologists who have mastered A.I. and machine-learning skills of any sort; these folks may find themselves tasked with figuring out how to automate core processes in order to keep businesses running.

Before the virus emerged, Burning Glass (which analyzes millions of job postings from across the U.S.) estimated that jobs involving A.I. would grow 40.1 percent over the next decade. That percentage could rise even higher if the crisis fundamentally alters how people across the world live and work. (The median salary for these positions is $105,007; for those with a PhD, it drifts up to $112,300.)

If you're trapped at home and have some time to learn a little bit more about A.I., it could be worth your time to explore online learning resources. For instance, there's a Google crash course in machine learning. Hacker Noon also offers an interesting breakdown of machine learning and artificial intelligence. Then there's Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods.

View post:
Will COVID-19 Create a Big Moment for AI and Machine Learning? - Dice Insights

Self-driving truck boss: 'Supervised machine learning doesn't live up to the hype. It isn't C-3PO, it's sophisticated pattern matching' – The Register

Roundup Let's get cracking with some machine-learning news.

Starsky Robotics is no more: Self-driving truck startup Starsky Robotics has shut down after running out of money and failing to raise more funds.

CEO Stefan Seltz-Axmacher bid a touching farewell to his upstart, founded in 2016, in a Medium post this month. He was upfront and honest about why Starsky failed: "Supervised machine learning doesn't live up to the hype," he declared. "It isn't actual artificial intelligence akin to C-3PO, it's a sophisticated pattern-matching tool."

Neural networks only learn to pick up on certain patterns after they are faced with millions of training examples. But driving is unpredictable, and the same route can differ day to day, depending on the weather or traffic conditions. Trying to model every scenario is not only impossible but expensive.

"In fact, the better your model, the harder it is to find robust data sets of novel edge cases. Additionally, the better your model, the more accurate the data you need to improve it," Seltz-Axmacher said.

"More time and money is needed to provide increasingly incremental improvements. Over time, only the most well-funded startups can afford to stay in the game," he said.

"Whenever someone says autonomy is ten years away, that's almost certainly what their thought is. There aren't many startups that can survive ten years without shipping, which means that almost no current autonomous team will ever ship AI decision makers if this is the case," he warned.

If Seltz-Axmacher is right, then we should start seeing smaller autonomous driving startups shutting down in the near future too. Watch this space.

Waymo to pause testing during Bay Area lockdown: Waymo, Google's self-driving car stablemate, announced it was pausing its operations in California to abide by the lockdown orders in place in Bay Area counties, including San Francisco, Santa Clara, San Mateo, Marin, Contra Costa, and Alameda. Businesses deemed non-essential were advised to close, and residents were told to stay at home, only popping out for things like buying groceries.

It will, however, continue to perform deliveries and trucking services for its riders and partners in Phoenix, Arizona. These drives will be entirely driverless to minimise the chance of spreading COVID-19.

Waymo also launched its Open Dataset Challenge, a contest in which developers compete to find solutions to a set of open self-driving perception problems based on Waymo's data.

Cash prizes are up for grabs, too: the winner can expect to pocket $15,000, second place gets $5,000, and third $2,000.

You can find out more details on the rules of the competition and how to enter here. The challenge is open until 31 May.

More free resources to fight COVID-19 with AI: Tech companies are trying to chip in and do what they can to help quell the coronavirus pandemic. Nvidia and Scale AI both offered free resources to help developers using machine learning to further COVID-19 research.

Nvidia is providing a free 90-day license to Parabricks, a software package that speeds up the process of analyzing genome sequences using GPUs. The rush is on to analyze the genetic information of people who have been infected with COVID-19 to find out how the disease spreads and which communities are most at risk. Sequencing genomes requires a lot of number crunching; Parabricks slashes the time needed to complete the task.

"Given the unprecedented spread of the pandemic, getting results in hours versus days could have an extraordinary impact on understanding the virus's evolution and the development of vaccines," it said this week.

Interested customers who have access to Nvidia's GPUs should fill out a form requesting access to Parabricks.

"Nvidia is inviting our family of partners to join us in matching this urgent effort to assist the research community. We're in discussions with cloud service providers and supercomputing centers to provide compute resources and access to Parabricks on their platforms."

Next up is Scale AI, the San Francisco-based startup focused on annotating data for machine learning models. It is offering its labeling services for free to any researcher working on a potential vaccine, or on tracking, containing, or diagnosing COVID-19.

"Given the scale of the pandemic, researchers should have every tool at their disposal as they try to track and counter this virus," it said in a statement.

Researchers have already shown how new machine learning techniques can help shed new light on this virus. But as with all new diseases, this work is much harder when there is so little existing data to go on.

In those situations, the role of well-annotated data to train models or diagnostic tools is even more critical. If you have a lot of data to analyse and think Scale AI could help, then apply for their help here.

PyTorch users, AWS has finally integrated the framework: Amazon has added PyTorch support to Amazon Elastic Inference, its service that lets users attach just the right amount of GPU acceleration to CPU instances rented through Amazon SageMaker and Amazon EC2 in order to run inference operations on machine learning models.

Amazon Elastic Inference works like this: instead of paying for expensive GPUs, users select the right amount of GPU-powered inference acceleration on top of cheaper CPUs to zip through the inference process.

In order to use the service, however, users will have to convert their PyTorch code into TorchScript, a serializable subset of PyTorch. "You can run your models in any production environment by converting PyTorch models into TorchScript," Amazon said this week. That code is then processed by an API in order to use Amazon Elastic Inference.

The instructions to convert PyTorch models into the right format for the service have been described here.
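As an illustration of what that conversion step looks like, here is a minimal sketch using the standard torch.jit API; the choice of model (a torchvision ResNet-18) and input shape are hypothetical stand-ins, not Amazon's example:

```python
import torch
import torchvision.models as models

# A pretrained model stands in for whatever the user has built
model = models.resnet18(pretrained=True)
model.eval()

# Tracing records the operations performed on a dummy input and
# produces a serializable TorchScript module
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
traced.save("resnet18_traced.pt")

# The saved artifact can later be loaded for inference without the
# original Python model definition
loaded = torch.jit.load("resnet18_traced.pt")
with torch.no_grad():
    output = loaded(example_input)
print(output.shape)  # torch.Size([1, 1000])
```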


More:
Self-driving truck boss: 'Supervised machine learning doesn't live up to the hype. It isn't C-3PO, it's sophisticated pattern matching' - The Register

Data to the Rescue! Predicting and Preventing Accidents at Sea – JAXenter

Watch Dr. Yonit Hoffman's Machine Learning Conference session

Accidents at sea happen all the time. Their costs in terms of lives, money, and environmental destruction are huge. Wouldn't it be great if they could be predicted and perhaps prevented? Dr. Yonit Hoffman's Machine Learning Conference session discusses new ways of preventing sea accidents with the power of data science.

Does machine learning hold the key to preventing accidents at sea?

With more than 350 years of history, the marine insurance industry was the first data-science profession to try to predict accidents and estimate future risk. Yet the old ways no longer work; new waves of data and algorithms can offer significant improvements and are going to revolutionise the industry.

In her Machine Learning Conference session, Dr. Yonit Hoffman will show that it is now possible to predict accidents, and how data on a ship's behaviour, such as location, speed, maps, and weather, can help. She will show how fragments of information on ship movements can be gathered and taken all the way to machine learning models. In this session, she discusses the challenges, including introducing machine learning to an industry that still uses paper and quills (yes, really!) and explaining the models using SHAP.
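For readers unfamiliar with SHAP, here is a minimal sketch of how such explanations are typically produced with the open-source shap library; the model, features, and data below are invented stand-ins, not Windward's actual system:

```python
import numpy as np
import pandas as pd
import shap
import xgboost

# Hypothetical per-ship features of the kind described above
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "avg_speed_knots": rng.normal(12, 3, 500),
    "port_visits_last_90d": rng.integers(0, 20, 500),
    "pct_time_in_storm": rng.uniform(0, 0.3, 500),
})
# Hypothetical label: 1 if the ship later had an accident
y = (3 * X["pct_time_in_storm"] + rng.normal(0, 0.3, 500) > 0.5).astype(int)

# Train a gradient-boosted model, then attribute each prediction
# to the input features with SHAP values
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contribution to the first ship's predicted risk
print(dict(zip(X.columns, shap_values[0])))
```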

Dr. Yonit Hoffman is a Senior Data Scientist at Windward, a world leader in maritime risk analytics. Before investigating supertanker accidents, she researched human cells and cancer at the Weizmann Institute, where she received her PhD and MSc in Bioinformatics. Yonit also holds a BSc in computer science and biology from Tel Aviv University.

Visit link:
Data to the Rescue! Predicting and Preventing Accidents at Sea - JAXenter

The Well-matched Combo of Quantum Computing and Machine Learning – Analytics Insight

The pace of improvement in quantum computing mirrors the fast advances made in AI and machine learning. It is natural to ask whether quantum technologies could boost learning algorithms: this field of inquiry is called quantum-enhanced machine learning.

Quantum computers are devices that work based on principles from quantum physics. The computers we currently use are built using transistors, and their information is stored as binary 0s and 1s. Quantum computers are built using subatomic particles called quantum bits, qubits for short, which can be in multiple states simultaneously. The principal advantage of quantum computers is that they can perform exceptionally complex tasks at enormous speed. In this way, they can solve problems that are not presently feasible.
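As a tiny concrete illustration of that "multiple states simultaneously" idea (the article itself includes no code; this sketch uses the open-source Qiskit library), a single qubit put through a Hadamard gate ends up with equal probability of being measured as 0 or 1:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# One qubit, one gate: the Hadamard puts the qubit into an equal
# superposition of the 0 and 1 basis states
qc = QuantumCircuit(1)
qc.h(0)

# Inspect the resulting state vector and measurement probabilities
state = Statevector.from_instruction(qc)
print(state.probabilities())  # [0.5 0.5]: both outcomes equally likely
```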

The most significant advantage of quantum computers is the speed at which they can solve complex problems. While they're lightning fast at what they do, they don't provide the ability to solve problems from undecidable or NP-hard problem classes. There is a set of problems that quantum computing will be able to solve, but it's not applicable to all computing problems.

Ordinarily, the problem set that quantum computers are good at solving involves number or data crunching with an immense number of inputs, for example, complex optimisation problems and communication systems analysis problems: calculations that would normally take supercomputers days, years, even billions of years to brute force.

The application routinely mentioned as one that quantum computers will be able to solve immediately is breaking strong RSA encryption. A recent report by the Microsoft Quantum Team suggests this could well be the case, estimating that it would be feasible with a quantum computer of around 2,330 qubits.

Optimisation applications leading the pack makes sense, since they are currently solved largely using brute force and raw computing power. If quantum computers can rapidly survey all the potential solutions, an optimal solution can become obvious more quickly. Optimisation also stands out because it's significantly more intuitive and easier to grasp.

The community of people who can apply optimisation and robust optimisation is a whole lot bigger. In the machine learning community, the overlap between the technology and the requirements is technical and mostly relevant to researchers; what's more, there are far fewer statisticians in the world than there are developers.

Specifically, the complexity of integrating quantum computing into the machine learning workflow presents an impediment. For machine learning professionals and researchers, it's fairly easy to work out how to program the system. Fitting that into a machine learning workflow is more challenging, since machine learning programs are becoming very complex. However, teams have published a good deal of research on how to incorporate it sensibly into a training workflow.

For now, though, ML experts need someone else to deal with the quantum computing part: machine learning practitioners are looking for someone else to do the legwork of building the systems out and demonstrating that they can fit.

In any case, the intersection of these two fields goes much further than that, and it's not simply AI applications that can benefit. There is a meeting point where quantum computers run machine learning algorithms and conventional machine learning methods are used to evaluate quantum computers. This area of research is developing at such speed that it has produced a whole new field called quantum machine learning.

This interdisciplinary field is incredibly new, however. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and programming challenges are still significant, and the development of fully functional quantum computers is still far off.

The future of AI accelerated by quantum computing looks bright, with real-time, human-like behaviour an almost inevitable result. Quantum computing will be capable of taking care of complex AI problems and obtaining multiple solutions to complex problems simultaneously. This will result in artificial intelligence performing complex tasks in more human-like ways. Likewise, robots that can make optimised decisions in real time in practical circumstances will be possible once we can use quantum computers based on artificial intelligence.

How far away is this future? Considering that just a handful of the world's top organizations and universities are currently building (genuinely immense) quantum computers that still lack the processing power required, having a multitude of robots mimicking humans running about is probably a fair way off, which may comfort some people and disappoint others. Building just one, however? Perhaps not so far away.

Quantum computing and machine learning are incredibly well matched: the features the technology offers and the requirements of the field are extremely close. Those features are difficult to reproduce with a traditional computer, yet you get them natively from a quantum computer, so the fit can hardly be accidental. It will simply take some time for people to find the right techniques for integrating it, and then for the technology to embed into that space productively.

Read more here:
The Well-matched Combo of Quantum Computing and Machine Learning - Analytics Insight

With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia, and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set "the most extensive collection of machine readable coronavirus literature to date." Kratsios characterized the project as a "call to action" for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To come up with guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms, and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory, and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open-access archives for pre-print health sciences and biology research.

To continue reading this article, click here.

Read more:
With Launch of COVID-19 Data Hub, The White House Issues A 'Call To Action' For AI Researchers - Machine Learning Times - machine learning & data...

Novi Releases v2.0 of Prediction Engine, Adding Critical Economics to Its Machine Learning Outputs – Benzinga

AUSTIN, Texas, March 23, 2020 /PRNewswire-PRWeb/ --Novi Labs ("Novi") today announced the release of Novi Prediction Engine version 2.0. This provides critical economic data to E&P workflows such as well planning or acquisition & divestitures. Novi customers can now run a wide range of large-scale scenarios in minutes and get immediate feedback on the economic feasibility of each plan. As price headwinds face the industry, having the ability to quickly and easily evaluate hundreds of scenarios allows operators to efficiently allocate capital.

In addition to the economic outputs, Novi Prediction Engine 2.0 also includes new features targeting enhanced usability and increased efficiency. Novi is now publishing confidence intervals as a standard output for every prediction. This allows customers to understand how confident the model is in each prediction it makes, which is a critical decision-making criterion. A video demonstration of Novi Prediction Engine version 2.0 is available at https://novilabs.com/prediction-engine-v2/.
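Novi hasn't published how its confidence intervals are computed; as a generic sketch of the idea, one common way to put bounds on a prediction is quantile regression, shown here with scikit-learn and invented well features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical well features: [lateral_length_ft, proppant_lbs_per_ft]
rng = np.random.default_rng(1)
X = rng.uniform([5000, 1000], [12000, 3000], size=(500, 2))
# Hypothetical production proxy with noise
y = 0.05 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 50, 500)

# Fit one model per quantile: lower bound, median, upper bound
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}

well = np.array([[9000, 2000]])
lo, mid, hi = (models[q].predict(well)[0] for q in (0.1, 0.5, 0.9))
print(f"P10-P90 interval: {lo:.0f} to {hi:.0f} (median {mid:.0f})")
```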

"With the integration of economic outputs and confidence intervals into Novi Prediction Engine, customers have increased leverage, transparency and certainty in what the Novi models are providing in support of their business decisions. This form of rapid scenario driven testing that is unlocked by the Novi platform is vital in today's uncertain market," said Scott Sherwood, Novi's CEO.

About Novi Labs: Novi Labs, Inc. ("Novi") is the leading developer of artificial-intelligence-driven business applications that help the oil & gas industry optimize the economic value of drilling programs and acquisition & divestiture decisions. Leveraging cutting-edge data science, Novi delivers intuitive analytics that simplify complex decisions with the actionable data and insights needed to optimize capital allocation. Novi was founded in 2014 and is headquartered in Austin, TX. For more information, please visit http://www.novilabs.com.

SOURCE Novi Labs

Original post:
Novi Releases v2.0 of Prediction Engine, Adding Critical Economics to Its Machine Learning Outputs - Benzinga

University Students Are Learning To Collaborate on AI Projects – Forbes

Penn State's Nittany AI Challenge is teaching students the true meaning of collaboration in the age of Artificial Intelligence.


This year, artificial intelligence is the buzzword. On university campuses, students who just graduated high school are checking out the latest computer science course offerings to see if they can take classes in machine learning. The truth about the age of Artificial Intelligence has caught many university administrators' attention: in the age of AI, to be successful, everyone, no matter their job, skill set, or major, will at some point encounter AI in their work and their life. Penn State saw the benefits of working on AI projects early, specifically when it comes to teamwork and collaboration. Since 2017, their successful Nittany AI Challenge has helped each year to teach students what it means to collaborate in the age of Artificial Intelligence.

Every university has challenges. Students bring a unique perspective and understanding of these challenges. The Nittany AI Challenge was created to provide a framework and support structure to enable students to form teams and collaborate on ideas that could address a problem or opportunity, using AI technology as the enabler. The Nittany AI Challenge is our innovation engine, ultimately putting students on stage to demonstrate how AI and machine learning can be leveraged to have a positive impact on the university.

The Nittany AI Challenge runs for 8 months each year. It has multiple phases, such as the idea phase, the prototype phase, and the MVP phase. In the end, there's a pitch competition among 5 to 10 teams, which compete for a pool of $25,000. The challenge incentivizes students to keep going by awarding the best teams at each phase of the competition with another combined total of $25,000 during the 8 months of competition. By the time pitching comes around for the top 5 to 10 teams, these teams have not only figured out how they can work together as a team, but they have also experienced what it means to receive funding.

This year, the Nittany AI Challenge has expanded from asking students to solve the university's problems using AI to broader categories based on the theme of "AI for Good." Students are competing in additional categories such as humanitarianism, healthcare, and sustainability/climate change.

In the last two years, students formed teams among friends within their circle. But as the competition matured, there's now an online system that allows students to sign up for teams.

Students often form teams with students from different backgrounds who major in different disciplines, based on their shared interest in a project. Christie Warren, the app designer from the LionPlanner team, helped her team create a 4-year degree planning tool that won 2018's competition. She credits the competition for giving her a clear path to a career in app design and teaching her how to collaborate with developers.

"For me the biggest learning curve was learning to work alongside developers, as far as when to start to go into the high-fidelity designs, wait for people to figure out the features that need to be developed, etc. Just looking at my designs and being really open to improvements and going through iterations of the design with the team helped me overcome the learning curve."

Early on, technology companies such as Microsoft, Google Cloud, IBM Watson, and Amazon Web Services recognized the value of an on-campus AI competition such as the Nittany AI Challenge to provide teamwork education to students before they embark on internships with technology companies. They've been sponsoring the competition since its inception.

"Both the students and us from Microsoft benefit from the time working together, in that we learn about each other's culture, needs and aspirations. Challenges like the Nittany AI Challenge highlight that studying in Higher Education should be a mix of learning and fun. If we can help the students learn and enjoy the experience then we also help them foster a positive outlook about their future of work."

While having fun, some students, like Michael D. Roos, project manager and backend developer on the LionPlanner team, have seen synergy between their internships and their projects for the Nittany AI competition. He credits the competition with giving him a pathway to success beyond simply a college education. He's a lot more confident stepping out into the real world, whether it's working for a startup or a large technology company, because of the experience gained.

"I was doing my internship with Microsoft during a part of the competition. Some of the technology skills I learned at my internship I could then apply to my project for the competition. Also, having the cumulative experience of working on the project for the Nittany AI competition before going into my internship helped me with my internship. Even though I was interning at Microsoft, my team had similar startup vibes as the competition, and my role on the team was similar to my role on the project. I felt I had a head start in that role because of my experience in the competition."

One of the biggest myths the Nittany AI Challenge has helped to debunk is that AI implementations require only the skills of technologists. While computer science students who take a keen interest in machine learning and AI are central to every project inside the Nittany AI Challenge, it's often the visionary project managers, the creative designers, and the students majoring in other disciplines, such as healthcare, biological sciences, and business, who end up making the most impactful contributions to the team.

"The AI Alliance makes the challenge really accessible. For people like me who don't know AI, we can learn AI along the way."

The LionPlanner team that won the competition in 2018 attributes its success mainly to the outstanding design that won over the judges. Christie, the app designer on the team, credits her success to the way the team collaborated, which enabled her to communicate with developers effectively.


Every member of the Nyansapo team, which is trying to bring English education to remote parts of Kenya via NLP-based learning software, attributes the team's success to the energy and the motivation behind the vision of the project. Because everyone felt strongly about the vision, even though they have one of the biggest teams in the competition, everyone is pulling together and collaborating.

"I really like to learn by doing. Everybody on the team joined, not just because they had something to offer, but because the vision was exciting. We are all behind this problem of education inequality in Kenya. We all want to get involved to solve this problem. We are this excited to want to go the extra step."

Not only does the Nittany AI Challenge teach students the art of interdisciplinary collaboration, but it also teaches students time management, stress management, and how to overcome difficulties. During the competition, students are often juggling difficult coursework, internships, and other extracurricular activities. They often feel stressed and overwhelmed. This can pose tremendous challenges for team communication. But, as many students pointed out to me, these challenges are opportunities to learn how to work together.

"There was a difficult moment yesterday in between my classes, where I had to schedule a meeting with Edward to discuss the app interface later during the day; at times, everything can feel a bit chaotic. In the back of my head, when I think about the vision of our project, how much I'm learning on the project, and how I'm working with all my friends, these are the things that keep me going even through hard times."

One of the projects from the Nittany AI Challenge that the university is integrating into its systems is the LionPlanner tool. It uses AI algorithms to help students match their profiles with clubs and extracurricular activities they might be interested in. It also helps students plan their courses to customize their degree, allowing them to complete it on time while keeping the cost of their degree as low as possible.
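The LionPlanner team hasn't published its matching method here; as a rough sketch of how profile-to-club matching can work, a classic baseline is TF-IDF vectors plus cosine similarity (all names and descriptions below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical club descriptions and a student profile
clubs = {
    "Robotics Club": "robotics electronics programming competitions hardware",
    "Data Science Society": "machine learning statistics python data analysis",
    "Outing Club": "hiking camping climbing outdoors weekend trips",
}
student_profile = "interested in python, statistics and machine learning projects"

# Embed all texts in the same TF-IDF space
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(clubs.values()) + [student_profile])

# Rank clubs by similarity to the student profile (the last row)
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(clubs, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```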

The students who worked on the project are now working to create a Prospective Student Planning Tool that can integrate into the University Admissions Office systems to be used by transfer students.

Currently, in the U.S., there's a skills gap of almost 1.5 million high-tech jobs. Companies are having a hard time hiring people who have the skills to work in innovative companies. We now have coding camps, apprenticeships, and remote coding platforms.

Why not also have university-sponsored AI challenges where students can demonstrate their potential and abilities to collaborate?

The Nittany AI Challenge from Penn State presents a unique solution to a problem that many employers in the age of innovation are trying to solve. By sitting in the audience as judges, companies can follow the teams' progress and watch students shine in their respective areas. Students are not pitching their skills. Students are pitching their work products. They are showing what they can do in real time over 8 months.

This could be a new way for companies to recruit. We have NFL drafts. Why not have drafts for star players on these AI teams who work especially well with others?

This year, Penn State introduced the Nittany AI Associates program where students can continue their work from the Nittany AI Challenge so that they can develop their ideas further.

So while the challenge is the "Innovation Engine," the Nittany AI Associates program provides students the opportunity to work on managed projects with an actual client, funding to reduce their debt (paid internships), and a low-cost, low-risk avenue for the university (and other clients) to innovate, while providing AI knowledge transfer to client staff (the student becomes the teacher).

In the age of AI, education is becoming more multidisciplinary. When higher education institutions can evolve the way that they teach their students to enable both innovation and collaboration, then the potential they unleash in their graduates can have an exponential effect on their career and the companies that hire them. Creating competitions and collaborative work projects such as the Nittany AI Challenge within the university that fosters win-win thinking might just be the key to the type of innovations we need in higher education to keep up in the age of AI.

Originally posted here:
University Students Are Learning To Collaborate on AI Projects - Forbes

Structure-based AI tool can predict wide range of very different reactions – Chemistry World

New software has been created that can predict a wide range of reaction outcomes but is also more flexible than other programs when it comes to dealing with completely different chemical problems. The machine-learning platform, which uses structure-based molecular representations instead of big reaction-based datasets, could find diverse applications in organic chemistry.

Although machine-learning methods have been widely used to predict the molecular properties and biological activities of target molecules, their application in predicting reaction outcomes has been limited because current models usually can't be transferred to different problems. Instead, complex parameterisation is required for each individual case to achieve good results. Researchers in Germany are now reporting a general approach that overcomes this limitation.

"Previous models for accurately predicting reaction results have been highly complex and problem-specific," says Frank Glorius of the University of Münster, Germany, who led the study. "They are mostly based on a previously gained understanding of the underlying processes and cannot be transferred to other problems. In our approach, we use a universal representation of the involved compounds, which is solely based on their molecular structures. This allows for a general applicability of our program to diverse problem sets."

The new tool is based on the assumption that reactivity can be directly derived from a molecule's structure and uses an input based on multiple fingerprint features as an all-round molecular representation. Frederik Sandfort, who also participated in the research, explains that organic compounds can be represented as graphs on which simple structural (yes/no) queries can be carried out. "Fingerprints are number sequences based on the combination of many such successive queries," he says. "They were originally developed for structural similarity searches and were proven to be well-suited for application in computational models. We use a large number of different fingerprints to represent the molecular structure of each compound as accurately as possible."
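As a minimal sketch of what such a fingerprint looks like in practice, here is the idea expressed with the open-source RDKit library (the molecule and parameters are arbitrary illustrations; the paper's exact fingerprint set is more elaborate):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Aspirin, as an arbitrary example molecule
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

# Morgan (circular) fingerprint: each bit answers a structural
# yes/no query about the atom environments in the molecule
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

# The bit vector becomes a plain numeric feature vector for any ML model
features = list(fp)
print(sum(features), "bits set out of", len(features))
```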

Glorius points out that their platform is very versatile. "While our model can be used to predict molecular properties, its most important application is the accurate prediction of reaction results," he says. "We could predict enantioselectivities and yields with comparable accuracy to previous problem-specific models. Furthermore, the model was applied to predicting relative conversion based on a high-throughput data set which was never tackled using machine learning before."

The program is also easy to use, the researchers say. "It only requires the input data in a very simple form and some problem-specific settings," explains Sandfort. He adds that the tool is already online and will be updated further with the team's most recent developments.

Robert Paton at Colorado State University and the Center for Computer Assisted Synthesis, US, who was not involved in the study, notes that machine-learning methods are being increasingly used to identify patterns in data that can help to predict the outcome of experiments. "Chemists have managed to harness these techniques by converting molecular structures into vectors of numbers that can then be passed to learning algorithms," he says. "Representations using information only from a molecule's atoms and their connectivity are agnostic to the particular reaction and as a result may be used across multiple reaction types for different types of predictions. Future developments in interpreting these predictions, a challenge shared by all machine learning approaches, will be valuable."

Go here to see the original:
Structure-based AI tool can predict wide range of very different reactions - Chemistry World

Artificial intelligence for fraud detection is bound to save billions – ZME Science

Fraud mitigation is one of the most sought-after artificial intelligence (AI) services because it can provide an immediate return on investment. Already, many companies are experiencing lucrative profits thanks to AI and machine learning (ML) systems that detect and prevent fraud in real-time.

According to a new report, Highmark Inc.'s Financial Investigations and Provider Review (FIPR) department generated $260 million in savings that would have otherwise been lost to fraud, waste, and abuse in 2019. In the last five years, the company saved $850 million.

"We know the overwhelming majority of providers do the right thing. But we also know year after year millions of health care dollars are lost to fraud, waste and abuse," said Melissa Anderson, executive vice president and chief audit and compliance officer, Highmark Health. "By using technology and working with other Blue Plans and law enforcement, we have continually evolved our processes and are proud to be among the best nationally."

FIPR detects fraud across its clients' services with the help of an internal team made up of investigators, accountants, and programmers, as well as seasoned professionals with an eye for unusual activity, such as registered nurses and former law enforcement agents. Human audits performed to detect unusual claims and assess the appropriateness of provider payments are used as training data for AI systems, which can adapt and react more rapidly to suspicious changes in consumer behavior.
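In machine learning terms, those audit outcomes become labels for supervised training. A minimal sketch of the idea with scikit-learn follows; the claim features, labels, and model below are invented for illustration, not Highmark's system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical claim features: [billed_amount, procedures_per_visit, patient_age]
rng = np.random.default_rng(7)
X = rng.normal(loc=[500, 3, 50], scale=[200, 1.5, 15], size=(2000, 3))
# Hypothetical labels from past human audits: 1 = claim found improper
y = (X[:, 0] + 150 * X[:, 1] + rng.normal(0, 100, 2000) > 1100).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# New claims with a high predicted probability of being improper
# would be queued for human investigators to review
print("Held-out accuracy:", clf.score(X_test, y_test))
```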

As fraudulent actors have become increasingly aggressive and cunning with their tactics, organizations are looking to AI to mitigate rising threats.

"We know it is much easier to stop these bad actors before the money goes out the door than to pay and have to chase them," said Kurt Spear, vice president of financial investigations at Highmark Inc.

Elsewhere, Teradata, an AI firm specialized in selling fraud detection solutions to banks, claims in a case study that it helped Danske Bank reduce its false positives by 60% and increase real fraud detection by 50%.

Other service operators are looking to AI fraud detection with a keen eye, especially in the health care sector. A recent survey performed by Optum found that 43% of health industry leaders said they strongly agree that AI will become an integral part of detecting telehealth fraud, waste, or abuse in reimbursement.

In fact, AI spending is growing tremendously, with total operating spending set to reach $15 billion by 2024; the most sought-after solutions are network optimization and fraud mitigation. According to the Association of Certified Fraud Examiners' (ACFE) inaugural Anti-Fraud Technology Benchmarking Report, the amount organizations are expected to spend on AI and machine learning to reduce online fraud is expected to triple by 2021.

Mitigating fraud in healthcare would be a boon for an industry that is plagued with many structural inefficiencies.

The United States spends about $3.5 trillion on healthcare-related services every year. This staggering sum corresponds to about 18% of the country's GDP and is more than twice the average among developed countries. However, despite this tremendous spending, healthcare service quality is lacking. According to a now-famous 2017 study, the U.S. has fewer hospital beds and doctors per capita than any other developed country.

A 2019 study found that the country's healthcare system is incredibly inefficient, burning through roughly 25% of all its spending, which essentially goes to waste: that's $760 billion annually in the best-case scenario, and up to $935 billion.

Most money is being wasted due to unnecessary administrative complexity, including billing and coding waste; this alone is responsible for $265.6 billion annually. Drug pricing is another major source of waste, accounting for around $240 billion. Finally, over-treatment and failure of care delivery incurred another $300 billion in wasted costs.

And even these astronomical costs may be underestimated. According to management firm Numerof and Associates, the 25% waste estimate might be conservative. Instead, the firm believes that as much as 40% of the country's healthcare spending is wasted, mostly due to administrative complexity. The firm adds that fraud and abuse account for roughly 8% of waste in healthcare.

Most cases of fraud in the healthcare sector are committed by organized crime groups and by a small fraction of healthcare providers who are dishonest.

According to the National Healthcare Anti-Fraud Association, healthcare fraud in the United States takes a number of common forms.

Traditionally, the most prevalent method of fraud management has been human-generated rule sets. To this day, this is the most common practice, but thanks to a quantum leap in computing and big data, AI-based solutions built on machine learning algorithms are becoming increasingly appealing and, most importantly, practical.

But what is machine learning, anyway? Machine learning refers to algorithms that are designed to learn like humans do and continuously tweak this learning process over time without human supervision. The algorithms' output accuracy can be improved continuously by feeding them data and information in the form of observations and real-world interactions.

In other words, machine learning is the science of getting computers to act without being explicitly programmed.

There are all sorts of machine learning algorithms, depending on the requirements of each situation and industry, and hundreds of new ones are published on a daily basis. They're typically grouped by learning style, such as supervised, unsupervised, and reinforcement learning.

In a healthcare fraud analytics context, machine learning eliminates the use of preprogrammed rule sets, even those of phenomenal complexity.

Machine learning enables companies to efficiently determine what transactions or sets of behaviors are most likely to be fraudulent, while reducing false positives.

In an industry where there can be billions of different transactions on a daily basis, AI-based analytics can be an amazing fit thanks to their ability to automatically discover patterns across large volumes of data.

The process itself can be complex since the algorithms have to interpret patterns in the data and apply data science in real-time in order to distinguish between normal behavior and abnormal behavior.

This can be a problem since an improper understanding of how AI works and fraud-specific data science techniques can lead you to develop algorithms that essentially learn to do the wrong things. Just like people can learn bad habits, so too can a poorly designed machine learning model.

In order for online fraud detection based on AI technology to succeed, these platforms need to check three very important boxes.

First, supervised machine learning algorithms have to be trained and fine-tuned on decades' worth of transaction data to keep false positives to a minimum and improve reaction time. This is easier said than done, because the data needs to be structured and properly labeled; depending on the size of the project, this could take staff years.

Secondly, unsupervised machine learning needs to keep up with increasingly sophisticated forms of online fraud. After all, AI is used by both auditors and fraudsters. And finally, for AI fraud detection platforms to scale, they require a large-scale, universal data network of activity (i.e., transactions, filed documents, etc.) to scale the ML algorithms and improve the accuracy of fraud detection scores.
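As a toy illustration of the unsupervised side, here is a minimal sketch using scikit-learn's IsolationForest to flag anomalous transactions; the features, scales, and contamination rate are invented, not any vendor's production setup:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount, claims_per_month, distance_km]
rng = np.random.default_rng(42)
normal = rng.normal(loc=[100, 2, 5], scale=[30, 1, 2], size=(1000, 3))
fraudulent = rng.normal(loc=[2000, 15, 80], scale=[500, 3, 20], size=(10, 3))
transactions = np.vstack([normal, fraudulent])

# The isolation forest learns what "normal" looks like without labels;
# contamination is the assumed share of anomalies in the data
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for outliers; those would be routed to investigators
labels = model.predict(transactions)
print("Flagged transaction indices:", np.where(labels == -1)[0])
```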

According to a new market research report released earlier this year, the healthcare fraud analytics market is projected to reach $4.6 billion by 2025 from $1.2 billion in 2020.

This growth is attributed to more numerous and complex fraudulent activity in the healthcare sector.

In order to tackle rising healthcare fraud, companies offer various analytics solutions that flag fraudulent activity. Some are rule-based models, but AI-based technologies are expected to form the backbone of all types of analytics used in the future. These include descriptive, predictive, and prescriptive analytics.

Some of the most important companies operating today in the healthcare fraud analytics market include IBM Corporation (US), Optum (US), SAS Institute (US), Change Healthcare (US), EXL Service Holdings (US), Cotiviti (US), Wipro Limited (Wipro) (India), Conduent (US), HCL (India), Canadian Global Information Technology Group (Canada), DXC Technology Company (US), Northrop Grumman Corporation (US), LexisNexis Group (US), and Pondera Solutions (US).

That being said, there is a wide range of options in place today to prevent fraud. However, the evolving landscape of e-commerce and hacking poses new challenges all the time. Keeping up with these challenges requires innovation that can respond and react rapidly to fraud. The common denominator, from payment fraud to abuse, seems to be machine learning, which can easily scale to meet the demands of big data with far more flexibility than traditional methods.

See more here:
Artificial intelligence for fraud detection is bound to save billions - ZME Science