Professor in Computer Vision and Machine Learning job with CITY, UNIVERSITY OF LONDON | 232985 – Times Higher Education (THE)

CITY, UNIVERSITY OF LONDON

School of Mathematics, Computer Science & Engineering, Department of Computer Science

Professor in Computer Vision and Machine Learning

SALARY: Competitive

Founded in 1894, City, University of London is a global university committed to academic excellence with a focus on business and the professions and an enviable central London location.

City attracts around 20,000 students (over 40% at postgraduate level) from more than 150 countries and staff from over 75 countries.

In the last decade City has almost tripled the proportion of its total academic staff producing world-leading or internationally excellent research.

Led by President, Professor Sir Paul Curran, City has made significant investments in its academic staff, its estate and its infrastructure and continues to work towards realising its vision of being a leading global university.

The School of Mathematics, Computer Science & Engineering is a multi-disciplinary centre of research and education located in the heart of London's vibrant design community. It is proud of its research advances and of educating thousands of undergraduates and postgraduates in STEM subjects.

The Department of Computer Science has been at the leading edge of computer science in the UK for six decades. It awarded some of the country's first Computer Science degrees and laid the groundwork for the foundation of the British Computer Society. Today, it is a vibrant, modern department comprising approximately 50 academic staff and 60 research staff and PhD students.

The School is seeking to appoint a Professor in Computer Vision and Machine Learning who will join the Research Centre for Adaptive Computing Systems and Machine Learning (ACS-ML) and collaborate closely with Tesco plc on research for the retail sector. The appointed candidate will lead and foster excellent research; contribute to the delivery of high quality undergraduate and postgraduate education in core Computer Science; and play a lead role in developing the partnership with Tesco and strengthening expertise in Computer Vision for the retail sector.

The successful candidate will have a PhD in Computer Science or an area related to machine learning, artificial intelligence or computer vision; an internationally recognised reputation in such an area; a track record of world-leading or internationally excellent research; and experience of delivering high quality education in core Computer Science. A track record of generating research income and of delivering consultancy or specialist services to external clients will also be required.

City offers a sector-leading salary, pension scheme and benefits including a comprehensive package of staff training and development.

The role is available immediately.

Closing date: Friday 11th December 2020

Interviews are scheduled for January 2021

For a confidential discussion, please contact Imogen Wilde on +44 (0)7864 652 633 or Elliott Rae on +44 (0)7584 078 534.

For further information, please visit http://www.andersonquigley.com/city-prof

Actively working to promote equal opportunity and diversity. Academic excellence for business and the professions.


Altruist: A New Method To Explain Interpretable Machine Learning Through Local Interpretations of Predictive Models – MarkTechPost

Artificial intelligence (AI) and machine learning (ML) are the digital world's trendsetters in recent times. Although ML models can make accurate predictions, the logic behind the predictions remains unclear to users. A lack of evaluation and selection criteria makes it difficult for the end-user to select the most appropriate interpretation technique.

How do we extract insights from the models? Which features should be prioritized while making predictions, and why? These questions remain prevalent. Interpretable Machine Learning (IML) is an outcome of the questions mentioned above. IML is a layer in ML models that helps human beings understand the procedure and logic behind a machine learning model's inner workings.

Ioannis Mollas, Nick Bassiliades, and Grigorios Tsoumakas have introduced a new methodology to make IML more reliable and understandable for end-users. Altruist, a meta-learning method, aims to help the end-user choose an appropriate technique based on feature importance by providing interpretations through logic-based argumentation.
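The core idea, checking how "truthful" a feature-importance explanation is to the model it explains, can be sketched with a simple perturbation test. This is a hypothetical simplification, not Altruist's actual code; the model, data, and `truthful_features` helper are all invented for illustration:

```python
def truthful_features(predict, x, importances, eps=0.5):
    """Count features whose stated importance agrees with the model's
    actual behavior: nudging a feature with positive importance upward
    should not decrease the prediction (and vice versa)."""
    baseline = predict(x)
    truthful = 0
    for i, imp in enumerate(importances):
        perturbed = list(x)
        perturbed[i] += eps
        delta = predict(perturbed) - baseline
        if imp * delta >= 0:  # importance and observed effect agree in sign
            truthful += 1
    return truthful

# Hypothetical linear "model": prediction = 2*x0 - x1.
predict = lambda x: 2 * x[0] - x[1]
x = [1.0, 1.0]

good = truthful_features(predict, x, [2.0, -1.0])  # signs match the model
bad = truthful_features(predict, x, [-2.0, 1.0])   # signs flipped
```

An end-user could run such a check against several interpretation techniques (e.g., different feature-importance methods) and prefer the one with the most truthful features.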

The meta-learning methodology is composed of the following components:

Paper: https://arxiv.org/pdf/2010.07650.pdf

Github: https://github.com/iamollas/Altruist



Efficient audits with machine learning and Slither-simil – Security Boulevard

by Sina Pilehchiha, Concordia University

Trail of Bits has manually curated a wealth of data (years of security assessment reports) and now we're exploring how to use this data to make the smart contract auditing process more efficient with Slither-simil.

Based on accumulated knowledge embedded in previous audits, we set out to detect similar vulnerable code snippets in new clients' codebases. Specifically, we explored machine learning (ML) approaches to automatically improve on the performance of Slither, our static analyzer for Solidity, and make life a bit easier for both auditors and clients.

Currently, human auditors with expert knowledge of Solidity and its security nuances scan and assess Solidity source code to discover vulnerabilities and potential threats at different granularity levels. In our experiment, we explored how much we could automate security assessments to:

Slither-simil, the statistical addition to Slither, is a code similarity measurement tool that uses state-of-the-art machine learning to detect similar Solidity functions. When it began as an experiment last year under the codename crytic-pred, it was used to vectorize Solidity source code snippets and measure the similarity between them. This year, we're taking it to the next level and applying it directly to vulnerable code.

Slither-simil currently uses its own representation of Solidity code, SlithIR (Slither Intermediate Representation), to encode Solidity snippets at the granularity level of functions. We thought function-level analysis was a good place to start our research, since it's not too coarse (like the file level) and not too detailed (like the statement or line level).

Figure 1: A high-level view of the process workflow of Slither-simil.

In the process workflow of Slither-simil, we first manually collected vulnerabilities from the previous archived security assessments and transferred them to a vulnerability database. Note that these are the vulnerabilities auditors had to find with no automation.

After that, we compiled previous clients' codebases and matched the functions they contained against our vulnerability database via an automated function extraction and normalization script. By the end of this process, our vulnerabilities were normalized SlithIR tokens, ready to be used as input to our ML system.

Here's how we used Slither to transform a Solidity function into the intermediate representation SlithIR, then further tokenized and normalized it to serve as input to Slither-simil:

Figure 2: A complete Solidity function from the contract TurtleToken.sol.

Figure 3: The same function with its SlithIR expressions printed out.

First, we converted every statement or expression into its SlithIR correspondent, then tokenized the SlithIR sub-expressions and further normalized them so more similar matches would occur despite superficial differences between the tokens of this function and the vulnerability database.

Figure 4: Normalized SlithIR tokens of the previous expressions.

After obtaining the final form of token representations for this function, we compared its structure to that of the vulnerable functions in our vulnerability database. Due to the modularity of Slither-simil, we used various ML architectures to measure the similarity between any number of functions.
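As a rough illustration of similarity measurement over tokenized functions, consider comparing the mean embedding of each function's tokens. The token names and embedding vectors below are invented; Slither-simil's learned representations and ML architectures are more sophisticated:

```python
import math

# Hypothetical embeddings for SlithIR-style tokens (made-up vectors).
EMBED = {
    "binary(>=)":    (1.0, 0.0, 0.0),
    "binary(<=)":    (0.9, 0.1, 0.0),
    "assignment":    (0.0, 1.0, 0.0),
    "internal_call": (0.0, 0.0, 1.0),
}

def embed_function(tokens):
    """Represent a function as the element-wise mean of its token vectors."""
    n = len(tokens)
    return [sum(EMBED[t][d] for t in tokens) / n for d in range(3)]

def similarity(tokens_a, tokens_b):
    """Cosine similarity between function representations: 1.0 = identical."""
    a, b = embed_function(tokens_a), embed_function(tokens_b)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

f1 = ["binary(>=)", "assignment", "internal_call"]
f2 = ["binary(<=)", "assignment", "internal_call"]
score = similarity(f1, f2)  # close to 1.0 for structurally similar functions
```

Because `>=` and `<=` comparisons get nearby vectors, the two functions score close to 1.0 despite their superficial differences, which is the point of the normalization step described above.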

Figure 5: Using Slither-simil to test a function from a smart contract with an array of other Solidity contracts.

Let's take a look at the function transferFrom from the ETQuality.sol smart contract to see how its structure resembles our query function:

Figure 6: Function transferFrom from the ETQuality.sol smart contract.

Comparing the statements in the two functions, we can easily see that they both contain, in the same order, a binary comparison operation (>= and <=), the same type of operand comparison, and another similar assignment operation with an internal call statement and an instance of returning a true value.

As the similarity score decreases toward 0, these sorts of structural similarities are observed less often; in the other direction, the two functions become more alike, so two functions with a similarity score of 1.0 are identical to each other.

Research on automatic vulnerability discovery in Solidity has taken off in the past two years, and tools like Vulcan and SmartEmbed, which use ML approaches to discovering vulnerabilities in smart contracts, are showing promising results.

However, all the current related approaches focus on vulnerabilities already detectable by static analyzers like Slither and Mythril, while our experiment focused on the vulnerabilities these tools were not able to identify, specifically those undetected by Slither.

Much of the academic research of the past five years has focused on taking ML concepts (usually from the field of natural language processing) and using them in a development or code analysis context, typically referred to as code intelligence. Based on previous, related work in this research area, we aim to bridge the semantic gap between the performance of a human auditor and an ML detection system to discover vulnerabilities, thus complementing the work of Trail of Bits' human auditors with automated approaches (i.e., Machine Programming, or MP).

We still face the challenge of data scarcity, given the scale of smart contracts available for analysis and the frequency with which interesting vulnerabilities appear in them. We can focus on the ML model because it's sexy, but that doesn't do much good in the case of Solidity, where even the language itself is very young and we need to tread carefully with the amount of data we have at our disposal.

Archiving previous client data was a job in itself, since we had to deal with the different solc versions to compile each project separately. For someone with limited experience in that area this was a challenge, and I learned a lot along the way. (The most important takeaway of my summer internship is that if you're doing machine learning, you will not realize how major a bottleneck the data collection and cleaning phases are unless you have to do them.)

Figure 7: Distribution of 89 vulnerabilities found among 10 security assessments.

The pie chart shows how 89 vulnerabilities were distributed among the 10 client security assessments we surveyed. We documented both the notable vulnerabilities and those that were not discoverable by Slither.

This past summer we resumed the development of Slither-simil and SlithIR with two goals in mind:

We implemented a baseline text-based model with FastText, to be compared against an improved model with tangibly better results: one not based on software complexity metrics but focused solely on graph-based models, as they are the most promising ones right now.

For this, we have proposed a slew of techniques to try out with the Solidity language at the highest abstraction level, namely, source code.

To develop ML models, we considered both supervised and unsupervised learning methods. First, we developed a baseline unsupervised model based on tokenizing source code functions and embedding them in a Euclidean space (Figure 8) to measure and quantify the distance (i.e., dissimilarity) between different tokens. Since functions are composed of tokens, we simply added up the differences to get the (dis)similarity between any two snippets of any size.
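That baseline can be sketched as follows, with made-up 2-D token vectors standing in for the learned embeddings (the token names are invented, not real SlithIR):

```python
import math

# Toy token vectors standing in for learned (e.g., FastText-style) embeddings.
VEC = {"PUSH": (0.0, 1.0), "CALL": (1.0, 0.0), "RET": (1.0, 1.0)}

def token_distance(a, b):
    """Euclidean distance between two token embeddings."""
    return math.dist(VEC[a], VEC[b])

def snippet_dissimilarity(tokens_a, tokens_b):
    """Sum the per-token distances between two snippets (zip truncates to
    the shorter snippet; real alignment would be more careful)."""
    return sum(token_distance(a, b) for a, b in zip(tokens_a, tokens_b))

d_same = snippet_dissimilarity(["PUSH", "CALL"], ["PUSH", "CALL"])  # 0.0
d_diff = snippet_dissimilarity(["PUSH", "CALL"], ["CALL", "RET"])
```

Identical snippets sum to zero distance, and the total grows as more token pairs diverge, giving a dissimilarity measure for snippets of any size.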

The diagram below shows the SlithIR tokens from a set of training Solidity data spherized in a three-dimensional Euclidean space, with similar tokens closer to each other in vector distance. Each purple dot shows one token.

Figure 8: Embedding space containing SlithIR tokens from a set of training Solidity data

We are currently developing a proprietary database consisting of our previous clients' publicly available vulnerable smart contracts, along with references from papers and other audits. Together they'll form one unified, comprehensive database of Solidity vulnerabilities for queries and for later training and testing of newer models.

We're also working on other unsupervised and supervised models, using data labeled by static analyzers like Slither and Mythril. We're examining deep learning models with much more expressivity for modeling source code, specifically graph-based models utilizing abstract syntax trees and control flow graphs.

And we're looking forward to checking out Slither-simil's performance on new audit tasks to see how it improves our assurance team's productivity (e.g., in triaging and finding the low-hanging fruit more quickly). We're also going to test it on Mainnet once it gets a bit more mature and automatically scalable.

You can try Slither-simil now via this GitHub PR. For end users, it's the simplest CLI tool available:

Slither-simil is a powerful tool with potential to measure the similarity between function snippets of any size written in Solidity. We are continuing to develop it, and based on current results and recent related research, we hope to see impactful real-world results before the end of the year.

Finally, I'd like to thank my supervisors Gustavo, Michael, Josselin, Stefan, Dan, and everyone else at Trail of Bits, who made this the most extraordinary internship experience I've ever had.


*** This is a Security Bloggers Network syndicated blog from Trail of Bits Blog authored by Noël Ponthieux. Read the original post at: https://blog.trailofbits.com/2020/10/23/efficient-audits-with-machine-learning-and-slither-simil/


AI and machine learning: A gift, and a curse, for cybersecurity – Healthcare IT News

The Universal Health Services attack this past month has brought renewed attention to the threat of ransomware faced by health systems and what hospitals can do to protect themselves against a similar incident.

Security experts say that the attack, beyond being one of the most significant ransomware incidents in healthcare history, may also be emblematic of the ways machine learning and artificial intelligence are being leveraged by bad actors.

With some kinds of "early worms," said Greg Foss, senior cybersecurity strategist at VMware Carbon Black, "we saw [cybercriminals] performing these automated actions, and taking information from their environment and using it to spread and pivot automatically; identifying information of value; and using that to exfiltrate."

The complexity of performing these actions in a new environment relies on "using AI and ML at its core," said Foss.

Once access is gained to a system, he continued, much malware doesn't require much user intervention. But although AI and ML can be used to compromise systems' security, Foss said, they can also be used to defend it.

"AI and ML are something that contributes to security in multiple different ways," he said. "It's not something that's been explored, evenuntil just recently."

One effective strategy involves user and entity behavior analytics, said Foss: essentially when a system analyzes an individual's typical behavior and flags deviations from that behavior.

For example, a human resources representative abruptly running commands on their host is abnormal behavior and might indicate a breach, he said.
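A toy sketch of that kind of behavioral baselining (the command names and threshold are invented; production UEBA systems model far richer signals than raw command frequency):

```python
from collections import Counter

class BehaviorBaseline:
    """Flag actions that fall outside a user's historical behavior,
    a toy version of user and entity behavior analytics (UEBA)."""
    def __init__(self, history):
        self.counts = Counter(history)
        self.total = len(history)

    def is_anomalous(self, command, threshold=0.01):
        # A command is anomalous if it is (nearly) unseen for this user.
        freq = self.counts[command] / self.total
        return freq < threshold

# An HR user's history is dominated by routine office activity.
hr_user = BehaviorBaseline(["open_email"] * 80 + ["edit_doc"] * 20)

routine = hr_user.is_anomalous("edit_doc")      # False: seen regularly
suspect = hr_user.is_anomalous("powershell")    # True: never seen before
```

Each user (or entity) gets its own baseline, so the same action can be routine for an administrator and a red flag for an HR representative.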

AI and ML can also be used to detect subtle patterns of behavior among attackers, he said. Given that phishing emails often play on a would-be victim's emotions (playing up the urgency of a message to compel someone to click on a link), Foss noted that automated sentiment analysis can help flag if a message seems abnormally angry.
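A crude, keyword-based stand-in for such sentiment analysis might look like this (real systems use trained models; the term list and weights here are invented):

```python
# Hypothetical pressure terms with invented weights.
URGENT_TERMS = {"immediately": 2, "urgent": 2, "suspended": 2,
                "verify": 1, "final notice": 3}

def urgency_score(message):
    """Sum the weights of pressure words found in the message; a high
    score suggests the emotional manipulation typical of phishing."""
    text = message.lower()
    return sum(w for term, w in URGENT_TERMS.items() if term in text)

phish = "URGENT: your account is suspended, verify immediately"
normal = "Minutes from the planning meeting attached"
flagged = urgency_score(phish) > urgency_score(normal)
```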

He also noted that email structures themselves can be a so-called tell: bad actors may rely on a go-to structure or template to try to provoke responses, even if the content itself changes.

Or, if someone is trying to siphon off earnings or medication (particularly relevant in a healthcare setting), AI and ML can work in conjunction with a supply chain to point out aberrations.

Of course, Foss cautioned, AI isn't a foolproof bulwark against attacks. It's subject to the same biases as its creators, and "those little subtleties of how these algorithms work allow them to be poisoned as well," he said. In other words, it, like other technology, can be a double-edged sword.

Layered security controls, robust email filtering solutions, data control and network visibility also play a vital role in keeping health systems safe.

At the end of the day, human engineering is one of the most important tools: training employees to recognize suspicious behavior and implement strong security responses.

Using AI and ML "is only starting to scratch the surface," he said.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.


Machine Learning Might Guide the Arrow of Time in Microscopic Processes – Science Times

In a microscopic context, fluctuations can cause phenomena that directly violate the second law of thermodynamics, leading observers to find the arrow of time being blurry and vague. However, a new machine-learning algorithm could help researchers in the future.

The second law of thermodynamics explains, in non-equilibrium states, an asymmetry that drives physical systems from one state to another. This law, concerning the evolution of physical systems, has been associated with the principle of cause preceding effect, or systems moving forward and backward in time - known as causality, or the arrow of time.

Unfortunately, researchers viewing a microscopic process (as in video playback) encounter difficulties: they can't tell whether they are watching it play forward or backward.

(Photo: Steve Johnson via Pexels.com) A classic wall-mounted clock. Clocks are often used as symbols of time, the direction of which has become a point of interest in studies of microscopic nature.

RELATED: 3 Characteristics of Water That Seem to Defy the Laws of Physics

A research team from the University of Maryland created their own machine-learning algorithm that can help determine where this thermodynamic arrow of time points. The details of their study are published in the journal Nature Physics.

"I learned about thermodynamics at small scales when I took a course on non-equilibrium statistical mechanics taught by Prof. Jarzysnki," explained Alireza Seif, one of the researchers behind the study, in a statement to Phys.org. He was referring to Christopher Jarzynski, from the Department of Physics at the University of Maryland and also an author in the study. Seif also shared that at the time, he was looking for applications of machine learning in physics, which has been a point of interest for recent studies.

Some applications of machine learning are in the classification of images into groups, and some are even used for classifying the phases of matter. This led Seif to test whether determining the direction of the arrow of time could be framed as a classification problem. After discussing it with Jarzynski and Mohammad Hafezi, the three collaborated and found early success.

Seif explained that they used supervised learning, training a neural network on a set of simulated movies showing physical processes. Each movie was labeled according to whether it was playing forward or backward. The neural network returns a value between zero and one, depending on the movie and on the network's parameters, such as weights and biases. The researchers then searched for the parameter values that "minimize the difference between the output of the neural network and the true labels," referring to the direction of the arrow.

Researchers then tested their neural network against a set of physical process videos, establishing that it can successfully distinguish the direction of the thermodynamic arrow of time. Furthermore, researchers also identified the dissipated work as the quantity to determine the direction.
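A minimal sketch of that result: train a logistic-regression classifier on a synthetic "dissipated work" feature, where forward-in-time trajectories dissipate positive work on average and reversed ones negative (a cartoon of the fluctuation-theorem setting, not the study's actual simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dissipated-work values for forward and reversed trajectories.
work_fwd = rng.normal(loc=2.0, scale=1.0, size=500)
work_rev = rng.normal(loc=-2.0, scale=1.0, size=500)
X = np.concatenate([work_fwd, work_rev])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = forward in time

# Logistic regression by gradient descent on P(forward) = sigmoid(w*x + b).
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((p - y) * X)   # gradient of the log-loss w.r.t. w
    b -= 0.1 * np.mean(p - y)         # gradient of the log-loss w.r.t. b

pred = (1.0 / (1.0 + np.exp(-(w * X + b)))) > 0.5
accuracy = np.mean(pred == y)
```

The learned weight on dissipated work comes out positive, so the classifier effectively recovers "positive dissipation means forward in time," mirroring the paper's finding that dissipated work is the quantity determining the arrow's direction.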

Also, the researchers report in their study that they used a method called inceptionism. Developed by Google software engineers, it attempts to make neural networks display the results of their image generation and pattern recognition to users, allowing users of a neural network to observe the progress made by the system.

Seif explained that previous works have quantified the physics behind the arrow of time within the context of nonequilibrium systems. "It is interesting that a well-known algorithm (logistic regression) that existed decades before these theorems leads to the same results."

RELATED: Deep Learning Model Outperforms NPC, Player Records in Gran Turismo

Check out more news and information on Thermodynamics in Science Times.


Bridging the Skills Gap for AI and Machine Learning – Integration Developers

Even as COVID-19 has slowed business investments worldwide, AI/ML spending is increasing. In a post for IDN, dotData's CEO Ryohei Fujimaki, Ph.D., looks at the latest trends in AI/ML automation and how they will speed adoption across industries.

COVID-19 has impacted businesses across the globe, from closures to supply chain interruptions to resource scarcity. As businesses adjust to the new normal, many are looking to do more with less and find ways to optimize their current business investments.

In this resource-constrained environment, many types of business investments have slowed dramatically. That said, investments in AI and machine learning are accelerating, according to a recent Adweek survey.

Adweek found two-thirds of business executives say COVID-19 has not slowed AI projects. In fact, some 40% of respondents told Adweek that the pandemic has accelerated their AI/ML efforts. Reasons for the sustained and growing interest in AI/ML include decreasing costs, improving performance, and increasing efficiencies, all efforts to make up for time and output lost during the COVID-19 slowdown.

Despite the rosy outlook for AI/ML investments, it bears mentioning that businesses also admit they still struggle to scale these technologies beyond PoCs (proofs of concept). This is due to an ongoing talent shortage in the data science field, a shortage that COVID has made even more acute.

Data science is an interdisciplinary approach that requires cross-domain expertise, including mathematics, statistics, data engineering, software engineering, and subject matter expertise.

The shortage of data scientists, as well as of data architects and machine learning engineers skilled in building, testing, and deploying ML models, has created a big challenge for businesses implementing AI and ML initiatives, limiting the scale of data science projects and slowing time to production. The scarcity of data scientists has also created a quandary for organizations: how can they change the way they do data science, empowering the teams they already have?

The democratization of data science is very important and a current industry trend, but true democratization has never been easy for organizations. Analytics and data science leaders lament that their teams can only manage a few projects per year. BI leaders, on the other hand, have been trying to embed predictive analytics in their dashboards but face the daunting task of learning how to build AI/ML models. What can organizations do, and what tactics will help them scale AI initiatives and bridge the gap between what is required and what's available?

Democratization of data science in a true sense is to empower teams with advanced analytical tools and automation technologies.

These tools can significantly simplify tasks that formerly could only be completed by data scientists. They are empowering business analysts, BI developers and data engineers to execute AI and machine learning projects. Further, they accelerate data science processes with very little training.

Notable among these offerings are:

This class of automation tools removes much of the time and expense needed to design and deploy AI-powered analytics pipelines, and does so at little cost and without high-priced technical staff.

Today, a typical data team is interdisciplinary and consists of data engineers, data analysts, and data scientists. The data analyst and engineer are responsible for cleaning, formatting, and preparing data for the data scientist, who then uses the analytics-ready data to build features and then ML models using a trial-and-error approach.

Data science processes are complicated, highly manual, and iterative in nature. Depending on the maturity of the data pipelines, a data science project can take from 30 to 90 days to complete with nearly 80% of the effort spent on AI-focused data preparation and Feature Engineering.

Further, the AI-focused data preparation process requires an impressive amount of hacking skills from developers, data scientists and data engineers to clean, manipulate and transform the data to enable data scientists to execute feature engineering.

That said, the landscape is changing. Tools are now surfacing that deliver AI automation to pre-process data, connect to data, and automatically build features and ML models. These tools eliminate the need for a large team and do the work efficiently at the greatest possible speed.

In addition, feature engineering automation has vast potential to change the traditional data science process. Feature engineering involves the application of business knowledge, math, and statistics to transform data into a format that can be directly consumed by machine learning models.

It can also significantly lower skill barriers beyond ML automation alone, eliminating hundreds or even thousands of manually crafted SQL queries and ramping up the speed of the data science project, even without full domain knowledge.
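A toy sketch of what such feature-engineering automation does: systematically generate every (column, aggregate) combination that an analyst would otherwise hand-write as individual SQL queries. The table, column names, and aggregate list are all invented for illustration:

```python
from statistics import mean

# Toy transactions table; in practice this would come from a database.
transactions = [
    {"customer": "a", "amount": 120.0},
    {"customer": "a", "amount": 80.0},
    {"customer": "b", "amount": 15.0},
]

AGGS = {"sum": sum, "mean": mean, "max": max, "count": len}

def auto_features(rows, key, value):
    """Generate one aggregate feature per (value column, aggregate) pair,
    grouped by the key column."""
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r[value])
    return {k: {f"{value}_{name}": agg(vals) for name, agg in AGGS.items()}
            for k, vals in groups.items()}

features = auto_features(transactions, "customer", "amount")
```

With many columns and aggregates, the combinatorial sweep replaces the hand-crafted queries mentioned above, leaving humans to select and validate the useful features.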

Organizations with large data science teams will also find automation platforms very valuable. They free up highly-skilled resources from many of the manual and time-consuming efforts involved in data science and machine learning workflow and allow them to focus on more complex and challenging strategic tasks.

The trend is definitely to leverage automation technologies to speed up the ML development process. By using AI automation technologies, BI developers and junior data scientists can automatically build models. This frees up time for experienced data scientists to take on more challenging business problems. While everyone has seemed to focus on building automated ML models, the industry is definitely moving towards automating the entire AI/ML workflow.

This empowers data scientists to achieve higher productivity and drive greater business impact than ever before.

Another important tactic for bridging the skills gap in data science is ongoing skills training for the AI, data science and business intelligence teams.

Rather than hiring outside talent from an already shallow talent pool, companies are often better off investing time and resources in data-science training of their existing talent pool. These citizen data scientists can bridge the skill gap, address the labor shortage and enable companies to leverage the existing resources they already have.

There are many advantages to this approach.

The idea is to build a team from inside the company rather than hiring experts from outside. Any transformation is only going to succeed if it is embraced by the vast majority. Creating internal AI teams, empowering citizen data scientists, and scaling pilot programs focused on AI is the right approach.

One of the most important advantages is building data science skills across multiple teams to support the democratization of data science across the organization. This strategy can be implemented by first identifying employees with existing programming, analytical, and quantitative skills and then augmenting those skills with the required data science skills and tools training. Experienced data scientists can play the role of evangelizers, sharing data science best practices and guiding the citizen data scientists through the process.

AI and ML-driven innovation becomes indispensable as more enterprises transform themselves into data-driven organizations. Building a strong analytics team, while challenging in today's resource-scarce environment, is attainable by using appropriate automation tools. The benefits of this approach include:

These factors can not only help fill the skills gap but will help accelerate both data science and business innovation, delivering greater and broader business impact.


Machine Learning and AI Can Now Create Plastics That Easily Degrade – Science Times

Plastic pollution is one of the most pressing environmental issues, and the increase in the production of disposable plastics does not help at all. These plastics often take many years to degrade, poisoning the environment. This has prompted efforts from nations to create a global treaty to help reduce plastic pollution.

A combination of machine learning and artificial intelligence has accelerated the design of materials, including plastics with properties that let them quickly degrade without harming the environment, and super-strong lightweight plastics for aircraft and satellites that could one day replace the metals being used.

The researchers from the Pritzker School of Molecular Engineering (PME) at the University of Chicago published their study in Science Advances on October 21, which shows a way toward designing polymers using a combination of modeling and machine learning.

This was done by computationally constructing nearly 2,000 hypothetical polymers, a dataset large enough to train neural networks that can predict a polymer's properties.

(Photo: Pixabay) Machine Learning and AI Can Now Create Plastics That Easily Degrade

People have been using products with polymer, like plastic bottles, for so long as this material is very common in many things in the daily lives of humans.

Polymers are materials with amorphous and disordered structures that even the techniques scientists developed for studying metals and crystalline materials have a hard time characterizing. They are made of many atoms arranged in very long strings that might comprise millions of monomers.

Moreover, the length and sequence can affect a polymer molecule's properties, which may vary depending on how the atoms are arranged. Because of that, a trial-and-error method is not ideal, since it is limited, and generating the data needed for a rational design strategy would be very demanding, Phys.org reported.

Fortunately, machine learning could address this problem. The researchers set out to answer two questions: can machine learning and AI predict the properties of polymers based on their sequence, and if so, how large a dataset is needed to train the underlying algorithms?


The researchers built their database from almost 2,000 computationally constructed polymers with different sequences, then ran molecular simulations to predict each one's behavior.

Juan de Pablo, Liew Family Professor of Molecular Engineering and lead researcher, said the team did not know how many different polymer sequences would be needed to learn the behavior; it could have been millions. Fortunately, a few hundred sufficed, which means they can now follow the same technique and create a database to train the machine learning network.

The researchers then used what the model had learned to design new molecules: they demonstrated that they could specify a desired property and use machine learning to generate a set of polymer sequences that produce it.
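The loop described above (simulate a training set, fit a sequence-to-property model, then screen candidate sequences against a target property) can be sketched in miniature. Everything below is a toy stand-in invented for illustration, including the simulator, the descriptors, and the two-monomer sequences; it is not the PME group's actual pipeline:

```python
# Toy sketch of the "simulate, learn, design" loop (invented simulator and
# features; NOT the study's actual models or data).
import random

random.seed(0)

def simulate_property(seq):
    """Stand-in for a molecular simulation: the property depends on
    composition and on how the two monomer types 'A'/'B' are arranged."""
    frac_a = seq.count("A") / len(seq)
    junctions = sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
    return 2.0 * frac_a - 1.5 * junctions

def features(seq):
    """Simple sequence descriptors: bias term, composition, blockiness."""
    frac_a = seq.count("A") / len(seq)
    junctions = sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
    return [1.0, frac_a, junctions]

def fit_linear(X, y):
    """Least squares via the normal equations, solved with dependency-free
    Gaussian elimination (the Gram matrix here is tiny and well conditioned)."""
    n = len(X[0])
    A = [[sum(x[i] * x[j] for x in X) for j in range(n)] for i in range(n)]
    b = [sum(x[i] * yi for x, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                          # forward elimination
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * p for a, p in zip(A[r], A[col])]
            b[r] -= f * b[col]
    w = [0.0] * n                                 # back substitution
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

# "Database" of a few hundred hypothetical polymer sequences
train = ["".join(random.choice("AB") for _ in range(40)) for _ in range(300)]
w = fit_linear([features(s) for s in train],
               [simulate_property(s) for s in train])

def predict(seq):
    return sum(wi * fi for wi, fi in zip(w, features(seq)))

# Inverse design: screen candidate sequences for a desired property value
target = 0.5
candidates = ["".join(random.choice("AB") for _ in range(40)) for _ in range(5000)]
best = min(candidates, key=lambda s: abs(predict(s) - target))
print(best, round(predict(best), 3))
```

The point of the sketch is the shape of the workflow: a few hundred simulated sequences are enough to fit the surrogate model, which can then be queried thousands of times far more cheaply than the simulator itself.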

Through this, companies can design polymers that do exactly what they want them to do while sparing the environment. For instance, they could create polymers that someday replace the metals used in aerospace or in biomedical devices. It could also allow engineers to develop more affordable and sustainable polymer materials.




Revolutionizing IoT with Machine Learning at the Edge | Perceive’s Steve Teig – IoT For All

In episode 88 of the IoT For All Podcast, Perceive Founder and CEO Steve Teig joins us to talk about how Perceive is bringing the next wave of intelligence to IoT through machine learning at the edge. Steve shares how Perceive developed Ergo, their chip announced back in March, and how these new machine learning capabilities will transform consumer IoT.

Steve Teig is an award-winning technologist, entrepreneur, and inventor on 388 US patents. He has been the CTO of three EDA software companies, two biotech companies, and a semiconductor company; of these, two went public during his tenure, two were acquired, and one is a Fortune 500 company. As the CEO and Founder of Perceive, Steve is leading a team building solutions and transformative machine learning technology for consumer edge devices.

To start the episode, Steve gave us some background on how Perceive got started. While serving as CTO of Xperi, Steve worked with a wide array of imaging and audio products and saw an opportunity in making edge devices smart by leveraging machine learning. “What if you could make gadgets themselves intelligent?” Steve asked. “That’s what motivated me to pursue it technically and then commercially with Perceive.”

At its core, Perceive builds chips and machine learning software for edge inference, providing data center class accuracy at the low power that edge devices, like IoT, require. “The kinds of applications we go after,” Steve said, “are everything from doorbell cameras to home security cameras, to toys, to phones. Wherever you have a sensor, it would be cool to make that sensor understand its environment without sending data to the cloud.”

Of the current solutions for device intelligence, Steve said you have two options, and neither of them is ideal: first, you can send all of the data your sensor collects to someone else's cloud, giving up your privacy; or second, you can have a tiny chip that, while low power enough for your device, doesn't provide the computing power to produce answers you can actually trust.

“We fix that problem by providing the kind of sophistication you would expect from the big cloud providers, but at low enough power that you can run it at the edge,” Steve said, adding that their chip is 20 to 100 times more power efficient than anything else currently on the market.
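Perceive's actual techniques are proprietary, but one standard trick behind low-power edge inference in general is reduced-precision arithmetic. The sketch below illustrates the generic idea with symmetric 8-bit quantization of a dot product; all the numbers are invented, and this is not Perceive's implementation:

```python
# Generic illustration of a common edge-inference technique: symmetric
# int8 quantization. Integer multiply-accumulates are far cheaper in
# silicon (power and area) than float32 ones.

def quantize(vals):
    """Map floats to int8 codes with a single symmetric scale factor."""
    scale = max(abs(v) for v in vals) / 127 or 1.0
    return [round(v / scale) for v in vals], scale

def dot_q(xq, wq, sx, sw):
    """Integer multiply-accumulate, then one float rescale at the end."""
    return sum(a * b for a, b in zip(xq, wq)) * sx * sw

x = [0.31, -0.84, 0.07, 0.55]   # activations (made up)
w = [0.12, 0.48, -0.91, 0.26]   # weights (made up)

xq, sx = quantize(x)
wq, sw = quantize(w)

exact = sum(a * b for a, b in zip(x, w))   # float reference
approx = dot_q(xq, wq, sx, sw)             # int8-style result
print(round(exact, 4), round(approx, 4))
```

The int8 result tracks the float reference closely while the inner loop touches only small integers, which is one reason dedicated inference chips can claim large power-efficiency multiples over general-purpose processors.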

Steve also spoke to some of the use cases that Ergo enables. Currently, the main applications are doorbell cameras, home security cameras, and appliances. “As we look forward,” Steve said, “being able to put really serious contextual awareness into gadgets opens up all kinds of applications.” One of the examples he gave was a microwave that could identify both the user and the food to be heated and adjust its settings to match that user's preferences. Another was a robot vacuum cleaner that you could ask to find your shoes.

Changing gears, Steve shared Perceive's philosophy on machine learning, saying that because they were looking to make massive improvements, they had to start fresh: “We had to start with the math. We really started from first principles.” That philosophy has led to a number of new and proprietary techniques on both the software and hardware sides.

Moving to the industry at large, Steve shared some observations on the smart home space during the pandemic. Those observations highlighted two somewhat conflicting trends: while there has been broader interest in smart home technology as people spend more time at home, people have also become more sensitive about their privacy. Steve also shared how Ergo handles data in order to meet these security and privacy concerns.

To close out the episode, Steve shared some of the challenges his team faced while developing Ergo and what those challenges meant as he built out the team itself. He also shared some of his thoughts on the future of the smart home and consumer IoT space, with the introduction of these new machine learning capabilities.

Interested in connecting with Steve? Reach out to him on LinkedIn!

About Perceive: Steve Teig, founder and CEO of Perceive, drove the creation of the company in 2018 while CTO of its parent company and investor, Xperi. Launching Perceive, Steve and his team had the ambitious goal of enabling state-of-the-art inference inside edge devices running at extremely low power. Adopting an entirely new perspective on machine learning and neural networks allowed Steve and his team to very quickly build and deploy the software, tools, and inference processor Ergo that make up the complete Perceive solution.

(00:50) Intro to Steve

(01:25) How did you come to found Perceive?

(02:30) What does Perceive do? What's your role in the IoT space?

(03:37) What makes your offering unique to the market?

(04:49) Could you share any use cases?

(09:41) How would you describe your philosophy when it comes to machine learning?

(11:37) What is Ergo and what does it do?

(12:39) What does a typical customer engagement look like?

(14:57) Have you seen any change in demand due to the pandemic?

(20:47) What challenges have you encountered building Perceive and Ergo?

(22:24) Where do you see the market going for smart home devices?


insitro Strengthens Machine Learning-Based Drug Discovery Capabilities with Acquisition of Haystack Sciences – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--insitro, a machine learning driven drug discovery and development company, today announced the acquisition of Haystack Sciences, a private company advancing proprietary methods to drive machine-learning enabled drug discovery. Haystack's approach focuses on synthesizing, breeding, and analyzing large, diverse combinatorial chemical libraries encoded by unique DNA sequences, called DNA-encoded libraries, or DELs. Financial details of the acquisition were not disclosed.

insitro is building the leading company at the intersection of machine learning and biological data generation at scale, with a core focus on applying these technologies for more efficient drug discovery. With the acquisition of Haystack, insitro will leverage the company's DEL technology to collect massive small molecule data sets that inform the construction of machine learning models able to predict drug activity from molecular structure. With the addition of the Haystack technology and team, insitro has taken a significant step towards building in-house capabilities for fully integrated drug discovery and development. insitro's capabilities in this space are being further developed via a collaboration with DiCE Molecules, a leader in the DEL field. The collaboration, executed earlier this year, is aimed at combining the power of machine learning with high quality DEL datasets to address two difficult protein-protein interface targets that DiCE is pursuing.
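As a hedged illustration of the general idea of predicting activity from structure (not insitro's or Haystack's actual methods), the sketch below fingerprints SMILES-like structure strings with character trigrams and predicts a compound's screening activity from its most similar neighbor; the structures, activity values, and fingerprint scheme are all invented for this example:

```python
# Toy activity-from-structure sketch: trigram fingerprints plus
# nearest-neighbour lookup over a tiny mock screening dataset.
import zlib

def fingerprint(s, n_bits=256):
    """Set a bit per character trigram: a toy stand-in for a chemical
    substructure fingerprint (zlib.crc32 gives a deterministic hash)."""
    bits = [0] * n_bits
    for i in range(len(s) - 2):
        bits[zlib.crc32(s[i:i + 3].encode()) % n_bits] = 1
    return bits

def tanimoto(a, b):
    """Standard similarity measure for binary fingerprints."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

# Tiny mock "DEL screen": structure string -> measured enrichment score
screen = {
    "CCOC(=O)c1ccccc1": 0.82,
    "CCN(CC)CCOC(=O)c1ccccc1": 0.74,
    "CC(C)Cc1ccc(cc1)C(C)C(=O)O": 0.12,
    "OC(=O)c1ccccc1O": 0.09,
}

def predict(query):
    """1-nearest-neighbour prediction by fingerprint similarity."""
    fq = fingerprint(query)
    best = max(screen, key=lambda s: tanimoto(fingerprint(s), fq))
    return screen[best]

print(predict("CCOC(=O)c1ccccc1C"))
```

Real DEL-driven models replace the trigram fingerprint with chemically meaningful representations and the nearest-neighbour lookup with learned models trained on millions of measurements, but the input/output shape, structure in and predicted activity out, is the same.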

“We are thrilled to have the Haystack team join insitro,” said Daphne Koller, Ph.D., founder and chief executive officer of insitro. “For the past two years, insitro has been building a company focused on the creation of predictive cell-based models of disease in order to enable the discovery of novel targets and evaluate the benefits of new or existing molecules in genetically defined patient segments. This acquisition enables us to expand our capabilities to the area of therapeutic design and advances us towards our goal of leveraging machine learning across the entire process of designing and developing better medicines for patients.”

Haystack's platform combines multiple elements, including the capability to synthesize broad, diverse small molecule collections, the ability to execute rapid iterative follow-up, and a proprietary semi-quantitative screening technology, called nDexer, that generates higher resolution datasets than possible through conventional panning approaches. These capabilities will greatly enable insitro's development of multi-dimensional predictive models for small molecule design.

“The nDexer™ capabilities we have advanced at Haystack, combined with insitro's state-of-the-art machine learning models, will enable us to build a platform at the forefront of applying DEL technology to next-generation therapeutics discovery,” said Richard E. Watts, co-founder and chief executive officer of Haystack Sciences, who will be joining insitro as vice president, high-throughput chemistry. “I am excited by the opportunity to join a company with such a uniquely open and collaborative culture and to work with and learn from colleagues in data science, machine learning, automation and cell biology. The capabilities enabled by joining our efforts are considerably greater than the sum of the parts, and I look forward to helping build core drug discovery efforts at insitro.”

“Haystack's best-in-class DEL technology is uniquely aligned with insitro's philosophy of addressing the critical challenges in pharmaceutical R&D through predictive machine learning models, all enabled by producing quality data at scale,” said Vijay Pande, Ph.D., general partner at Andreessen Horowitz and member of insitro's board of directors. “This investment will power insitro's swift prosecution of the multiple targets emerging from their platform, as well as the creation of a computational platform for molecule structure and function optimization. Having seen the field of computationally driven molecule design mature over the past twenty years, I look forward to the next chapter in therapeutics design written by the combined efforts of insitro and Haystack.”

About insitro

insitro is a data-driven drug discovery and development company using machine learning and high-throughput biology to transform the way that drugs are discovered and delivered to patients. The company is applying state-of-the-art technologies from bioengineering to create massive data sets that enable the power of modern machine learning methods to be brought to bear on key bottlenecks in pharmaceutical R&D. The resulting predictive models are used to accelerate target selection, to design and develop effective therapeutics, and to inform clinical strategy. The company is located in South San Francisco, CA. For more information on insitro, please visit the company's website at http://www.insitro.com.

About Haystack Sciences

Haystack Sciences seeks to inform and speed drug discovery by acquiring data of best-in-class accuracy and dimensionality from DNA Encoded Libraries (DELs). This is enabled by proprietary technologies for in vitro evolution of fully synthetic small molecules and high throughput mapping of structure-activity relationships for selection of molecules with drug-like properties. The company's technologies, including its nDexer platform, allow for generation of better libraries and quantification of binding affinities of entire DELs against a given target in parallel. The combination of these approaches with machine learning has the potential to greatly accelerate the discovery of optimized drug candidates. Haystack Sciences is based in South San Francisco, California. It was incubated at the Illumina Accelerator and is backed by leading investors including Viking Global Investors, Nimble Ventures, HBM Genomics, and Illumina. More information is available at: http://www.haystacksciences.com/


Artificial Intelligence and Machine Learning Industry Market Analysis with Key Players, Applications, Trends and Forecasts to 2025 – AlgosOnline

Market Study Report LLC adds a new report on Artificial Intelligence and Machine Learning Industry Market Share for 2020-2025. This report provides a succinct analysis of the market size, revenue forecast, and the regional landscape of this industry. The report also highlights the major challenges and current growth strategies adopted by the prominent companies that are a part of the dynamic competitive spectrum of this business sphere.

The research report on the Artificial Intelligence and Machine Learning Industry market comprises an in-depth analysis of this industry vertical. The key trends that describe the market during the forecast period are cited in the document, alongside additional factors including industry policies and regional scope. Moreover, the study specifies the impact of prevailing industry trends on potential investors.

Request a sample Report of Artificial Intelligence and Machine Learning Industry Market at: https://www.marketstudyreport.com/request-a-sample/2792420?utm_source=algosonline.com&utm_medium=SK

COVID-19 surfaced in late 2019 and has since become a full-blown crisis worldwide. Over fifty key countries declared national emergencies to combat the coronavirus. With cases spreading and the epicenter of the outbreak shifting to Europe, North America, India, and Latin America, life in these regions has been upended the way it had been in Asia earlier in the crisis. As the pandemic has worsened, the entertainment industry has been upended along with nearly every other facet of life. As experts work toward a better understanding, fear of the unknown has rocked global financial markets, leading to daily volatility in the U.S. stock markets.

The report also provides an overview of the competitive landscape along with a thorough analysis of the raw materials as well as the downstream buyers.

Revealing a summary of the competitive analysis of Artificial Intelligence and Machine Learning Industry market:

An overview of the regional scope of the Artificial Intelligence and Machine Learning Industry market:

Ask for a Discount on the Artificial Intelligence and Machine Learning Industry Market Report at: https://www.marketstudyreport.com/check-for-discount/2792420?utm_source=algosonline.com&utm_medium=SK

Other takeaways from the Artificial Intelligence and Machine Learning Industry market report:

Significant features on offer and key highlights of the report:

Key questions answered in the report:

For More Details On this Report: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-artificial-intelligence-and-machine-learning-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Reports:

1. Global Runtime Application Self-Protection Market Report 2020 by Key Players, Types, Applications, Countries, Market Size, Forecast to 2026 (Based on 2020 COVID-19 Worldwide Spread). Read More: https://www.marketstudyreport.com/reports/global-runtime-application-self-protection-market-report-2020-by-key-players-types-applications-countries-market-size-forecast-to-2026-based-on-2020-covid-19-worldwide-spread

2. Global Collections Management Software Market Report 2020 by Key Players, Types, Applications, Countries, Market Size, Forecast to 2026 (Based on 2020 COVID-19 Worldwide Spread). Read More: https://www.marketstudyreport.com/reports/global-collections-management-software-market-report-2020-by-key-players-types-applications-countries-market-size-forecast-to-2026-based-on-2020-covid-19-worldwide-spread

Related Report : https://www.marketwatch.com/press-release/biosensors-market-set-to-expand-to-reach-usd-31270-million-at-96-cagr-by-2025-2020-10-23

Contact Us:
Corporate Sales, Market Study Report LLC
Phone: 1-302-273-0910
Toll Free: 1-866-764-2150
Email: [emailprotected]
