A look at some of the AI and ML expert speakers at the iMerit ML DataOps Summit – TechCrunch

Calling all data devotees, machine-learning mavens and arbiters of AI. Clear your calendar to make room for the iMerit ML DataOps Summit on December 2, 2021. Join and engage with AI and ML leaders from multiple tech industries, including autonomous mobility, healthcare AI, technology and geospatial to name just a few.

Attend for free: There's nothing wrong with your vision; the iMerit ML DataOps Summit is 100% free, but you must register here to attend.

The summit is in partnership with iMerit, a leading AI data solutions company providing high-quality data across computer vision, natural language processing and content that powers machine learning and artificial intelligence applications. So, what can you expect at this free event?

Great topics require great speakers, and we'll have those in abundance. Let's highlight just three of the many AI and ML experts who will take the virtual stage.

Radha Basu: The founder and CEO of iMerit leads an inclusive, global workforce of more than 5,300 people, 80% of whom come from underserved communities and 54% of whom are women. Basu has raised $23.5 million from investors, led the company to impressive revenue heights and has earned a long list of business achievements, awards and accolades.

Hussein Mehanna: Currently the head of Artificial Intelligence and Machine Learning at Cruise, Mehanna has spent more than 15 years successfully building and leading AI teams at Fortune 500 companies. He led the Cloud AI Platform organization at Google and co-founded the Applied Machine Learning group at Facebook, where his team added billions of revenue dollars.

DJ Patil: A former U.S. Chief Data Scientist at the White House Office of Science and Technology Policy, Patil has deep experience in data science and technology. He has held high-level leadership positions at RelateIQ, Greylock Partners, Color Labs, LinkedIn and eBay.

The iMerit ML DataOps Summit takes place on December 2, 2021. If your business involves data-, AI- and ML-driven technologies, this event is made for you. Learn, network and stay current with this fast-paced sector and do it for free. All you need to do is register. Start clicking.

See more here:
A look at some of the AI and ML expert speakers at the iMerit ML DataOps Summit - TechCrunch

As machine learning becomes standard in military and politics, it needs moral safeguards | TheHill – The Hill

Over the past decade, the world has experienced a technological revolution powered by machine learning (ML). Algorithms remove the decision fatigue of purchasing books and choosing music, and the work of turning on lights and driving, allowing humans to focus on activities more likely to optimize their sense of happiness. Futurists are now looking to bring ML platforms to more complex aspects of human society, specifically warfighting and policing.

Technology moralists and skeptics aside, this move is inevitable, given the need for rapid security decisions in a world with information overload. But as ML-powered weapons platforms replace human soldiers, the risk of governments misusing ML increases. Citizens of liberal democracies can and should demand that governments pushing for the creation of intelligent machines for warfighting include provisions maintaining the moral frameworks that guide their militaries.

In his popular book The End of History, Francis Fukuyama summarized debates about the ideal political system for achieving human freedom and dignity. From his perspective in the middle of 1989, months before the unexpected fall of the Berlin Wall, no other system could match democracy and capitalism in generating wealth, pulling people out of poverty and defending human rights; both communism and fascism had failed, creating cruel autocracies that oppressed people. Without realizing it, Fukuyama prophesied democracy's proliferation across the world. Democratization soon occurred through grassroots efforts in Asia, Eastern Europe and Latin America.

These transitions, however, wouldn't have been possible unless the military acquiesced to these reforms. In Spain and Russia, the military attempted a coup before recognizing the dominant political desire for change. China instead opted to annihilate reformers.

The idea that the military has veto power might seem incongruous to citizens of consolidated democracies. But in transitioning societies, the military often has the final say on reform due to its symbiotic relationship with the government. In contrast, consolidated democracies benefit from the logic of Clausewitz's trinity, where there is a clear division of labor between the people, the government and the military. In this model, the people elect governments to make decisions for the overall good of society while furnishing the recruits for the military tasked with executing government policy and safeguarding public liberty. The trinity, though, is premised on a human military with a moral character that flows from its origins among the people. The military can refuse orders that harm the public or represent bad policy that might lead to the creation of a dictatorship.

ML risks destabilizing the trinity by removing the human element of the armed forces and subsuming them directly into the government. Developments in ML have created new weapons platforms that rely less and less on humans, as new warfighting machines are capable of provisioning security or assassinating targets with only perfunctory human supervision. The framework of machines acting without human involvement risks creating a dystopian future where political reform will become improbable, because governments will no longer have human militaries restraining them from opening fire on reformers. These dangers are evident in China, where the government lacks compunction in deploying ML platforms to monitor and control its population while also committing genocide.

In the public domain, there is some recognition of the dangers of misusing ML for national security. But there hasn't been a substantive debate about how ML might shape democratic governance and reform. There isn't a nefarious reason for this. Rather, it's that many of those who develop ML tools have STEM backgrounds and lack an understanding of broader social issues. From the government side, leaders in agencies funding ML research often don't know how to consume ML outputs, relying instead on developers to explain what they're seeing. The government's measure for success is whether it keeps society safe. Throughout this process, civilians operate as bystanders, unable to interrogate the design process for ML tools used for war.

In the short term, this is fine because there aren't entire armies made of robots, but the competitive advantage offered by mechanized fighting not limited by frail human bodies will make intelligent machines essential to the future of war. Moreover, these terminators will need an entire infrastructure of satellites, sensors, and information platforms powered by ML to coordinate responses to battlefield advances and setbacks, further reducing the role of humans. This will only amplify the power governments have to oppress their societies.

The risk that democratic societies might create tools that lead to this pessimistic outcome is high. The United States is engaged in an ML arms race with China and Russia, both of which are developing and exporting their own ML tools to help dictatorships remain in power and freeze history.

There is space for civil society to insert itself into ML, however. ML succeeds and fails based on the training data used for algorithms, and civil society can collaborate with governments to choose training data that optimizes the warfighting enterprise while balancing the need to sustain dissent and reform.

By giving machines moral safeguards, the United States can create tools that instead strengthen democracy's prospects. Fukuyama's thesis is only valid in a world where humans can exert their agency and reform their governments through discussion, debate and elections. The U.S., in the course of confronting its authoritarian rivals, shouldn't create tools that hasten democracy's end.

Christopher Wall is a social scientist for Giant Oak, a counterterrorism instructor for Naval Special Warfare, a lecturer on statistics for national security at Georgetown University and the co-author of the recent book, The Future of Terrorism: ISIS, al-Qaeda, and the Alt-Right. Views of the author do not necessarily reflect the views of Giant Oak.

Original post:
As machine learning becomes standard in military and politics, it needs moral safeguards | TheHill - The Hill

Tech Trends: Newark-based multicultural podcast producer, ABF Creative, leverages machine learning and AI – ROI-NJ.com

Serial entrepreneur and Newark startup activist Anthony Frasier has hitched his wagon to the skyrocketing podcasting industry. His podcast network is producing cutting-edge work that connects him to his roots and his community.

Frasier knows something about community. Actually, it's been his life's work. He began by creating an online community for Black gamers to discuss computer gaming; next, he founded the Brick City Tech Meetup, at a time when there really was no tech community in Newark; and then he organized big tech events. Now, as founder and CEO of ABF Creative, based in Newark, Frasier is reaching out to the larger multicultural community through voice.

He has found his niche in podcasting. ABF Creative won a Webby in the Diversity and Inclusion category in 2021 for producing a podcast series called African Folktales. A few years before, Frasier had been an entrepreneur in residence at Newark Venture Partners; and, during the pandemic, ABF Creative joined the NVP Labs accelerator.

The company recently graduated from the fifth cohort of the Multicultural Innovation Lab at Morgan Stanley. That startup accelerator promotes financial inclusion and provides access to capital for early-stage technology and tech-enabled companies led by women and multicultural entrepreneurs. It was the first accelerator accepted into the New Jersey Economic Development Authority's NJ Accelerate program in late 2020.

Success has come to Frasier as the podcasting industry has skyrocketed. The total market size in the U.S., as calculated by Grand View Research, based in San Francisco, is projected to be more than $14 billion in 2021. ABF Creative was an early entrant into the process.

We happened to plant our flag a lot earlier than a lot of people, and so we're starting to see the benefits, Frasier told NJTechWeekly.com.

He discussed the company's data-driven approach to podcasting.

ABF Creative is partnering with Newark-based machine learning startup Veritonic, another NVP Labs company.

Said Frasier: We leverage machine learning and AI (artificial intelligence) to use predictive analysis. We basically test voices, sounds and music against the potential audience before we release the podcast. And, so, that gives us an edge, especially when it comes to making content specifically for people of color. I think it's a unique approach. We are testing this approach out and, so far, so good. Our first podcast that we used this technology with was African Folktales, which is the podcast for which we just won the Webby award.

(Being successful) really came from those relationships that we built for years doing events and doing a lot of local tech community stuff. I was able to leverage a lot of that goodwill that built up to get our first two customers. And, if we didn't have those first two customers, I don't think we would be where we are today. Anthony Frasier, CEO, ABF Creative

Frasier gave NJTechWeekly.com the origin story for the African Folktales podcasts.

During COVID, kids were in the house, he said. We were thinking, What could be something unique and interesting we can do for kids who are at home? So, we decided to use African folktales. For one thing, there's an abundance of them. Then, we decided to create this world around them, with this fictional teacher called Ms. JoJo, who's like the black Ms. Frizzle, if you ever watched The Magic School Bus.

And, so, we put it out, and the reception was amazing. People fell in love with it. Parents listened to it with their children. We get emails all the time. It's part of people's nightly rituals, and we're just fascinated by the result. That just shows that machine learning and what we're doing is the future of content creation.

The Veritonic technology lets ABF Creative know what works better for audience members, he said. For example, you want your podcast to have a memorable voice. Should you use the voice of an older woman or a younger woman, or an older male or a younger man?

We have to test those different assumptions out and look at the score that comes back, Frasier said. From that, we're able to determine which way we want to go.

However, there's more to it than just the score, Frasier explained.

For example, maybe this voice scores high in recall value, but (it scores) low in energy, he said. So, you have to play a game of tug of war with the results that come back, but, once you find the combination that you want, it's easier to choose the voice you'll use.
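
To make that tug of war concrete, here is a minimal sketch of how one might rank candidate voices by combining attribute scores with weights. The voice labels, scores, and weights below are invented for illustration; they are not Veritonic's actual output or methodology.

```python
# Hypothetical attribute scores (0-100) for each candidate voice.
voice_scores = {
    "older_female": {"recall": 82, "energy": 55},
    "younger_female": {"recall": 74, "energy": 80},
    "older_male": {"recall": 68, "energy": 60},
    "younger_male": {"recall": 71, "energy": 77},
}

# Weights express which attribute matters more for this particular show.
weights = {"recall": 0.6, "energy": 0.4}

def weighted_score(scores, weights):
    """Combine per-attribute scores into a single number for ranking."""
    return sum(scores[attr] * w for attr, w in weights.items())

ranked = sorted(voice_scores, key=lambda v: weighted_score(voice_scores[v], weights), reverse=True)
for voice in ranked:
    print(voice, round(weighted_score(voice_scores[voice], weights), 1))
```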

He added that ABF Creative is hanging its hat on data-driven decisions when it comes to content creation.

We feel like this is revolutionizing the way people produce podcasts, he said. What can be a bigger waste of time and money than to go off of a hunch, on what you think would work, spending thousands, if not hundreds of thousands, of dollars to produce something and then you put it out and nobody likes it? All we want to do is be a little bit smarter, and we want to make content creation smarter.

Frasier believes that, by using machine learning and AI, ABF Creative is doing good for the listening community; and the company is backing this belief up through its own polls and research. It is also using AI with its corporate clients.

We work with a lot of larger brands, like, for instance, Ben & Jerry's ice cream, he said. We are officially doing the Ben & Jerry's podcast. We're proud to be able to test these AI and machine-learning approaches out with them and others.

Going back to his beginnings in podcasting, Frasier noted that, when he started out, he had nothing. He didn't even have production equipment or content; he only knew that he wanted to tell stories. But Lyneir Richardson, executive director of the Center for Urban Entrepreneurship & Economic Development at Rutgers Business School, gave him a chance when he pitched a podcast that would feature interviews with venture capitalists. Frasier admits he didn't know much about what he was doing, but the podcast, called VC Cheat Sheet, turned out well.

It really put the name of the Center for Urban Entrepreneurship out there, and it was a very successful podcast for Rutgers, he said.

After that, he wound up getting some professional equipment and pitching Jeff Scott, then New Jersey Devils and Prudential Center vice president of community investment and grassroots, also a Newark-based executive.

It turned out great, he said. We recorded the podcast live at an event at the Grammy Museum at the Prudential Center.

For Frasier, everything comes back to community.

(Being successful) really came from those relationships that we built for years doing events and doing a lot of local tech community stuff, he said. I was able to leverage a lot of that goodwill that built up to get our first two customers. And, if we didn't have those first two customers, I don't think we would be where we are today.

Reach ABF Creative at: abfc.co.

Visit link:
Tech Trends: Newark-based multicultural podcast producer, ABF Creative, leverages machine learning and AI - ROI-NJ.com

Scientists Mapped Every Large Solar Plant on the Planet Using Satellites and Machine Learning – Singularity Hub

An astonishing 82 percent decrease in the cost of solar photovoltaic (PV) energy since 2010 has given the world a fighting chance to build a zero-emissions energy system which might be less costly than the fossil-fueled system it replaces. The International Energy Agency projects that PV solar generating capacity must grow ten-fold by 2040 if we are to meet the dual tasks of alleviating global poverty and constraining warming to well below 2°C.

Critical challenges remain. Solar is intermittent, since sunshine varies during the day and across seasons, so energy must be stored for when the sun doesn't shine. Policy must also be designed to ensure solar energy reaches the furthest corners of the world and places where it is most needed. And there will be inevitable tradeoffs between solar energy and other uses for the same land, including conservation and biodiversity, agriculture and food systems, and community and indigenous uses.

Colleagues and I have now published in the journal Nature the first global inventory of large solar energy generating facilities. Large in this case refers to facilities that generate at least 10 kilowatts when the sun is at its peak (a typical small residential rooftop installation has a capacity of around 5 kilowatts).

We built a machine learning system to detect these facilities in satellite imagery and then deployed the system on over 550 terabytes of imagery using several human lifetimes of computing.

We searched almost half of Earth's land surface area, filtering out remote areas far from human populations. In total we detected 68,661 solar facilities. Using the area of these facilities, and controlling for the uncertainty in our machine learning system, we obtain a global estimate of 423 gigawatts of installed generating capacity at the end of 2018. This is very close to the International Renewable Energy Agency's (IRENA) estimate of 420 GW for the same period.
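
As a rough illustration of the aggregation step involved (a simplified sketch, not the authors' actual pipeline), detected facility footprints can be converted into an approximate capacity figure using an assumed power density. The areas and the 50 MW/km² factor below are placeholders, not values from the study.

```python
# Hypothetical detected facility footprints, in square kilometres.
facility_areas_km2 = [0.12, 0.45, 2.30, 0.08, 5.10]

# Assumed average power density for utility-scale PV (MW per km^2).
# A real study would fit this relationship from ground-truth data instead.
ASSUMED_MW_PER_KM2 = 50.0

total_area_km2 = sum(facility_areas_km2)
estimated_capacity_gw = total_area_km2 * ASSUMED_MW_PER_KM2 / 1000.0

print(f"Detected facilities: {len(facility_areas_km2)}")
print(f"Total area: {total_area_km2:.2f} km^2")
print(f"Estimated capacity: {estimated_capacity_gw:.3f} GW")
```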

Our study shows solar PV generating capacity grew by a remarkable 81 percent between 2016 and 2018, the period for which we had timestamped imagery. Growth was led particularly by increases in India (184 percent), Turkey (143 percent), China (120 percent) and Japan (119 percent).

Facilities ranged in size from sprawling gigawatt-scale desert installations in Chile, South Africa, India, and north-west China, through to commercial and industrial rooftop installations in California and Germany, rural patchwork installations in North Carolina and England, and urban patchwork installations in South Korea and Japan.

Country-level aggregates of our dataset are very close to IRENA's country-level statistics, which are collected from questionnaires, country officials, and industry associations. Compared to other facility-level datasets, we address some critical coverage gaps, particularly in developing countries, where the diffusion of solar PV is critical for expanding electricity access while reducing greenhouse gas emissions. In developed and developing countries alike, our data provides a common benchmark unbiased by reporting from companies or governments.

Geospatially-localized data is of critical importance to the energy transition. Grid operators and electricity market participants need to know precisely where solar facilities are in order to know accurately the amount of energy they are generating or will generate. Emerging in-situ or remote systems are able to use location data to predict increased or decreased generation caused by, for example, passing clouds or changes in the weather.

This increased predictability allows solar to reach higher proportions of the energy mix. As solar becomes more predictable, grid operators will need to keep fewer fossil fuel power plants in reserve, and fewer penalties for over- or under-generation will mean more marginal projects will be unlocked.

Using the back catalogue of satellite imagery, we were able to estimate installation dates for 30 percent of the facilities. Data like this allows us to study the precise conditions which are leading to the diffusion of solar energy, and will help governments better design subsidies to encourage faster growth.

Knowing where a facility is also allows us to study the unintended consequences of the growth of solar energy generation. In our study, we found that solar power plants are most often in agricultural areas, followed by grasslands and deserts.

This highlights the need to carefully consider the impact that a ten-fold expansion of solar PV generating capacity will have in the coming decades on food systems, biodiversity, and lands used by vulnerable populations. Policymakers can provide incentives to instead install solar generation on rooftops which cause less land-use competition, or other renewable energy options.

The GitHub, code, and data repositories from this research have been made available to facilitate more research of this type and to kickstart the creation of a complete, open, and current dataset of the planet's solar energy facilities.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Grégory ROOSE / Pixabay

Read more here:
Scientists Mapped Every Large Solar Plant on the Planet Using Satellites and Machine Learning - Singularity Hub

Top Machine Learning Tools Used By Experts In 2021 – Analytics Insight

The amount of data generated on a day-to-day basis is humongous, so much so that the term coined to identify such a large volume of data is big data. Big data is usually raw and cannot be used to meet business objectives. Thus, transforming this data into a form that is easy to understand is important. This is exactly where machine learning comes into play. With machine learning in place, it is possible to understand the customer demands, their behavioral pattern and a lot more, thereby enabling the business to meet its objectives. For this very purpose, companies and experts rely on certain machine learning tools. Here is our find of top machine learning tools used by experts in 2021. Have a look!

Keras is a free and open-source Python library popularly used for machine learning. Designed by Google engineer François Chollet, Keras acts as an interface for the TensorFlow library. In addition to being user-friendly, this machine learning tool is quick, easy and runs on both CPU and GPU. Keras is written in Python and functions as an API for neural networks.
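
As a quick taste of that interface, here is a minimal Keras sketch that defines and compiles a small feed-forward classifier; the layer sizes and the 20-feature, 3-class setup are arbitrary choices for illustration, and the model is left untrained.

```python
# Minimal Keras example: define and compile a small feed-forward classifier.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(20,)),               # 20 input features (arbitrary)
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(3, activation="softmax"),  # 3 output classes (arbitrary)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.summary()
```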

Yet another widely used machine learning tool across the globe is KNIME. It is easy to learn, free and ideal for data reporting, analytics, and integration platforms. One of the many remarkable features of this machine learning tool is that it can integrate codes of programming languages like Java, JavaScript, R, Python, C, and C++.

WEKA, designed at the University of Waikato in New Zealand, is a tried-and-tested solution for open-source machine learning. This machine learning tool is considered ideal for research, teaching, and creating powerful applications. It is written in Java and supports platforms like Linux, Mac OS, and Windows. It is extensively used for teaching and research purposes and also for industrial applications, for the sole reason that the algorithms employed are easy to understand.

Shogun, an open-source and free-to-use software library for machine learning, is quite easily accessible for businesses of all backgrounds and sizes. Shogun's solution is written entirely in C++. One can access it in other development languages, including R, Python, Ruby, Scala, and more. From regression and classification to hidden Markov models, this machine learning tool has got you covered.

If you are a beginner, then there cannot be a better machine learning tool to start with than Rapid Miner. That is because it doesn't require any programming skills in the first place. This machine learning tool is considered to be ideal for text mining, data preparation, and predictive analytics. Designed for business leaders, data scientists, and forward-thinking organisations, Rapid Miner surely has grabbed attention for all the right reasons.

TensorFlow is yet another machine learning tool that has gained immense popularity in no time. This open-source framework blends both neural network models with other machine learning strategies. With its ability to run on both CPU as well as GPU, TensorFlow has managed to make it to the list of favourite machine learning tools.
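
Beyond the Keras layer API, TensorFlow also exposes lower-level building blocks. The sketch below fits a one-parameter line with gradient descent using tf.GradientTape, purely as an illustration of that lower-level style; the data is made up.

```python
# Minimal TensorFlow example: fit y = w * x with gradient descent.
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0, 4.0])
y = tf.constant([2.0, 4.0, 6.0, 8.0])  # true relationship is y = 2x

w = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * x - y))  # mean squared error
    grads = tape.gradient(loss, [w])
    optimizer.apply_gradients(zip(grads, [w]))

print(f"learned w: {w.numpy():.3f}")  # should approach 2.0
```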


See the rest here:
Top Machine Learning Tools Used By Experts In 2021 - Analytics Insight

An Illustrative Guide to Extrapolation in Machine Learning – Analytics India Magazine

Humans excel at extrapolating in a variety of situations. For example, we can use arithmetic to solve problems with infinitely big numbers. One can question if machine learning can do the same thing and generalize to cases that are arbitrarily far apart from the training data. Extrapolation is a statistical technique for estimating values that extend beyond a particular collection of data or observations. In this article, we shall explain its primary aspects, contrast it with interpolation, and attempt to connect it to machine learning. The following are the main points to be discussed in this article.

Let's start the discussion by understanding extrapolation.

Extrapolation is a sort of estimation of a variables value beyond the initial observation range based on its relationship with another variable. Extrapolation is similar to interpolation in that it generates estimates between known observations, but it is more uncertain and has a higher risk of giving meaningless results.

Extrapolation can also refer to a method's expansion, presuming that similar methods are applicable. Extrapolation is a term that refers to the process of projecting, extending, or expanding known experience into an unknown or previously unexperienced area in order to arrive at a (typically speculative) understanding of the unknown.

Extrapolation is a method of estimating a value outside of a defined range. Let's take a general example. If you're a parent, you may recall your youngster calling any small four-legged critter a cat because their first classifier employed only a few traits. They were also able to correctly identify dogs after being trained to extrapolate and factor in additional attributes.

Even for humans, extrapolation is challenging. Our models are interpolation machines, no matter how clever they are. Even the most complicated neural networks may fail when asked to extrapolate beyond the limitations of their training data.

Machine learning has traditionally only been able to interpolate data, that is, generate predictions about a scenario that is between two other, known situations. Because machine learning only learns to model existing data locally as accurately as possible, it cannot extrapolate; that is, it cannot make predictions about scenarios outside of the known conditions. It takes time and resources to collect enough data for good interpolation, and it necessitates data from extreme or dangerous settings.

In regression problems, we use data to generalize a function that maps a set of input variables X to a set of output variables y. A y value can be predicted for any combination of input variables using this function mapping. When the input variables are located between the training data, this procedure is referred to as interpolation; however, if the point of estimation is located outside of this region, it is referred to as extrapolation.

The grey and white sections in the univariate example in the figure above show the extrapolation and interpolation regimes, respectively. The black lines reflect a selection of polynomial models that were used to make predictions within and outside of the training data set.

The models are well limited in the interpolation regime, causing them to collapse in a tiny region. However, outside of the domain, the models diverge, producing radically disparate predictions. The absence of information given to the model during training that would confine the model to predictions with a smaller variance is the cause of this large divergence of predictions (despite being the same model with slightly different hyperparameters and trained on the same set of data).

This is the risk of extrapolation: model predictions outside of the training domain are particularly sensitive to training data and model parameters, resulting in unpredictable behaviour unless the model formulation contains implicit or explicit assumptions.

In the absence of training data, most learners do not specify the behaviour of their final functions. They're usually made to be universal approximators or as close as possible with few modelling constraints. As a result, in places where there is little or no data, the function has very little previous control. As a result, we can't regulate the behaviour of the prediction function at extrapolation points in most machine learning scenarios, and we can't tell when this is a problem.

Extrapolation should not be a problem in theory; in a static system with a representative training sample, the chances of having to anticipate a point of extrapolation are essentially zero. However, most training sets are not representative, and they are not derived from static systems, therefore extrapolation may be required.

Even empirical data derived from a product distribution can appear to have a strong correlation pattern when scaled up to high dimensions. Because functions are learned based on an empirical sample, they may be able to extrapolate effectively even in theoretically dense locations.

Extrapolation works with linear and other types of regression to some extent, but not with decision trees or random forests. In the Decision Tree and Random Forest, the input is sorted and filtered down into leaf nodes that have no direct relationship to other leaf nodes in the tree or forest. This means that, while the random forest is great at sorting data, the results can't be extrapolated because it doesn't know how to classify data outside of the domain.

A good decision on which extrapolation method to use is based on a prior understanding of the process that produced the existing data points. Some experts have recommended using causal factors to assess extrapolation approaches. We will see a few of them below. These are purely mathematical methods; one should relate them to the problem at hand.

Linear extrapolation is the process of drawing a tangent line from the known data's end and extending it beyond that point. Only use linear extrapolation to extend the graph of an essentially linear function, or not too far beyond the existing data, to get good results. If the two data points closest to the point x* to be extrapolated are (xk-1, yk-1) and (xk, yk), linear extrapolation produces the estimate y(x*) = yk-1 + ((x* - xk-1) / (xk - xk-1)) * (yk - yk-1).

A polynomial curve can be built using all of the known data or just a small portion of it (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The curve that results can then be extended beyond the available data. The most common way of polynomial extrapolation is to use Lagrange interpolation or Newton's method of finite differences to generate a Newton series that matches the data. The data can be extrapolated using the obtained polynomial.
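
A minimal sketch of polynomial extrapolation with NumPy, fitting a quadratic to made-up observations and then evaluating it beyond the observed range, might look like this:

```python
import numpy as np

# Made-up observations of a roughly quadratic process.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 1.8, 4.1, 9.2, 16.5])

# Fit a degree-2 polynomial to the known data.
coeffs = np.polyfit(x, y, deg=2)

# Evaluate the fitted polynomial beyond the observed range (extrapolation).
x_new = np.array([5.0, 6.0])
y_extrapolated = np.polyval(coeffs, x_new)

for xv, yv in zip(x_new, y_extrapolated):
    print(f"x = {xv:.1f}, extrapolated y = {yv:.2f}")
```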

Five spots near the end of the given data can be used to make a conic section. If the conic section is an ellipse or a circle, it will loop back and rejoin itself when extrapolated. A parabola or hyperbola that has been extrapolated will not rejoin itself, but it may curve back toward the X-axis. A conic-section template (on paper) or a computer could be used for this form of extrapolation.

Further, we will see a simple Python implementation of linear extrapolation.

The technique is beneficial when the linear function is known. It's done by drawing a tangent and extending it beyond the limit. When the projected point is close to the rest of the points, linear extrapolation delivers a decent result.
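
The code listing from the original article did not survive this excerpt; a minimal sketch of the linear extrapolation formula given above, with made-up sample points, could look like this:

```python
def linear_extrapolate(x_known, y_known, x_star):
    """Extrapolate y at x_star using the last two known points."""
    (x1, y1), (x2, y2) = zip(x_known[-2:], y_known[-2:])
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (x_star - x1)

# Known, roughly linear data (made up for illustration).
x_known = [1, 2, 3, 4, 5]
y_known = [2.1, 4.0, 6.2, 7.9, 10.1]

print(linear_extrapolate(x_known, y_known, 7))  # estimate beyond the known range
```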

Extrapolation is a helpful technique, but it must be used in conjunction with the appropriate model for describing the data, and it has limitations once you leave the training area. Its applications include predicting in situations where you have continuous data, such as time, speed, and so on. Prediction is notoriously imprecise, and the accuracy falls as the distance from the learned area grows. In situations where extrapolation is required, the model should be updated and retrained to lower the margin of error. Through this article, we have understood extrapolation and interpolation mathematically, related them to ML, and seen their effect on an ML system. We have also seen where extrapolation fails, and the methods that can be used in those cases.

Read more from the original source:
An Illustrative Guide to Extrapolation in Machine Learning - Analytics India Magazine

Origami Therapeutics developing treatments for neurodegenerative diseases with ML and computational chemistry – MedCity News

Beth Hoffman, Origami Therapeutics CEO, talks about how the company enlists machine learning and computational chemistry to develop ways to treat neurodegenerative diseases, in response to emailed questions.

Why did you start Origami Therapeutics?

I started Origami Therapeutics because I saw an opportunity to develop a different approach to treating neurodegenerative diseases by using protein correctors and degraders. I was at Vertex Pharmaceuticals for more than 7 years, leading their drug discovery efforts, which ultimately led to the development of the current blockbuster cystic fibrosis drugs, Orkambi (lumacaftor/ivacaftor) and Kalydeco (ivacaftor).

Prior to Vertex, I was scientific executive director at Amgen, where I built and guided their neuropsychiatric drug discovery group, a new disease area. I was also elected to the Scientific Advisory Board for Amgen Ventures to evaluate Series A and Series B investment opportunities. I was also previously Head of Neuroscience at Eli Lilly, where I established a new research group and oversaw strategic planning and execution on our novel targets portfolio.

By combining my expertise gained in neuroscience drug development at Amgen and Eli Lilly with my experience at Vertex developing protein correctors for cystic fibrosis, Origami was founded. Leveraging my experience in discovering transformational therapies for cystic fibrosis that modulate CFTR conformation, our focus is to treat neurodegeneration by directly modulating pathogenic proteins.

Beth Hoffman

What does the company do?

Based in San Diego, Origami is developing a pipeline of precision protein degraders and correctors to treat neurodegenerative diseases caused by toxic protein misfolding, beginning with Huntington's Disease (HD). We are discovering compounds using our precision technology platform, Oricision, focused on high-value targets, with the potential to deliver more potent therapies and to address the >80% of proteins that evade inhibition and have been undruggable by traditional approaches.

We are also pioneering the adoption of spheroid brain cell cultures, a 3-D cell culture system of multiple brain cell types, to create patient-derived cell models of neurological disease. Origami is using machine learning and computational chemistry to optimize small molecules that prevent mutant huntingtin (mHTT) pathology in human neurons.

What sets your company apart?

Most of the industry's current programs in protein degradation are in oncology. At Origami, we are developing a novel pipeline of small molecule, disease-modifying protein degraders and corrector therapeutics for neurodegenerative diseases.

Origami's discovery platform, Oricision, enables the discovery and development of both protein degraders and conformation correctors, allowing us to match the best drug to treat each disease using AI- and machine learning-driven, patient-derived disease models, including brain organoids (or spheroids), to enhance translation to the clinic.

In oncology, companies are targeting a protein that, when eliminated, causes the cancer cell to die. For neurological diseases, we don't want brain cells to die, so we must find a means to reduce the toxic protein in a way that protects and saves a patient's nerve cells, preserving healthy, thriving cells and preventing dysfunction.

Origami is taking a fundamentally different approach to protein degradation, a more elegant approach that spares functional proteins while selectively eliminating toxic misfolded forms. Our competitive advantage is we're developing a novel pipeline of small molecules that target the underlying cause of disease, beginning with mutant huntingtin protein (mHTT), the only validated target for HD.

Oral delivery enables non-invasive treatment throughout the body, and early peripheral blood-based biomarkers guide timing for brain biomarker assessment. Our lead candidate ORI-113 targets toxic misfolded mHTT for elimination via natural degradation pathways with the goal of sparing HTT function. Conformation correctors prevent/repair protein misfolding, eliminating toxic effects while preserving HTT function.

What specific need are you addressing in healthcare/ life sciences?

Since many neurodegenerative diseases are caused by protein misfolding, there is a significant opportunity to develop drugs that address the underlying cause of the disease using a mechanism that could halt, potentially reverse, and hopefully prevent the disease entirely.

We believe that neuroscience investment is seeing a huge renaissance moment right now, and there is a huge opportunity with increased pharma interest and growth in this space, in areas of significant unmet medical need such as HD, Alzheimer's disease, Parkinson's disease, and other disorders.

At what stage of development is your lead product?

Our lead compound ORI-113 is in pre-clinical development for Huntington's disease, a huge unmet medical need for patients, where no FDA approved drugs slow, halt, prevent or reverse disease progression. The currently approved drugs only partially treat motor symptoms of HD, with significant side-effects.

We selected HD as our lead indication since it is a monogenic, dominantly inherited fatal neurodegenerative disease characterized by a triad of symptoms: motor, psychiatric and cognitive impairment. Typically diagnosed between 30 and 50 years of age, HD is a systemic disease with dysfunction observed throughout the body, including immune, cardiovascular, digestive and endocrine systems as well as skeletal muscle.

There are large HD patient registries in North America and Europe, so we can select patients at a very precise stage of the disease. In addition to being able to select the right patients for our future studies, we have diagnostics to evaluate how well they respond to the treatment and we can relatively quickly know if our drug is working.

HD is an orphan disease with 71,000 symptomatic patients in the U.S. & Europe and 250,000 individuals at risk for inheriting the gene that causes HD, 50% of whom are anticipated to be gene-positive. The worldwide patient population is estimated at 185,000.

Do you have clinical validation for your product?

Our lead candidate ORI-113 for HD is currently in preclinical development.

What are some milestones you have achieved?

We have conducted a proprietary high throughput screen (HTS), hit expansion/hit-to-lead, and our initial mechanism of action (MoA) studies, which show that our molecules suppress mHTT toxicities. We've also secured a broad IP portfolio. Our team is based in Biolabs San Diego, where we have a wet lab and have built a scientific and research team.

What's next for the company this year?

We are currently in the process of raising our seed financing round. The seed funding will advance our lead protein degrader into lead optimization for HD and additional programs. Currently, we are selecting the optimal protein degrader to advance into pre-clinical studies for HD and initiating programs for additional indications.

We are evaluating several molecules and will select the best one, with the aim of choosing a clinical candidate compound in 12-18 months. Besides the degrader molecules, we expect to advance conformation correctors, which restore protein function by fixing the misfolded structures.

Photo: Andrzej Wojcicki, Getty Images

Read more from the original source:
Origami Therapeutics developing treatments for neurodegenerative diseases with ML and computational chemistry - MedCity News

Machine Learning Trends to Watch 2021 – Datamation

Machine learning (ML), a commonly used type of artificial intelligence (AI), is one of the fastest-growing fields in technology.

Especially as the workplace, products, and service expectations are changing through digital transformations, more companies are leaning into machine learning solutions to optimize, automate, and simplify their operations.

So what does ML technology look like today and where is it heading in the future? Read on to learn about some of the top trends in machine learning today.

More on the ML market: Machine Learning Market

Many businesses are investing significant time and resources into ML development because they recognize its potential for automation.

When an ML model is designed with business processes in mind, it can automate a variety of business functions across marketing, sales, HR, and even network security. MLOps and AutoML are two of the most popular applications of machine learning today, giving teams the ability to automate tasks and bring DevOps principles to machine learning use cases.

Read Maloney, SVP of marketing at H2O.ai, a top AI and hybrid cloud company, believes that both MLOps and AutoML strategies eliminate several traditional business blockers.

Scaling AI for the enterprise requires a new set of tools and skills designed for modern infrastructure and collaboration, Maloney said. Teams using manual deployment and management find they are quickly strapped for resources and after getting a few models into production, cannot scale beyond that.

Machine learning operations (MLOps), is the set of practices and technology that enable organizations to scale and manage AI in production, essentially bringing the development practice of DevOps to machine learning. MLOps helps data science and IT teams collaborate and empowers IT teams to lead production machine learning projects, without having to rely on data science expertise.

AutoML solves a few of the biggest blockers to ML adoption, including faster time to ROI and more quickly and easily developing models. AutoML automates key parts of the data science workflow to increase productivity, without compromising model quality, interpretability, and performance.

With AutoML, you can automate algorithm selection, feature generation, hyper-parameter tuning, iterative modeling, and model assessment. By automating repetitive tasks in the workflow, data scientists can focus on the data and the business problems they are trying to solve and speed time from experiment to impact.
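
As one concrete ingredient of that workflow, automated hyper-parameter search can be sketched with scikit-learn's RandomizedSearchCV. This is a generic illustration on synthetic data, not any particular vendor's AutoML product.

```python
# A minimal sketch of automated hyper-parameter tuning, one piece of AutoML.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=10,   # try 10 random combinations
    cv=3,        # 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```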

Automation through ML is desirable in theory, but in practice, it's sometimes difficult for business leaders to envision how ML tools can optimize their business operations.

Amaresh Tripathy, SVP and global business leader at Genpact, a digital transformation and professional services firm, offered some common examples of how MLOps and MLOps-as-a-service help businesses in various industries.

One [MLOps] example is using AI models to efficiently direct sales teams to identify the next best customer, Tripathy said. Another is optimizing pricing and revenue management systems using dynamic demand forecasting.

AI and automation in the workforce: Artificial Intelligence and Automation

Machine learning is still considered a niche and complex technology to develop, but a growing segment of tech professionals are working to democratize the field, particularly by making ML solutions more widely accessible.

Jean-François Gagné, head of AI product and strategy at ServiceNow, a workflow management software company, believes that ML democratization involves creating easier access to develop and deploy ML models as well as giving more people access to useful ML training data.

Good training data is often scarce, Gagné said. Low-data learning techniques are helping in enterprise AI use cases, where customers want to adapt pre-trained out-the-box models to their unique business context. In most cases, their own data sets are not that big, but methods such as transfer learning, self-supervised learning, and few-shot learning help minimize the amount of labeled training data needed for an application.

ML democratization is also about creating tools that consider the backgrounds and use cases of a more diverse range of users.

Brian Gilmore, director of IoT product management at InfluxData, a database solutions company, believes that more users and developers are starting to recognize the benefit of a diverse team for developing ML solutions.

Ignoring the technical for a moment, we must focus on the human aspects of AI as well, Gilmore said. There seems to be a trend building around the democratization of the ML ecosystem, bringing more diverse stakeholders to the table no matter where in the value chain.

Bias is probably the single greatest obstacle to ML efficacy, and leading companies are learning to combat bias and build better applications by embracing diversity and inclusion (D&I).

ML needs additional variety in training data, for sure. Still, we should also consider the positive impact of D&I on the teams that design, build, label, and deliver the ML-driven applications this can genuinely differentiate ML products.

More on data democratization: Data Democratization Trends

ML developers are increasingly creating their models in containers.

When a machine learning product is developed and deployed within a containerized environment, users can ensure that its operational power is not negatively impacted by other programs running on the server. More importantly, ML becomes more scalable through containerization, as the packaged model makes it possible to migrate and adjust ML workloads over time.

Ali Siddiqui, chief product officer at BMC, a SaaS company with a variety of ITOps solutions, believes that containerized development of machine learning is the best way forward, particularly in the case of digital enterprises incorporating autonomous operations.

It's trending to use machine learning workloads in containers, Siddiqui said. Containers allow autonomous digital enterprises to have isolation, portability, unlimited scalability, dynamic behavior, and rapid change through advanced enterprise DevOps processes.

ML workloads are typically spiky and require high scalability and in some cases, real-time stream processing. For instance, when you take a look at ML projects, they typically have two phases: algorithm creation and algorithm execution. The first involves a lot of data and data processing. The second typically requires a lot of compute power in production. Both can benefit from container deployment to ensure scalability and availability.

More on containerization: Containers are Shaping IoT Development

In another trending effort toward ML democratization, a number of ML developers have perfected their models over time and found ways to create template-like versions, available to a wider pool of users via API and other integrations.

Bali D.R., SVP at Infosys, a global digital services and consulting firm, believes that prepackaged ML tools, particularly via APIs and digital storefronts, are some of the most common and useful applications of machine learning today:

API-fication of ML models is another key trend we are seeing, whether it is GPT3, CODEX, or even Hugging Face, where they train and deploy state-of-the-art NLP models and make them available as web APIs or Python packages for inferencing, DR said. [There's also] AI stores with pre-trained models exposed via APIs, which provide a drag-and-drop option for AI development across enterprises.
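
As an example of the kind of prepackaged model DR describes, Hugging Face's transformers library exposes pre-trained NLP models behind a one-line pipeline interface (the default model is downloaded on first use):

```python
# Using a pre-trained NLP model via the Hugging Face transformers pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # pulls a default pre-trained model
result = classifier("Prepackaged ML models make prototyping much faster.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```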

Also read: Artificial Intelligence vs. Machine Learning

Machine learning models can only improve their functionality over time if they are consistently fed new data in intervals. Since so many ML models rely on timeline-based updates, a number of ML solutions are using a time series approach to improve the model's understanding of the what, when, and why behind different data sets.

Read Maloney of H2O.ai explained why time series solutions are necessary for truly predictive ML:

On a long enough horizon, all problems eventually become time series problems, Maloney said. ML is a phenomenal method for predicting events in real-time, and as we observe these predictions over time, we need more and more time series solutions.

Every business needs to make predictions, whether forecasting sales, estimating product demand, or predicting future inventory levels. In all cases, data is necessary as well as specific techniques and tools to account for time.
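
A deliberately simple sketch of a time series forecast, projecting a moving average of made-up monthly sales figures forward (a naive baseline, not a production method), could look like this:

```python
import pandas as pd

# Made-up monthly sales figures.
sales = pd.Series(
    [120, 132, 128, 150, 161, 158, 172, 180],
    index=pd.date_range("2021-01-01", periods=8, freq="MS"),
)

# Naive forecast: project the mean of the last three observations forward.
window_mean = sales.tail(3).mean()
future_index = pd.date_range(
    sales.index[-1] + pd.offsets.MonthBegin(), periods=3, freq="MS"
)
forecast = pd.Series(window_mean, index=future_index)

print(forecast)
```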

Selecting the right machine learning support for your business: Top Machine Learning Companies

Continue reading here:
Machine Learning Trends to Watch 2021 - Datamation

The Pixel 6's Tensor processor promises to put Google's machine learning smarts in your pocket – The Verge

Google's Pixel 6 and Pixel 6 Pro are officially here, and with them, the debut of Google's new Tensor chip. Google has finally revealed more information on what the new SoC can actually do, for the fastest Pixel phones ever.

The initial reveal of the Pixel 6 and the Tensor chip was largely centered on its AI-focused TPU (Tensor processing unit) and how the custom hardware would help Google differentiate itself from competitors.

That's still the big focus of Google's announcement today: the company calls Tensor a milestone for machine learning that was co-designed alongside Google Research to allow it to easily translate AI and machine learning advances into actual consumer products. For example, Google says that the Tensor chip will have the most accurate Automatic Speech Recognition (ASR) that it's offered, for both quick Google Assistant queries and longer audio tasks like live captions or the Recorder app.

Tensor also enables new Pixel 6 features like Motion Mode, more accurate face detection, and live translations that can convert text to a different language as quickly as you can type it. Google also says that the Tensor chip will handle dedicated machine learning tasks with far more power efficiency than previous Pixel phones.

But there's a lot more to a smartphone chip than its AI chops, and with the reveal of the Pixel 6, we finally have more details on the rest of the chip, including the CPU, GPU, modem, and the major components that make Tensor tick.

As rumored, the Tensor chip uses a unique combination of CPU cores. There's the custom TPU (Tensor Processing Unit) for AI, two high-power Cortex-X1 cores, two midrange cores (rumored to be older Cortex-A76 designs), and then four low-power efficiency cores (likely Arm's usual Cortex-A55 designs). Graphics are offered by a 20-core GPU, in addition to a context hub that powers ambient experiences like the always-on display, a private compute core, and a new Titan M2 chip for security. There's also a dedicated image processing core to help with the Pixel's hallmark photography.

It's not entirely clear why Google would choose to use the Cortex-A76 cores instead of the more modern Cortex-A78 (which are both more powerful and more power efficient). But it is worth noting that the Pixel 5's Snapdragon 765G also used two Cortex-A76 cores for its main CPU cores, so it's possible Google is sticking with what it knows.

The new phones should still be the fastest Pixel phones yet, with Google promising 80 percent faster CPU performance compared to the Pixel 5, and 370 percent faster GPU performance.

The real question, though, is how the Pixel 6 and its Tensor chip hold up compared to other traditional Android flagships. Google's CPU configuration is a unique one, compared to the more traditional four high-performance and four efficiency cores used by major Qualcomm and Samsung chips.

In theory, Google is offering double the number of X1 performance cores (the most powerful Arm design) compared to the Snapdragon 888 or Exynos 2100, which both use a single Cortex-X1, three Cortex-A78, and four Cortex-A55 cores. But Google is also swapping out the two high-end cores with midrange ones, which may help battery life and performance... or may just result in a weaker overall device. We'll find out soon once we've had the chance to put the Pixel 6 and Tensor through their paces.

Read more here:
The Pixel 6s Tensor processor promises to put Googles machine learning smarts in your pocket - The Verge

Machine learning study identifies facial features that are central to first impressions – PsyPost

A study published in Social Psychological and Personality Science presents evidence that people make judgments about strangers' personalities based on how closely their resting faces resemble emotional expressions. It was found that among seven classes of facial characteristics, resemblance to emotional expressions was the strongest predictor of impressions of both trustworthiness and dominance.

It has long been demonstrated that people form rapid impressions of others based on their physical appearances. Such quick judgments can have strong repercussions for example, when juries are forming impressions of the accused during criminal trials or when hiring managers are screening potential candidates.

One thing I find fascinating about first impressions is how quickly and intuitively they come to mind. For example, I might see a stranger on the train and immediately get the feeling that they cannot be trusted. I want to understand where these intuitions come from. What is it about a person's appearance that makes them appear untrustworthy, intelligent, or dominant to us? said study author Bastian Jaeger, an assistant professor at the Vrije Universiteit Amsterdam.

While many studies have identified specific facial characteristics that are associated with personality impressions, Jaeger and his colleague Alex L. Jones note that this type of research comes with its challenges. Since many facial features are correlated, it is tricky to identify the unique effects of a given characteristic. For example, if a face is manipulated to look more like it is smiling, these adjustments will also influence the babyfacedness of the face. For this reason, Jaeger and Jones set out to examine the relative predictive value of a given facial characteristic for personality impressions, by examining a wide range of facial features at once.

The researchers analyzed a dataset from the Chicago Face Database, which included 597 faces of individuals maintaining a neutral expression in front of a plain background. The dataset had previously been presented to a sample of 1,087 raters who each rated a subset of 10 faces on a wide range of characteristics. These characteristics included attractiveness, unusualness, babyfacedness, dominance, and trustworthiness of the face. The sample also rated the extent that faces resembled six emotional expressions happiness, sadness, anger, disgust, fear, and surprise.

In total, the database included information on 28 facial features which the researchers divided into seven categories: demographics, morphological features, facial width-to-height ratio (fWHR), perceived attractiveness, perceived unusualness, perceived babyfacedness, and emotion resemblance.

Using machine learning, Jaeger and Jones tested the predictive value of each of these classes of facial features for impressions of trustworthiness and dominance. It was found that resemblance to emotional expressions was the best predictor for perceptions of both trustworthiness and dominance. Emotion resemblance also explained the most variance in perceptions of trustworthiness and dominance out of all seven classes.

Next, using regression analysis, the researchers examined the relative predictive value of each of the 28 facial features. Here, they found that resemblance to a happy expression was the strongest predictor of trustworthiness. Attractiveness and being Asian were also substantial positive predictors, and resemblance to an angry expression was a fairly strong negative predictor. For perceptions of dominance, resemblance to an angry expression was the strongest positive predictor, and being female was the strongest negative predictor. Contrary to previous findings, fWHR was not a strong predictor of either trustworthiness or dominance perceptions.

The study's authors say this pattern of findings is in line with a phenomenon called emotion overgeneralization, which posits that people are especially sensitive to reading emotions in other people's faces since emotions convey highly relevant social information. Because of this oversensitivity, people end up detecting emotions even in neutral faces that structurally resemble emotional expressions. This information is then used to infer personality characteristics from the face, such as trustworthiness.

We shouldn't be too confident in our first impressions, Jaeger told PsyPost. They might come to mind easily and effortlessly, but not because we are so good at judging others. Rather, it seems like our oversensitive emotion detection system makes us see things in others' faces. Even when a person is not sending any emotional signals, we might detect a smile, just because the corners of their mouth are slightly tilted upwards. And because of our tendency to overgeneralize from emotional states to psychological traits, we not only think that they are happy right now, but that they are happy, outgoing, and trustworthy in general.

Notably, the results imply that there are additional features that relate to impression formation that the study did not test for. Emotion resemblances explained 53% and 42% of the variance in trustworthiness and dominance perceptions, Jaeger and Jones report. Even the optimized Elastic Net models explained around 68% of the variance, indicating there are other important factors contributing to personality impressions. Future studies should attempt to uncover more predictors and shed additional light on the relative importance of specific facial features.
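
To make the modelling approach concrete, here is a sketch of the general idea using scikit-learn's cross-validated Elastic Net on synthetic stand-in data; it is not the authors' code, and the feature matrix is randomly generated rather than drawn from the Chicago Face Database.

```python
# Sketch: predicting trustworthiness ratings from facial-feature ratings
# with a cross-validated Elastic Net. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_faces, n_features = 597, 28  # matches the dataset dimensions described above
X = rng.normal(size=(n_faces, n_features))  # e.g. emotion resemblance, attractiveness, ...
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_faces)  # synthetic ratings

X_scaled = StandardScaler().fit_transform(X)
model = ElasticNetCV(cv=5).fit(X_scaled, y)

print("R^2 on training data:", round(model.score(X_scaled, y), 2))
print("indices of largest coefficients:", np.argsort(np.abs(model.coef_))[-3:])
```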

Our findings are based on relatively large and demographically diverse samples of raters and targets, but they were all from the United States, Jaeger noted. It's important to test the generalizability of our results. We find that first impressions are largely based on how much a person's facial features resemble a smile or a frown, but is that also true for people in China, Chile, or Chad?

The study, "Which Facial Features Are Central in Impression Formation?", was authored by Bastian Jaeger and Alex L. Jones.

Read the original:
Machine learning study identifies facial features that are central to first impressions - PsyPost