Psychologists use machine learning algorithm to pinpoint top predictors of cheating in a relationship – PsyPost

According to a study published in the Journal of Sex Research, relationship characteristics like relationship satisfaction, relationship length, and romantic love are among the top predictors of cheating within a relationship. The researchers used a machine learning algorithm to pinpoint the top predictors of infidelity among over 95 different variables.

While a host of studies have investigated predictors of infidelity, the research has largely produced mixed and often contradictory findings. Study authors Laura M. Vowels and her colleagues aimed to address these inconsistencies by using machine learning models, an approach that allowed them to compare the relative predictive power of various relationship factors within the same analyses.

"The research topic was actually suggested by my co-author, Dr. Kristen Mark, who was interested in understanding predictors of infidelity better. She has previously published several articles on infidelity and is interested in the topic," explained Vowels, a principal researcher for Blueheart.io and postdoctoral researcher at the University of Lausanne.

Vowels and her team pooled data from two different studies. The first data set came from a study of 891 adults, the majority of whom were married or cohabitating with a partner (63%). Around 54% of the sample identified as straight, 21% identified as bisexual, 11% identified as gay, and 7% identified as lesbian. A second data set was collected from both members of 202 mixed-sex couples who had been together for an average of 9 years, the majority of whom were straight (93%).

Data from the two studies included many of the same variables, such as demographic measures like age, race, sexual orientation, and education, in addition to assessments of participants' sexual behavior, sexual satisfaction, relationship satisfaction, and attachment styles. Both studies also included a measure of in-person infidelity (having interacted sexually with someone other than one's current partner) and online infidelity (having interacted sexually with someone other than one's current partner on the internet).

Using machine learning techniques, the researchers analyzed the data sets together, first for all respondents and then separately for men and women. They then identified the top ten predictors for in-person cheating and for online cheating. Across both samples and among both men and women, higher relationship satisfaction predicted a lower likelihood of in-person cheating. By contrast, higher desire for solo sexual activity, higher desire for sex with one's partner, and being in a longer relationship predicted a higher likelihood of in-person cheating. In the second data set only, greater sexual satisfaction and romantic love predicted a lower likelihood of in-person infidelity.
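The article does not spell out the researchers' exact modeling pipeline, but the general workflow of ranking many candidate predictors with a tree ensemble and an explainability technique can be sketched roughly as follows. The column names and data below are placeholders, not the study's actual variables or results.

```python
# Rough sketch (not the study's code): rank predictors of a binary outcome
# with a random forest and permutation importance. All variable names and
# data here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "relationship_satisfaction": rng.normal(size=n),
    "relationship_length_years": rng.uniform(0, 30, size=n),
    "sexual_desire": rng.normal(size=n),
    "romantic_love": rng.normal(size=n),
})
# Synthetic outcome loosely tied to two predictors, just so the demo runs.
y = (0.8 * X["relationship_length_years"] / 30
     - 0.9 * X["relationship_satisfaction"]
     + rng.normal(scale=0.5, size=n)) > 0.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance ranks predictors by how much shuffling each one
# degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```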

When it came to online cheating, greater sexual desire and being in a longer relationship predicted a higher likelihood of cheating. Never having had anal sex with one's current partner decreased the likelihood of cheating online, a finding the authors say likely reflects more conservative attitudes toward sexuality. In the second data set only, higher relationship and sexual satisfaction also predicted a lower likelihood of cheating.

"Overall, I would say that there isn't one specific thing that would predict infidelity. However, relationship-related variables were more predictive of infidelity compared to individual variables like personality. Therefore, preventing infidelity might be more successful by maintaining a good and healthy relationship rather than thinking about specific characteristics of the person," Vowels told PsyPost.

Consistent with previous studies, relationship characteristics like romantic love and sexual satisfaction surfaced as top predictors of infidelity across both samples. The researchers say this suggests that the strongest predictors for cheating are often found within the relationship, noting that "addressing relationship issues may buffer against the likelihood of one partner going out of the relationship to seek fulfillment."

"These results suggest that intervening in relationships when difficulties first arise may be the best way to prevent future infidelity. Furthermore, because sexual desire was one of the most robust predictors of infidelity, discussing sexual needs and desires and finding ways to meet those needs in relationships may also decrease the risk of infidelity," the authors report.

The researchers emphasize that their analysis involved predicting past experiences of infidelity from an array of present-day assessments. They say that this design may have affected their findings, since couples who had previously dealt with cheating within the relationship may have worked through it by the time they completed the survey.

"The study was exploratory in nature and didn't include all the potential predictors," Vowels explained. "It also predicted infidelity in the past rather than current or future infidelity, so there are certain elements like relationship satisfaction that might have changed since the infidelity occurred. I think in the future it would be useful to look into other variables and also look at recent infidelity because that would make the measure of infidelity more reliable."

The study, "Is Infidelity Predictable? Using Explainable Machine Learning to Identify the Most Important Predictors of Infidelity", was authored by Laura M. Vowels, Matthew J. Vowels, and Kristen P. Mark.

As machine learning becomes standard in military and politics, it needs moral safeguards | TheHill – The Hill

Over the past decade, the world has experienced a technological revolution powered by machine learning (ML). Algorithms remove the decision fatigue of purchasing books and choosing music, and the work of turning on lights and driving, allowing humans to focus on activities more likely to optimize their sense of happiness. Futurists are now looking to bring ML platforms to more complex aspects of human society, specifically warfighting and policing.

Technology moralists and skeptics aside, this move is inevitable, given the need for rapid security decisions in a world with information overload. But as ML-powered weapons platforms replace human soldiers, the risk of governments misusing ML increases. Citizens of liberal democracies can and should demand that governments pushing for the creation of intelligent machines for warfighting include provisions maintaining the moral frameworks that guide their militaries.

In his popular book The End of History, Francis Fukuyama summarized debates about the ideal political system for achieving human freedom and dignity. From his perspective in mid-1989, months before the unexpected fall of the Berlin Wall, no systems other than democracy and capitalism could generate wealth, pull people out of poverty, and defend human rights; both communism and fascism had failed, creating cruel autocracies that oppressed their people. Without realizing it, Fukuyama prophesied democracy's proliferation across the world. Democratization soon occurred through grassroots efforts in Asia, Eastern Europe, and Latin America.

These transitions, however, wouldn't have been possible unless the military acquiesced to these reforms. In Spain and Russia, the military attempted a coup before recognizing the dominant political desire for change. China instead opted to annihilate reformers.

The idea that the military has veto power might seem incongruous to citizens of consolidated democracies. But in transitioning societies, the military often has the final say on reform due to its symbiotic relationship with the government. In contrast, consolidated democracies benefit from the logic of Clausewitz's trinity, where there is a clear division of labor between the people, the government and the military. In this model, the people elect governments to make decisions for the overall good of society while furnishing the recruits for the military tasked with executing government policy and safeguarding public liberty. The trinity, though, is premised on a human military with a moral character that flows from its origins among the people. The military can refuse orders that harm the public or represent bad policy that might lead to the creation of a dictatorship.

ML risks destabilizing the trinity by removing the human element of the armed forces and subsuming them directly into the government. Developments in ML have created new weapons platforms that rely less and less on humans, as new warfighting machines are capable of provisioning security or assassinating targets with only perfunctory human supervision. The framework of machines acting without human involvement risks creating a dystopian future where political reform will become improbable, because governments will no longer have human militaries restraining them from opening fire on reformers. These dangers are evident in China, where the government lacks compunction in deploying ML platforms to monitor and control its population while also committing genocide.

In the public domain, there is some recognition of these dangers around the misuse of ML for national security. But there hasn't been a substantive debate about how ML might shape democratic governance and reform. There isn't a nefarious reason for this. Rather, it's that many of those who develop ML tools have STEM backgrounds and lack an understanding of broader social issues. On the government side, leaders in agencies funding ML research often don't know how to consume ML outputs, relying instead on developers to explain what the outputs mean. The government's measure for success is whether it keeps society safe. Throughout this process, civilians operate as bystanders, unable to interrogate the design process for ML tools used for war.

In the short term, this is fine because there aren't entire armies made of robots, but the competitive advantage offered by mechanized fighting not limited by frail human bodies will make intelligent machines essential to the future of war. Moreover, these terminators will need an entire infrastructure of satellites, sensors, and information platforms powered by ML to coordinate responses to battlefield advances and setbacks, further reducing the role of humans. This will only amplify the power governments have to oppress their societies.

The risk that democratic societies might create tools that lead to this pessimistic outcome is high. The United States is engaged in an ML arms race with China and Russia, both of which are developing and exporting their own ML tools to help dictatorships remain in power and freeze history.

There is space for civil society to insert itself into ML, however. ML succeeds and fails based on the training data used for algorithms, and civil society can collaborate with governments to choose training data that optimizes the warfighting enterprise while balancing the need to sustain dissent and reform.

By giving machines moral safeguards, the United States can create tools that instead strengthen democracy's prospects. Fukuyama's thesis is only valid in a world where humans can exert their agency and reform their governments through discussion, debate and elections. The U.S., in the course of confronting its authoritarian rivals, shouldn't create tools that hasten democracy's end.

Christopher Wall is a social scientist for Giant Oak, a counterterrorism instructor for Naval Special Warfare, a lecturer on statistics for national security at Georgetown University and the co-author of the recent book, The Future of Terrorism: ISIS, al-Qaeda, and the Alt-Right. Views of the author do not necessarily reflect the views of Giant Oak.

Tech Trends: Newark-based multicultural podcast producer, ABF Creative, leverages machine learning and AI – ROI-NJ.com

Serial entrepreneur and Newark startup activist Anthony Frasier has hitched his wagon to the skyrocketing podcasting industry. His podcast network is producing cutting-edge work that connects him to his roots and his community.

Frasier knows something about community. Actually, it's been his life's work. He began by creating an online community for Black gamers to discuss computer gaming; next, he founded the Brick City Tech Meetup, at a time when there really was no tech community in Newark; and then he organized big tech events. Now, as founder and CEO of ABF Creative, based in Newark, Frasier is reaching out to the larger multicultural community through voice.

He has found his niche in podcasting. ABF Creative won a Webby in the Diversity and Inclusion category in 2021 for producing a podcast series called African Folktales. A few years before, Frasier had been an entrepreneur in residence at Newark Venture Partners; and, during the pandemic, ABF Creative joined the NVP Labs accelerator.

The company recently graduated from the fifth cohort of the Multicultural Innovation Lab at Morgan Stanley. That startup accelerator promotes financial inclusion and provides access to capital for early-stage technology and tech-enabled companies led by women and multicultural entrepreneurs. It was the first accelerator accepted into the New Jersey Economic Development Authority's NJ Accelerate program in late 2020.

Success has come to Frasier as the podcasting industry has skyrocketed. The total market size in the U.S., as calculated by Grand View Research, based in San Francisco, is projected to be more than $14 billion in 2021. ABF Creative was an early entrant into the process.

"We happened to plant our flag a lot earlier than a lot of people, and so we're starting to see the benefits," Frasier told NJTechWeekly.com.

He discussed the company's data-driven approach to podcasting.

ABF Creative is partnering with Newark-based machine learning startup Veritonic, another NVP Labs company.

Said Frasier: "We leverage machine learning and AI (artificial intelligence) to use predictive analysis. We basically test voices, sounds and music against the potential audience before we release the podcast. And, so, that gives us an edge, especially when it comes to making content specifically for people of color. I think it's a unique approach. We are testing this approach out and, so far, so good. Our first podcast that we used this technology with was African Folktales, which is the podcast for which we just won the Webby award."

"(Being successful) really came from those relationships that we built for years doing events and doing a lot of local tech community stuff. I was able to leverage a lot of that goodwill that built up to get our first two customers. And, if we didn't have those first two customers, I don't think we would be where we are today." – Anthony Frasier, CEO, ABF Creative

Frasier gave NJTechWeekly.com the origin story for the African Folktales podcasts.

"During COVID, kids were in the house," he said. "We were thinking, 'What could be something unique and interesting we can do for kids who are at home?' So, we decided to use African folktales. For one thing, there's an abundance of them. Then, we decided to create this world around them, with this fictional teacher called Ms. JoJo, who's like the black Ms. Frizzle, if you ever watched The Magic School Bus."

"And, so, we put it out, and the reception was amazing. People fell in love with it. Parents listened to it with their children. We get emails all the time. It's part of people's nightly rituals, and we're just fascinated by the result. That just shows that machine learning and what we're doing is the future of content creation."

The Veritonic technology lets ABF Creative know what works better for audience members, he said. For example, you want your podcast to have a memorable voice. Should you use the voice of an older woman or a younger woman, or an older male or a younger man?

"We have to test those different assumptions out and look at the score that comes back," Frasier said. "From that, we're able to determine which way we want to go."

However, there's more to it than just the score, Frasier explained.

"For example, maybe this voice scores high in recall value, but (it scores) low in energy," he said. "So, you have to play a game of tug of war with the results that come back, but, once you find the combination that you want, it's easier to choose the voice you'll use."
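The trade-off Frasier describes can be thought of as a simple weighted ranking across attributes. The sketch below is purely illustrative: the voices, attributes, scores, and weights are made up and do not reflect Veritonic's actual platform or scoring scale.

```python
# Illustrative only: rank candidate voices by a weighted blend of attribute
# scores. Voices, attributes, scores, and weights are all hypothetical; this
# is not Veritonic's API or scoring scale.
candidate_voices = {
    "older_female":   {"recall": 82, "energy": 55, "trust": 74},
    "younger_female": {"recall": 70, "energy": 81, "trust": 68},
    "older_male":     {"recall": 77, "energy": 49, "trust": 79},
    "younger_male":   {"recall": 66, "energy": 84, "trust": 61},
}
weights = {"recall": 0.5, "energy": 0.3, "trust": 0.2}  # chosen for this demo

def blended_score(scores):
    """Weighted sum of a voice's attribute scores."""
    return sum(weights[attr] * value for attr, value in scores.items())

for voice, scores in sorted(candidate_voices.items(),
                            key=lambda kv: -blended_score(kv[1])):
    print(f"{voice}: {blended_score(scores):.1f}")
```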

He added that ABF Creative is hanging its hat on data-driven decisions when it comes to content creation.

"We feel like this is revolutionizing the way people produce podcasts," he said. "What can be a bigger waste of time and money than to go off of a hunch, on what you think would work, spending thousands, if not hundreds of thousands, of dollars to produce something and then you put it out and nobody likes it? All we want to do is be a little bit smarter, and we want to make content creation smarter."

Frasier believes that, by using machine learning and AI, ABF Creative is doing good for the listening community; and the company is backing this belief up through its own polls and research. It is also using AI with its corporate clients.

"We work with a lot of larger brands, like, for instance, Ben & Jerry's ice cream," he said. "We are officially doing the Ben & Jerry's podcast. We're proud to be able to test these AI and machine-learning approaches out with them and others."

Going back to his beginnings in podcasting, Frasier noted that, when he started out, he had nothing. He didn't even have production equipment or content; he only knew that he wanted to tell stories. But Lyneir Richardson, executive director of the Center for Urban Entrepreneurship & Economic Development at Rutgers Business School, gave him a chance when he pitched a podcast that would feature interviews with venture capitalists. Frasier admits he didn't know much about what he was doing, but the podcast, called VC Cheat Sheet, turned out well.

"It really put the name of the Center for Urban Entrepreneurship out there, and it was a very successful podcast for Rutgers," he said.

After that, he wound up getting some professional equipment and pitching Jeff Scott, then New Jersey Devils and Prudential Center vice president of community investment and grassroots, also a Newark-based executive.

"It turned out great," he said. "We recorded the podcast live at an event at the Grammy Museum at the Prudential Center."

For Frasier, everything comes back to community.

"(Being successful) really came from those relationships that we built for years doing events and doing a lot of local tech community stuff," he said. "I was able to leverage a lot of that goodwill that built up to get our first two customers. And, if we didn't have those first two customers, I don't think we would be where we are today."

Reach ABF Creative at: abfc.co.

Scientists Mapped Every Large Solar Plant on the Planet Using Satellites and Machine Learning – Singularity Hub

An astonishing 82 percent decrease in the cost of solar photovoltaic (PV) energy since 2010 has given the world a fighting chance to build a zero-emissions energy system which might be less costly than the fossil-fueled system it replaces. The International Energy Agency projects that PV solar generating capacity must grow ten-fold by 2040 if we are to meet the dual tasks of alleviating global poverty and constraining warming to well below 2°C.

Critical challenges remain. Solar is intermittent, since sunshine varies during the day and across seasons, so energy must be stored for when the sun doesn't shine. Policy must also be designed to ensure solar energy reaches the furthest corners of the world and places where it is most needed. And there will be inevitable tradeoffs between solar energy and other uses for the same land, including conservation and biodiversity, agriculture and food systems, and community and indigenous uses.

Colleagues and I have now published in the journal Nature the first global inventory of large solar energy generating facilities. "Large" in this case refers to facilities that generate at least 10 kilowatts when the sun is at its peak (a typical small residential rooftop installation has a capacity of around 5 kilowatts).

We built a machine learning system to detect these facilities in satellite imagery and then deployed the system on over 550 terabytes of imagery using several human lifetimes of computing.

We searched almost half of Earth's land surface area, filtering out remote areas far from human populations. In total we detected 68,661 solar facilities. Using the area of these facilities, and controlling for the uncertainty in our machine learning system, we obtain a global estimate of 423 gigawatts of installed generating capacity at the end of 2018. This is very close to the International Renewable Energy Agency's (IRENA) estimate of 420 GW for the same period.
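The article does not describe how facility area is converted into generating capacity; a common back-of-the-envelope approach is to multiply detected panel area by an assumed power density. The density used below is an illustrative assumption, not the value calibrated in the study.

```python
# Back-of-the-envelope sketch: convert detected facility areas into an
# aggregate capacity estimate. The power density is an illustrative
# assumption, not the study's calibrated figure.
ASSUMED_POWER_DENSITY_MW_PER_KM2 = 50.0  # hypothetical utility-scale value

def estimated_capacity_gw(facility_areas_km2):
    """Sum facility areas (km^2) and convert to gigawatts of capacity."""
    total_area_km2 = sum(facility_areas_km2)
    return total_area_km2 * ASSUMED_POWER_DENSITY_MW_PER_KM2 / 1000.0  # MW -> GW

# Toy example with three detected facilities.
print(estimated_capacity_gw([12.5, 0.8, 3.1]))  # ~0.82 GW under this assumption
```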

Our study shows solar PV generating capacity grew by a remarkable 81 percent between 2016 and 2018, the period for which we had timestamped imagery. Growth was led particularly by increases in India (184 percent), Turkey (143 percent), China (120 percent) and Japan (119 percent).

Facilities ranged in size from sprawling gigawatt-scale desert installations in Chile, South Africa, India, and north-west China, through to commercial and industrial rooftop installations in California and Germany, rural patchwork installations in North Carolina and England, and urban patchwork installations in South Korea and Japan.

Country-level aggregates of our dataset are very close to IRENA's country-level statistics, which are collected from questionnaires, country officials, and industry associations. Compared to other facility-level datasets, we address some critical coverage gaps, particularly in developing countries, where the diffusion of solar PV is critical for expanding electricity access while reducing greenhouse gas emissions. In developed and developing countries alike, our data provides a common benchmark unbiased by reporting from companies or governments.

Geospatially-localized data is of critical importance to the energy transition. Grid operators and electricity market participants need to know precisely where solar facilities are in order to know accurately the amount of energy they are generating or will generate. Emerging in-situ or remote systems are able to use location data to predict increased or decreased generation caused by, for example, passing clouds or changes in the weather.

This increased predictability allows solar to reach higher proportions of the energy mix. As solar becomes more predictable, grid operators will need to keep fewer fossil fuel power plants in reserve, and fewer penalties for over- or under-generation will mean more marginal projects will be unlocked.

Using the back catalogue of satellite imagery, we were able to estimate installation dates for 30 percent of the facilities. Data like this allows us to study the precise conditions which are leading to the diffusion of solar energy, and will help governments better design subsidies to encourage faster growth.

Knowing where a facility is also allows us to study the unintended consequences of the growth of solar energy generation. In our study, we found that solar power plants are most often in agricultural areas, followed by grasslands and deserts.

This highlights the need to carefully consider the impact that a ten-fold expansion of solar PV generating capacity will have in the coming decades on food systems, biodiversity, and lands used by vulnerable populations. Policymakers can provide incentives to instead install solar generation on rooftops, which causes less land-use competition, or to pursue other renewable energy options.

The GitHub, code, and data repositories from this research have been made available to facilitate more research of this type and to kickstart the creation of a complete, open, and current dataset of the planet's solar energy facilities.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Grégory Roose / Pixabay

Top Machine Learning Tools Used By Experts In 2021 – Analytics Insight

The amount of data generated on a day-to-day basis is humongous, so much so that the term "big data" was coined to describe such large volumes of data. Big data is usually raw and cannot be used to meet business objectives. Thus, transforming this data into a form that is easy to understand is important. This is exactly where machine learning comes into play. With machine learning in place, it is possible to understand customer demands, their behavioral patterns, and a lot more, thereby enabling the business to meet its objectives. For this very purpose, companies and experts rely on certain machine learning tools. Here is our pick of the top machine learning tools used by experts in 2021. Have a look!

Keras is a free and open-source Python library popularly used for machine learning. Designed by Google engineer François Chollet, Keras acts as an interface for the TensorFlow library. In addition to being user-friendly, this machine learning tool is quick and easy to use and runs on both CPU and GPU. Keras is written in Python and functions as an API for neural networks.
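As a quick illustration of the Keras API described above, a small feed-forward network can be defined, compiled, and trained in a few lines. The layer sizes, data, and hyperparameters here are arbitrary placeholders.

```python
# Minimal Keras sketch: a small feed-forward binary classifier. Layer sizes
# and hyperparameters are arbitrary choices for illustration.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),                   # 20 input features (arbitrary)
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train briefly on random placeholder data just to show the workflow.
X = np.random.rand(128, 20)
y = np.random.randint(0, 2, size=128)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
model.summary()
```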

Yet another widely used machine learning tool across the globe is KNIME. It is easy to learn, free and ideal for data reporting, analytics, and integration platforms. One of the many remarkable features of this machine learning tool is that it can integrate codes of programming languages like Java, JavaScript, R, Python, C, and C++.

WEKA, designed at the University of Waikato in New Zealand, is a tried-and-tested solution for open-source machine learning. This machine learning tool is considered ideal for research, teaching, and creating powerful applications. It is written in Java and supports platforms like Linux, Mac OS, and Windows. It is extensively used for teaching and research purposes and also for industrial applications, for the sole reason that the algorithms employed are easy to understand.

Shogun, an open-source and free-to-use software library for machine learning, is quite easily accessible for businesses of all backgrounds and sizes. Shogun's core is written entirely in C++, but it can be accessed from other development languages, including R, Python, Ruby, Scala, and more. From regression and classification to hidden Markov models, this machine learning tool has you covered.

If you are a beginner, there cannot be a better machine learning tool to start with than RapidMiner, because it doesn't require any programming skills in the first place. This machine learning tool is considered to be ideal for text mining, data preparation, and predictive analytics. Designed for business leaders, data scientists, and forward-thinking organisations, RapidMiner surely has grabbed attention for all the right reasons.

TensorFlow is yet another machine learning tool that has gained immense popularity in no time. This open-source framework blends neural network models with other machine learning strategies. With its ability to run on both CPU and GPU, TensorFlow has managed to make it to the list of favourite machine learning tools.

An Illustrative Guide to Extrapolation in Machine Learning – Analytics India Magazine

Humans excel at extrapolating in a variety of situations. For example, we can use arithmetic to solve problems with infinitely big numbers. One can question whether machine learning can do the same thing and generalize to cases that are arbitrarily far apart from the training data. Extrapolation is a statistical technique for estimating values that extend beyond a particular collection of data or observations. In this article, we shall explain its primary aspects and attempt to connect it to machine learning. The following are the main points to be discussed in this article.

Let's start the discussion by understanding extrapolation.

Extrapolation is a sort of estimation of a variable's value beyond the initial observation range, based on its relationship with another variable. Extrapolation is similar to interpolation, which generates estimates between known observations, but it is more uncertain and carries a higher risk of producing meaningless results.

Extrapolation can also refer to a method's expansion, presuming that similar methods are applicable. Extrapolation is a term that refers to the process of projecting, extending, or expanding known experience into an unknown or previously unexperienced area in order to arrive at a (typically speculative) understanding of the unknown.

Extrapolation is a method of estimating a value outside of a defined range. Let's take a general example. If you're a parent, you may recall your youngster calling any small four-legged critter a cat because their first classifier employed only a few traits. They were also able to correctly identify dogs after being trained to extrapolate and factor in additional attributes.

Even for humans, extrapolation is challenging. Our models are interpolation machines, no matter how clever they are. Even the most complicated neural networks may fail when asked to extrapolate beyond the limitations of their training data.

Machine learning has traditionally only been able to interpolate data, that is, generate predictions about a scenario that is between two other, known situations. Because machine learning only learns to model existing data locally as accurately as possible, it cannot extrapolate that is, it cannot make predictions about scenarios outside of the known conditions. It takes time and resources to collect enough data for good interpolation, and it necessitates data from extreme or dangerous settings.

In regression problems, we use data to generalize a function that maps a set of input variables X to a set of output variables y. A y value can be predicted for any combination of input variables using this function mapping. When the input variables lie within the range of the training data, this procedure is referred to as interpolation; if the point of estimation lies outside of this region, it is referred to as extrapolation.

The grey and white sections in the univariate example in the figure above show the extrapolation and interpolation regimes, respectively. The black lines reflect a selection of polynomial models that were used to make predictions within and outside of the training data set.

The models are well constrained in the interpolation regime, causing them to collapse into a tight band. Outside of the training domain, however, the models diverge, producing radically disparate predictions. This large divergence of predictions (despite the models sharing the same form, with slightly different hyperparameters, and being trained on the same set of data) is caused by the absence of any information during training that would confine the models to predictions with smaller variance outside the data.

This is the risk of extrapolation: model predictions outside of the training domain are particularly sensitive to training data and model parameters, resulting in unpredictable behaviour unless the model formulation contains implicit or explicit assumptions.

In the absence of training data, most learners do not specify the behaviour of their final functions. They're usually made to be universal approximators, or as close as possible, with few modelling constraints. As a result, in places where there is little or no data, the function is subject to very little prior control. Consequently, in most machine learning scenarios we can't regulate the behaviour of the prediction function at extrapolation points, and we can't tell when this is a problem.

Extrapolation should not be a problem in theory; in a static system with a representative training sample, the chances of having to anticipate a point of extrapolation are essentially zero. However, most training sets are not representative, and they are not derived from static systems, therefore extrapolation may be required.

Even empirical data derived from a product distribution can appear to have a strong correlation pattern when scaled up to high dimensions. Because functions are learned based on an empirical sample, they may be able to extrapolate effectively even in theoretically dense locations.

Extrapolation works with linear and other types of regression to some extent, but not with decision trees or random forests. In a decision tree or random forest, the input is sorted and filtered down into leaf nodes that have no direct relationship to other leaf nodes in the tree or forest. This means that, while the random forest is great at sorting data, the results can't be extrapolated because it doesn't know how to classify data outside of the domain.
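This limitation is easy to demonstrate: fit a linear model and a random forest on the same one-dimensional trend and predict beyond the training range. The forest's predictions flatten at the edge of the data, while the linear model keeps following the trend. A minimal sketch:

```python
# Sketch: a random forest flattens outside the training range, while a
# linear model keeps extrapolating the trend.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X_train = np.arange(0, 10, 0.5).reshape(-1, 1)
y_train = 2.0 * X_train.ravel() + 1.0           # simple linear trend

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
linear = LinearRegression().fit(X_train, y_train)

X_new = np.array([[12.0], [15.0], [20.0]])      # outside the 0-10 training range
print("random forest:", forest.predict(X_new))  # stays near the largest value seen (~20)
print("linear model: ", linear.predict(X_new))  # continues the trend: 25, 31, 41
```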

A good decision on which extrapolation method to use is based on a prior understanding of the process that produced the existing data points. Some experts have recommended using causal factors to assess extrapolation approaches. We will look at a few of them below. These are purely mathematical methods; one should relate them to the problem at hand appropriately.

Linear extrapolation is the process of drawing a tangent line from the end of the known data and extending it beyond that point. Only use linear extrapolation to extend the graph of an essentially linear function, or not too far beyond the existing data, to get good results. If the two data points closest to the point x* to be extrapolated are (x_{k-1}, y_{k-1}) and (x_k, y_k), linear extrapolation produces the estimate y(x*) = y_{k-1} + ((x* - x_{k-1}) / (x_k - x_{k-1})) * (y_k - y_{k-1}).

A polynomial curve can be built using all of the known data or just a small portion of it (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The curve that results can then be extended beyond the available data. The most common way of polynomial extrapolation is to use Lagrange interpolation or Newton's method of finite differences to generate a Newton series that matches the data. The data can be extrapolated using the obtained polynomial.
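For instance, a quadratic fitted with numpy can be evaluated beyond the observed range; which polynomial degree is sensible depends entirely on prior knowledge of the underlying process, and the data below are made up for illustration.

```python
# Sketch of polynomial extrapolation: fit a quadratic to known points and
# evaluate it beyond the observed range. The data are made up.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 4.1, 9.3, 15.8, 25.1])   # roughly quadratic, with noise

coeffs = np.polyfit(x, y, deg=2)            # least-squares quadratic fit
extrapolated = np.polyval(coeffs, 7.0)      # evaluate outside the 1-5 range
print(f"predicted y at x = 7: {extrapolated:.2f}")
```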

Five points near the end of the given data can be used to fit a conic section. If the conic section is an ellipse or a circle, it will loop back and rejoin itself when extrapolated. A parabola or hyperbola that has been extrapolated will not rejoin itself, but it may curve back toward the x-axis. A conic section template (on paper) or a computer could be used for this form of extrapolation.

Further, we will look at a simple Python implementation of linear extrapolation.

The technique is beneficial when the linear function is known. It's done by drawing a tangent and extending it beyond the limit. When the projected point is close to the rest of the points, linear extrapolation delivers a decent result.
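A minimal implementation of the formula given earlier might look like this, extending the line through the two known points closest to the query point x*.

```python
# Minimal sketch of linear extrapolation: extend the line through the two
# known points closest to the query point x_star.
def linear_extrapolate(x_star, x_prev, y_prev, x_k, y_k):
    """Return y(x_star) on the line through (x_{k-1}, y_{k-1}) and (x_k, y_k)."""
    slope = (y_k - y_prev) / (x_k - x_prev)
    return y_prev + slope * (x_star - x_prev)

# Known data ends at x = 4; estimate the value at x = 6.
print(linear_extrapolate(6.0, 3.0, 9.0, 4.0, 12.0))  # -> 18.0
```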

Extrapolation is a helpful technique, but it must be used in conjunction with an appropriate model for describing the data, and it has limitations once you leave the training area. Its applications include prediction in situations where you have continuous data, such as time, speed, and so on. Such prediction is notoriously imprecise, and the accuracy falls as the distance from the learned area grows. In situations where extrapolation is required, the model should be updated and retrained to lower the margin of error. Through this article, we have understood extrapolation and interpolation mathematically, related them to machine learning, and seen their effect on ML systems. We have also seen where extrapolation fails in particular, and which methods can be used.

Origami Therapeutics developing treatments for neurodegenerative diseases with ML and computational chemistry – MedCity News

Beth Hoffman, Origami Therapeutics CEO, talks about how the company enlists machine learning and computational chemistry to develop ways to treat neurodegenerative diseases, in response to emailed questions.

Why did you start Origami Therapeutics?

I started Origami Therapeutics because I saw an opportunity to develop a different approach to treating neurodegenerative diseases by using protein correctors and degraders. I was at Vertex Pharmaceuticals for more than 7 years, leading their drug discovery efforts, which ultimately led to the development of the current blockbuster cystic fibrosis drugs, Orkambi (lumacaftor/ivacaftor) and Kalydeco (ivacaftor).

Prior to Vertex, I was scientific executive director at Amgen, where I built and guided their neuropsychiatric drug discovery group, a new disease area. I was also elected to the Scientific Advisory Board for Amgen Ventures to evaluate Series A and Series B investment opportunities. I was also previously Head of Neuroscience at Eli Lilly, where I established a new research group and oversaw strategic planning and execution on our novel targets portfolio.

Origami was founded by combining my expertise in neuroscience drug development, gained at Amgen and Eli Lilly, with my experience at Vertex developing protein correctors for cystic fibrosis. Leveraging my experience in discovering transformational therapies for cystic fibrosis that modulate CFTR conformation, our focus is to treat neurodegeneration by directly modulating pathogenic proteins.

Beth Hoffman

What does the company do?

Based in San Diego, Origami is developing a pipeline of precision protein degraders and correctors to treat neurodegenerative diseases caused by toxic protein misfolding, beginning with Huntington's disease (HD). We are discovering compounds using our precision technology platform, Oricision, focused on high-value targets, with the potential to deliver more potent therapies and to address the >80% of proteins that evade inhibition and have been undruggable by traditional approaches.

We are also pioneering the adoption of spheroid brain cell cultures, a 3-D cell culture system of multiple brain cell types, to create patient-derived cell models of neurological disease. Origami is using machine learning and computational chemistry to optimize small molecules that prevent mutant huntingtin (mHTT) pathology in human neurons.

What sets your company apart?

Most of the industry's current programs in protein degradation are in oncology. At Origami, we are developing a novel pipeline of small molecule, disease-modifying protein degrader and corrector therapeutics for neurodegenerative diseases.

Origami's discovery platform, Oricision, enables the discovery and development of both protein degraders and conformation correctors, allowing us to match the best drug to treat each disease using AI- and machine learning-driven, patient-derived disease models, including brain organoids (or spheroids), to enhance translation to the clinic.

In oncology, companies are targeting a protein that, when eliminated, causes the cancer cell to die. For neurological diseases, we don't want brain cells to die, so we must find a means to reduce the toxic protein in a way that protects and saves a patient's nerve cells, preserving healthy, thriving cells and preventing dysfunction.

Origami is taking a fundamentally different approach to protein degradation, a more elegant approach that spares functional proteins while selectively eliminating toxic misfolded forms. Our competitive advantage is that we're developing a novel pipeline of small molecules that target the underlying cause of disease, beginning with mutant huntingtin protein (mHTT), the only validated target for HD.

Oral delivery enables non-invasive treatment throughout the body, and early peripheral blood-based biomarkers guide timing for brain biomarker assessment. Our lead candidate ORI-113 targets toxic misfolded mHTT for elimination via natural degradation pathways with the goal of sparing HTT function. Conformation correctors prevent/repair protein misfolding, eliminating toxic effects while preserving HTT function.

What specific need are you addressing in healthcare/ life sciences?

Since many neurodegenerative diseases are caused by protein misfolding, there is a significant opportunity to develop drugs that address the underlying cause of the disease using a mechanism that could halt, potentially reverse, and hopefully prevent the disease entirely.

We believe that neuroscience investment is seeing a huge renaissance moment right now, and there is a huge opportunity with increased pharma interest and growth in this space, in areas of significant unmet medical need such as HD, Alzheimer's disease, Parkinson's disease, and other disorders.

At what stage of development is your lead product?

Our lead compound ORI-113 is in pre-clinical development for Huntington's disease, a huge unmet medical need for patients, where no FDA-approved drugs slow, halt, prevent or reverse disease progression. The currently approved drugs only partially treat the motor symptoms of HD, with significant side effects.

We selected HD as our lead indication since it is a monogenic, dominantly inherited fatal neurodegenerative disease characterized by a triad of symptoms: motor, psychiatric and cognitive impairment. Typically diagnosed between 30 and 50 years of age, HD is a systemic disease with dysfunction observed throughout the body, including immune, cardiovascular, digestive and endocrine systems as well as skeletal muscle.

There are large HD patient registries in North America and Europe, so we can select patients at a very precise stage of the disease. In addition to being able to select the right patients for our future studies, we have diagnostics to evaluate how well they respond to the treatment and we can relatively quickly know if our drug is working.

HD is an orphan disease with 71,000 symptomatic patients in the U.S. & Europe and 250,000 individuals at risk for inheriting the gene that causes HD, 50% of whom are anticipated to be gene-positive. The worldwide patient population is estimated at 185,000.

Do you have clinical validation for your product?

Our lead candidate ORI-113 for HD is currently in preclinical development.

What are some milestones you have achieved?

We have conducted a proprietary high throughput screen (HTS), hit expansion/hit-to-lead, and our initial mechanism of action (MoA) studies, which show that our molecules suppress mHTT toxicities. We've also secured a broad IP portfolio. Our team is based in Biolabs San Diego, where we have a wet lab and have built a scientific and research team.

What's next for the company this year?

We are currently in the process of raising our seed financing round. The seed funding will advance our lead protein degrader into lead optimization for HD and additional programs. Currently, we are selecting the optimal protein degrader to advance into pre-clinical studies for HD and initiating programs for additional indications.

We are evaluating several molecules and will select the best one, with the aim of choosing a clinical candidate compound in 12-18 months. Besides the degrader molecules, we expect to advance conformation correctors, which restore protein function by fixing the misfolded structures.

Photo: Andrzej Wojcicki, Getty Images

Securing open source software is about process, tools and developers – ITProPortal

Many successful cyberattacks stem from exploiting application vulnerabilities, and having stout network security may not be enough. Regardless of how strong network security may be, hackers can find ways in. Sometimes, they are inside an organization's network and do not exploit a vulnerability for many years. Attacks on vulnerable buffer overflows and code injections can be in the works for a very long time and lead to major data breaches, ransomware, or loss of service.

That leaves organizations with a more difficult task: protecting their systems at the software level. In this way, enterprises can minimize the damage hackers can cause once they have access to a given system. That process starts with securing software at the development level. This is already a topic increasingly high on the agenda for CIOs and CISOs.

In parallel, the vast majority of businesses use open-source software for their development projects because it gives them far more options and libraries that enable rapid innovation. Open source is not inherently any less secure than proprietary software, but there are some specific considerations. The good news is that while security risks are a fact of life, mitigating them through some achievable steps during software development is possible.

While security software tools have an essential role, organizational culture, management, and processes are critical to reducing vulnerabilities. Software development, which these days almost always includes open source, can rapidly become an unmanageable sprawl. That's especially true for larger systems, where there are hundreds, sometimes thousands, of libraries (dependencies) and software being introduced by different individuals, often from different locations and without adequate communication between those parties. This is why software development should include compliance processes according to company policies, consistent service level agreements (SLAs), ongoing supervision and technical support (which may be better performed by a third party, since internal expertise is often limited), and a formal open-source selection process that weighs the health and proactive community support of a project before onboarding open source packages.

One of the best aspects of open source is knowledge-sharing, and that extends to security. While one of the arguments against open source is that code is visible to anyone, likewise, so are the fixes to vulnerabilities. There are likely to be regular updates to address new vulnerabilities for open source software with solid community support. However, enterprises must proactively check for those regularly: the community will not reach out to them.

One valuable resource is the National Vulnerability Database (NVD), which, while based in the USA, is relevant to developers worldwide. This repository of standards-based vulnerability management data includes databases of security checklist references, security-related software flaws, misconfigurations, and impact metrics. Associated with each vulnerability is a Common Vulnerabilities and Exposures (CVE) identifier, which identifies, defines, and catalogs publicly disclosed software vulnerabilities, along with a severity rating based on the Common Vulnerability Scoring System (CVSS). This helps security professionals and developers prioritize the most severe vulnerabilities, those carrying critical and high risk.

All these resources are beneficial to developers, but they rely on known vulnerabilities being reported. While that does happen a lot, it is not universal (by the way, that applies to proprietary software, where there is even less information sharing). There is reliance on developers to think about sharing information about the vulnerabilities they have discovered and subsequently fixed.

Furthermore, if a company is successfully exploited and a major data breach occurs, they are currently under no obligation to report the details of how that was achieved. Compare this to the aerospace industry, where there would be a detailed review and analysis of a plane crash. Thus, software security needs its own black box disclosure and accountability. Security attacks can have serious consequences outside the digital world, such as recent breaches leading to the unavailability of power or fuel.

All this points towards a change in attitude towards security, and fortunately, that is beginning to happen. For instance, the Open Source Security Foundation is carrying out some great work and has recently received a $10 million annual commitment from companies including Amazon, Google, Facebook, Microsoft, and others. The more software developers and vendors can get on board with them and provide support, the better the opportunities to protect software in the future, with actions such as vulnerability disclosures and the creation of software bill of materials (BOMs).

Internally, there also needs to be a greater focus on training developers to be more security-aware. Conventionally, security was not part of the developer's role: theirs is to create functional code. That must change, and it is starting to, particularly with the advent of movements such as Shift Left and DevSecOps, whereby testing and security scanning are given more importance early in the development life cycle. However, to reduce the impact on developer workload, those processes must be automated as much as possible. Automation also helps reduce the risk of manual error and keep up with the sheer speed of many projects. Testing and monitoring should enhance, not slow down, development.

Several relevant types of tools are available, both commercial and open source, including static application security testing (SAST), which involves inspecting and analyzing code even while it is being written to find and stop flaws going into production. SAST tools are like having a security expert looking over a developer's shoulder, keeping an eye on potential flaws and vulnerabilities. SAST tools can also assist with compliance with standards.

Perhaps more familiar to many people will be dynamic application security testing (DAST), whereby tests are performed by attacking a running web application from the outside. Testing through the web front-end helps to identify potential security vulnerabilities or architectural weaknesses.

For open source security, software composition analysis (SCA) is a very useful security tool, with several good commercial and open source options. With SCA, the open-source libraries (dependencies) used in the source code of applications are analyzed. By identifying direct dependencies and transitive dependencies, the tool cross-checks against a vulnerability database such as the NVD to determine the existence of vulnerabilities (CVEs) and corresponding CVSS score.
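Conceptually, an SCA check boils down to looking up each declared dependency and version in a vulnerability database and surfacing matches ranked by CVSS severity. The sketch below uses a hard-coded, made-up database and fabricated CVE identifiers purely to show the shape of the process; a real tool would query a live feed such as the NVD.

```python
# Conceptual sketch of a software composition analysis (SCA) check: look up
# each dependency in a vulnerability database and report matches by severity.
# The database and CVE identifiers below are entirely made up; a real tool
# would query a maintained feed such as the NVD.
dependencies = {"examplelib": "1.2.0", "otherlib": "4.1.3"}     # hypothetical

vulnerability_db = {
    ("examplelib", "1.2.0"): [("CVE-0000-0001", 9.8)],          # fabricated
    ("otherlib", "3.0.0"):   [("CVE-0000-0002", 5.3)],
}

findings = []
for package, version in dependencies.items():
    for cve_id, cvss in vulnerability_db.get((package, version), []):
        findings.append((cvss, cve_id, package, version))

# Report the most severe findings first so they can be prioritized.
for cvss, cve_id, package, version in sorted(findings, reverse=True):
    print(f"{package} {version}: {cve_id} (CVSS {cvss})")
```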

There are, of course, well-established security processes, such as penetration testing, whereby a white hat hacker tries to get into an organization's networks and applications to discover potential exploits and vulnerabilities. These have a lot of benefits, but they are not enough on their own. It is like having locks on a door: the more, the better, but eventually a talented thief will find a way in. What matters is making sure that when they do, valuable assets are not within their reach.

Whether open-source or proprietary, tackling application security is a complex challenge. For companies who want to improve that security, streamlining processes, instilling cultural attitudes, adding security training, and using the right tools is a good place to start.

Javier Perez, Chief Evangelist for Open Source and API Management, Perforce Software

Linux Foundation’s Open Source Climate Welcomes Airbus, EY and Red Hat – PRNewswire

SAN FRANCISCO, Nov. 2, 2021 /PRNewswire/ -- The Linux Foundation's OS-Climate has announced that Airbus and EY have joined its cross-industry coalition seeking to accelerate the global transition to net zero through open data and open source decision-making tools for companies, investors, banks, and regulators. This follows the news in September that Red Hat (an IBM company) had joined and is contributing a world-class team of data scientists and developers to build the OS-Climate platform. Also announced is Airbus' contribution of a scenario analysis modeling platform to analyze the clean energy transition.

"Every corporation and financial industry player is faced with major decisions that must quantitatively factor in scenarios of physical climate impacts and of the economic transition to net zero," said Nicolas Chretien, Head of Sustainability & Environment at Airbus. "The aerospace industry, like many other sectors, is engaged in a transition which involves a deeper reorganization of its ecosystem. As such, it requires effective data and tools to better understand, assess and model interdependencies linked to climate risks and opportunities. I encourage other companies to join us, especially those who, like us, are committed to foster climate transition through innovation."

"Many promising governmental and business efforts are underway to drive climate-aligned finance and investment," saidSteve Varley, Global Vice Chair Sustainability, EY. "We believe OS-Climate's transparent governance, enablement of large-scale multi-stakeholder collaboration, and exceptional community of contributors will be a game-changer at this moment of urgency. EY teams are looking forward to collaborating."

"Overcoming the complex data and analytics barriers to scaling up investment in clean energy and resilience is more than any one company or firm can achievealone. We are delighted that Airbus and EY are bringing their formidable capabilities to jointly build the common, pre-competitive foundation of technology and data that the entire business and finance community needs, and on top of which they can more quickly innovate and compete," said Truman Semans, Executive Director of OS-Climate.

OS-Climate is a collaborative, member-driven, non-profit platform hosted by the Linux Foundation for the development of open data and open source analytics for climate risk management and climate-aligned finance and investing. Membership has more than tripled since September 2020 from initial founders Allianz, Amazon, Microsoft, and S&P Global to include Premium Members BNP Paribas, Goldman Sachs, and KPMG, and General Members Federated Hermes, London Stock Exchange Group, the Net Zero Asset Owner Alliance, Ortec Finance, and Red Hat (an IBM company).

OS-Climate's Data Commons and Analytics will accelerate investment in low-carbon and resilient infrastructure for power generation, petrochemicals, manufacturing, buildings, and municipalities, as well as energy-intensive products such as aircraft and other transportation vehicles. The platform will also accelerate development of innovative financial products to better channel capital into these areas.

Airbus Makes a Major Contribution with an Open Source Modeling Platform to Accelerate the Clean Energy Transition

Since Airbus knows the power of open source collaboration as a force multiplier and accelerator of technical solutions to complex problems, it is open-sourcing a modeling platform developed to better understand the clean energy transition in the aviation industry. Working together with other OS-Climate members and partners in the academic community on Integrated Assessment Modeling and other fields, Airbus aims to expand this to enable climate-smart decisions across many other industries.

Jim Zemlin, Executive Director of The Linux Foundation, said, "We are very pleased that an open source-savvy company like Airbus is contributing not only its valuable intellectual property but also a 15-person team of modelers and engineers as well as its experience in projects including Linux Foundation's Hyperledger." Zemlin added, "With a flurry of initiatives claiming to be 'open source,' it's essential for everyone in the climate space to watch for 'openwash' and combat the trend of locking up intellectual property that could best accelerate climate solutions through open collaboration."

EY Brings Unique Strengths to the OS-Climate Community

"The Climate Biennial Exploratory Scenario (CBES) Pilot will shape the way dozens of central banks in the Network for the Greening of the Financial System (NGFS) integrate climate in stress testing by banks and other financial institutions," states Mike Zehetmayr, EY Global Sustainable Finance Data and Technology Leader

"We look forward to working with the OS-Climate community in helping the industry to overcome data gaps, and in making it easierforfinancial institutions and corporate counterparties to disclose data, and in developing open source scenario analysis tools,"added Brandon Sutcliffe, EY Americas Sustainable Finance Leader.

OS-Climate Event at COP26

On 8 November, during the UN Climate Negotiations in Glasgow, OS-Climate will host an in-person and virtual event to demonstrate progress in building the OS-Climate Platform. At the event, moderated by United Airlines Board of Directors Member Jim Whitehurst, Airbus will reveal its SoSTrades models/WITNESS.

OS-C members will also present the initial Data Commons, an Implied Temperature Rise Tool for aligning investment and loan portfolios with Paris Accord targets, and a Physical Risk Tool for analyzing vulnerability to extreme heat, flood, drought, and other extreme threats and for enabling investments in resilience. PRI CEO Fiona Reynolds will open the event, and former US Federal Reserve Board Governor Sarah Bloom Raskin will lead a panel of experts discussing how open data and analytics can accelerate climate policy efforts globally.
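The announcement does not explain how the Implied Temperature Rise Tool computes alignment, but a common approach is to weight company-level temperature scores by portfolio exposure. Below is a minimal, hypothetical sketch of that idea; the holdings, scores, and the 1.5°C check are illustrative assumptions, not OS-Climate data or code.

```python
# Illustrative sketch only -- not the OS-Climate Implied Temperature Rise Tool.
# Assumes each holding already carries a company-level implied temperature rise
# (in degrees Celsius) from some data provider; all values below are made up.

def portfolio_implied_temperature_rise(holdings):
    """Exposure-weighted average of company-level implied temperature scores."""
    total_value = sum(h["value"] for h in holdings)
    return sum(h["value"] / total_value * h["itr_degrees_c"] for h in holdings)

holdings = [
    {"name": "Utility A",  "value": 4_000_000, "itr_degrees_c": 2.4},  # hypothetical
    {"name": "Airline B",  "value": 2_500_000, "itr_degrees_c": 3.1},  # hypothetical
    {"name": "Software C", "value": 3_500_000, "itr_degrees_c": 1.6},  # hypothetical
]

itr = portfolio_implied_temperature_rise(holdings)
print(f"Portfolio implied temperature rise: {itr:.2f} °C")
print("Within a 1.5 °C pathway?", itr <= 1.5)
```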

About Airbus

Airbus pioneers sustainable aerospace for a safe and united world. The Company constantly innovates to provide efficient and technologically-advanced solutions in aerospace, defense, and connected services. In commercial aircraft, Airbus offers modern and fuel-efficient airliners and associated services. Airbus is also a European leader in defense and security and one of the world's leading space businesses. In helicopters, Airbus provides the most efficient civil and military rotorcraft solutions and services worldwide.

About EY

EY exists to build a better working world, helping create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

EY refers to the global organization and may refer to one or more of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.

About Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 2,160 members and is the world's leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation's projects are critical to the world's infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation's methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

Airbus Media Contact: Daniel Werdung, Tel: +49 40 743 59078

EY Media Contact: Brendan Beaver, [emailprotected]

OS-Climate Media Contact: Truman Semans, Tel: +1-919-599-3660

SOURCE The Linux Foundation

http://www.linuxfoundation.org

Read the rest here:
Linux Foundation's Open Source Climate Welcomes Airbus, EY and Red Hat - PRNewswire

Cloud, open source come to the fore – IT-Online

Kathy Gibson reports from Red Hat Summit – A move to cloud computing and open source solutions is underway as CIOs rethink their IT systems in the wake of the pandemic.

Jonathan Tullett, senior research manager: cloud/IT services at IDC South Africa & Sub-Saharan Africa, points out that CIOs are prioritising business continuity and the accessibility of business processes during 2020/21.

In this environment, cloud investment is increasing, with plenty of room for future growth.

Tullett points out that cloud software revenue grew by almost 30% year on year, while on-premise software revenue grew by just 1.4%.

"And we know there is a lot of headroom there," he says. "We know CIOs are committed to multi-cloud deployment, and only 4% are using them today."

"Applications are still being deployed in a siloed approach," he adds, "although we are seeing a shift to more integrated systems."

Virtualisation is almost a given, with most CIOs using virtualisation in one form or another, Tullett says. API management is also pretty common.

However, microservices, serverless computing, Kubernetes orchestration, Docker orchestration and Docker containerisation are less well represented.

"But as organisations shift to public cloud, we will start to see more and more of these tools used in production."

"So there is still a lot of work that needs to happen, but there is a lot of growth and energy going into cloud."

When it comes to open source technologies being used by CIOs, networking tops the list at 51%, followed by databases at 47%, IT infrastructure or operations management at 46%, security at 38%, cloud management or deployment at 32%, big data and analytics at 31%, and application development at 29%.

Security risks (46%) and reliability of software (40%) are the biggest factors holding CIOs back from engaging more in open source technologies.

"These risks are not unrealistic concerns," Tullett points out. "With any new technology there is a concern, not only about quality, but also about compromises in support and longevity, or lack of contractual obligation."

A lack of immediate operational support was cited by 37% of CIOs, followed by ensuring contractual responsibility (36%), lack of long-term support or availability (34%), regulatory compliance requirements (32%), incompatibility with existing applications or infrastructure (30%), lack of skills within the organisation (28%), not perceiving or understanding the benefit of open source over commercial alternatives (27%), poor quality of code (23%), and insufficient documentation or training materials (23%).

Tullett says IDC offers CIOs a set of key pointers for investing in open source software.

"Demand the same enterprise-grade support and service for open source software as you would from any other technology," he says. "You shouldn't treat them any differently, and don't make any compromises."

At the same time, CIOs are urged to demand the same openness and responsiveness in proprietary technology as they would in open source.

"As we move more into the cloud, and see more focus on integrating the infrastructure, it is very important to integrate and orchestrate products."

"In fact, I would exclude solutions that don't include API support."

CIOs need to think long-term and short-term simultaneously, Tullett adds. "Look at solving today's problems with tomorrow's tools. And build with integration, automation and intelligence in mind."

He advises that CIOs aim for mature cloud usage beyond lift-and-shift, instead refactoring and building bridges between silos.

At the same time, they are urged to target aggressive but measurable business outcomes as objectives, something that can only be done in close alignment with technology partners.


Originally posted here:
Cloud, open source come to the fore - IT-Online