This Nobel-Winning Economist Predicted Bitcoin’s Formidable Rise in 1991 – Bitcoinist

Almost 30 years ago, Nobel Prize-winning American economist Milton Friedman said he would like to see money controlled by a computer. He also said the world would be better off without the Federal Reserve. One of his two wishes is already happening in the form of Bitcoin; in fact, he seems to have predicted its formidable rise in 1991. And with the Fed's incessant money printing drawing growing criticism, is it only a question of time before the other comes true as well?

It's almost eerie watching this short clip in which Friedman appears to talk about Bitcoin, an invention that would come some 18 years later.

As the main advocate opposing the Keynesian government policies in place today, Friedman promoted a macroeconomic viewpoint known as monetarism. Rather than having the Fed step in and print money as it sees fit, he argued for a slow and steady expansion of the money supply.

With the historic bailouts and QE that we see going on today in response to the coronavirus pandemic, Friedman would probably be turning in his grave. He expressed his desire back in 1991 to have money controlled by a computer, so that no one could interfere and adjust policy at will.

The video clip was posted on the Bitcoin Twitter channel, where it naturally garnered scores of likes, retweets, and applause. One commenter said:

He would prolly be all in with Bitcoin if he was still alive

Another stated:

We'll make it happen, Milton

Of course, with Bitcoin being the most successful experiment in tamper-proof, decentralized money running across computers (nodes), it's easy to forget that there were precursors to Bitcoin.

David Chaum released DigiCash in 1989 which made use of cryptography for private payments and introduced the concept of public and private keys. The project garnered support from libertarians and small groups in favor of a digital currency that could be transferred internationally free from government control.

While DigiCash and other pre-Bitcoin projects failed to gain traction, Friedman was well aware of the need for electronic money. He believed it would arrive in the future. In fact, in that same year, he said:

One thing we are still lacking, and will soon develop, is reliable e-cash: a method by which money can be transferred from A to B on the Internet without A knowing B and vice versa

With Bitcoin offering a viable alternative to fiat, entirely free from central actors, will Friedman's second desire come true as well? Will the Fed be removed completely? It's going to be interesting to see how things unfold.


Bitcoin and the Folly of the Safe-Haven Trade – Traders Magazine

By Philippe Bekhazi, Co-founder and CEO of XBTO Group.

While we are by no means back to normalcy, and the timetable for resuming business as usual is still murky, enough time has passed to hold a mirror up to how bitcoin responded to the initial coronavirus shock and the most pronounced market turmoil.

Firstly, any crisis, whether induced by biological, geopolitical or financial shocks, always results in a crisis of liquidity. Period. Peter sells some stock to pay for a loss on crypto, or vice versa. Paul sells high-yield bonds to shore up MBS positions. Or investors simply sell winners and go to cash and short-term Treasury instruments.

The vicious cycle spins and the same horror movie plays, as over-levered investors (mainly denominated in USD) need to sell indiscriminately and across asset classes. The rush to liquidity and the comfort of cash drives all correlations to one, despite promises and proselytization that some instruments are immune to this inevitable selling tsunami.

Therefore, in the most recent installment, when the VIX went to 80 and global uncertainty and fear on steroids reigned supreme, those over-levered pools of capital got flushed out again and red sprayed across our trading screens, regardless of the fundamental constructs and theses underpinning any asset classes or individual names. So, to hammer home the point once again: in any risk-off environment there is no safe haven, especially not one traded as thinly and with as much speculation as bitcoin.

Holding the x-ray up to the light

Let's take an objective look at the state of bitcoin, how it fared during the peak of the crisis, and its role in a portfolio anchored by a long-term view and mandate.

Firstly, bitcoin is not a deep market at all; its market cap of around $120 billion roughly equates to that of Tesla (TSLA). It is also currently driven, in the short term, more by over-levered speculators than by long-term holders, hence its volatility, which (while gravely exacerbated by the crisis) had already vacillated with large peak-to-trough moves from $10,000 to $3,500 (on some exchanges) in 2020 alone, before the coronavirus was even a driving factor. In contrast to a safe-haven cushion that would zig while the markets zagged, bitcoin dropped over 50% while the S&P dropped 30%.

While many speculators who employed reckless amounts of leverage have indeed been flushed out, savvy trading outfits and long-term investors continue to hold. They do so for the same fundamental reasons they built exposure in the first place, a thesis that has been starkly reinforced by unprecedented central bank action worldwide, including the US Fed's "QE to infinity" stance.

So, for the true believers in bitcoin, not only have the fundamentals not changed, they have become more pronounced and engraved into the investment psyche. Moreover, in contrast to erratic and impossible-to-predict monetary policies, there are more knowns within the bitcoin protocol, and no ability to crank up the printing presses at will. On the contrary, the halving in mid-May will reduce new supply: the block reward will fall from 12.5 BTC per block to 6.25 (a block is mined roughly every 10 minutes), so a greater worth is placed on each unit, or coin, similar to the way reduced mining output equates to higher demand for existing reserves of gold, also touted as an asset to hold in unhinged times.
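The scarcity arithmetic behind that point is simple enough to sketch. Here is a minimal Python illustration of the halving schedule; it mirrors the protocol's subsidy rule but is not consensus code, and the helper names are mine:

```python
# Toy sketch of Bitcoin's block-reward halving schedule (illustrative,
# not consensus code). Rewards are tracked in integer satoshis, as in
# the real protocol, to avoid floating-point drift.

SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000  # blocks between halvings (~4 years)

def block_reward(height: int) -> int:
    """Block subsidy in satoshis at a given block height."""
    era = height // HALVING_INTERVAL
    if era >= 64:  # after 64 halvings the subsidy has shifted to zero
        return 0
    return (50 * SATOSHIS_PER_BTC) >> era

# The May 2020 halving occurred at height 630,000:
print(block_reward(629_999) / SATOSHIS_PER_BTC)  # 12.5
print(block_reward(630_000) / SATOSHIS_PER_BTC)  # 6.25

# Summing every era's subsidy recovers the familiar ~21 million BTC cap:
cap = sum(HALVING_INTERVAL * block_reward(e * HALVING_INTERVAL) for e in range(64))
print(cap < 21_000_000 * SATOSHIS_PER_BTC)  # True
```

Because each era's subsidy is exactly half the previous one, total issuance converges just under 21 million coins, which is the fixed-supply property the safe-haven argument leans on.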

Bitcoin versus Gold

Now that we have morphed from an initial liquidity crisis into the potential next phase of a crisis (usually a credit crisis, but it could take other forms), we can talk more rationally about safe-haven assets and their long-term role in providing diversification and a non-correlated return stream to traditional equity and bond portfolios.

Many talking heads myopically preach either gold or bitcoin as the sole answer in dire times, often taking to public forums to attack each other and create a schism between the two camps. As noted, neither acts as promised (at least initially), and there is no need to play a zero-sum game here. One could argue the merits of holding both as diversification tools and alternate stores of value, each with its own idiosyncratic benefits and use cases.

While gold certainly has a deep history narrating its utility and potential worth, bitcoin has more attractive, contemporary features, such as no physical delivery, no storage, greater immutability, and real-life payment applications, all valid reasons why many have made the case for it as a replacement for gold.

My view is that institutional allocators and stewards of capital should have exposure to both, placing at least 50% of their current gold holdings (usually 1-2% of an overall portfolio) into bitcoin. Those taking a 10-year outlook will see the non-correlation benefits to their portfolios and their participation (alongside the continued weeding out of weak players) will also smooth out volatility as larger tickets and block trades counterbalance shorter-term trading strategies.

While perhaps more evident to a newer generation embracing a more futuristic mindset, bitcoin also provides exposure to an underlying network effect, the value of which is not currently priced in, built on the premise of greater decentralization, the utility and staying power of blockchain technologies, and emerging tokens of value (stablecoins, security tokens, etc.) that will coalesce to eventually make the asset truly reflect and catch up to the sum of its parts.

What does not kill you makes you stronger

While the current crisis engulfing our daily lives and the global economy is eerily unique, many of the lessons learned from prior market disruptions and dislocations still ring true:

Sophisticated traders and long-term institutional investors alike should be easing their way in and taking a nibble at this digital diversifier, especially against the macro backdrop of irreversible currency debasement.

Moreover, structural positives did arise from this latest stress test, with the crypto infrastructure holding up and proving its mettle across custody, trading and execution which all came together to function in a global 24/7 environment that is very different from traditional market machinations.

The ecosystem is evolving and getting stronger, and with that we need to advance the mindset, the rationale for investing, and the pools of incoming capital that will strengthen the asset class.

The views represented in this commentary are those of its author and do not reflect the opinion of Traders Magazine, Markets Media Group or its staff. Traders Magazine welcomes reader feedback on this column and on all issues relevant to the institutional trading community.


10 developer skills on the rise and five on the decline – TechCentral.ie


Here's how to ensure your programming chops stay sharp



Technology is constantly evolving and so, too, are the developer skills employers look for to make the most of what is emerging and what is solidifying its place in the enterprise.

As companies dive deeper into digital transformations and pivot to data-driven cultures, tech disciplines such as AI, machine learning, the internet of things (IoT) and IT automation are driving organisations' technology strategies and boosting demand for skills with tools, such as Docker, Ansible and Azure, that will help companies innovate and stay competitive in rapidly changing markets.

"What we're seeing is companies developing internal skill maps within their developer organisations so they can see what skills they have, and where they need to grow," says Vivek Ravisankar, CEO and co-founder of HackerRank. "They're building these competency frameworks to find their skill gaps and then put in place training and education to close those."

Understanding which disciplines and skills are up-and-coming and which are fading can help both companies and developers ensure they have the right skills and knowledge to succeed. And what better way to find that out than to mine developer job postings?

Indeed.com analysed job postings using a list of 500 key technology skill terms to see which ones employers are looking for more these days and which are falling out of favour. Such research has helped identify cutting-edge skills over the past five years, with some of the previous years' risers now well established, thanks to explosive growth.

Docker, for one, has risen more than 4,000% in the past five years and was listed in more than 5% of all US tech jobs in 2019. IoT, too, has shot up nearly 2,000% in the past half-decade, with Ansible (an IT automation, configuration management, and deployment tool) and Kafka (a tool for building real-time data pipelines and streaming apps) showing similarly strong growth. And, of course, the rise of data science has since cemented high demand for a range of skills, including artificial intelligence, machine learning, and data analysis.

Developers looking to add new skills to their repertoire should pay close attention to the most recent upticks in skills demand that Indeed identified from September 2018 to September 2019 and those falling out of favour as outlined below. Each skill is accompanied by average annual salary information for developers who possess these skills, according to PayScale.com.

PyTorch is an open-source machine-learning library written in Python, C++ and CUDA. It is used for applications such as computer vision and natural language processing. While primarily developed by Facebook's AI Research lab, it is offered free under the modified BSD license.

Rate of growth, 2018-2019: +138%

Average salary: $118,000
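To make the entry concrete, here is a minimal, hypothetical PyTorch training loop; it assumes the `torch` package is installed, and the toy data, learning rate and iteration count are my own choices rather than anything from the survey:

```python
# Minimal PyTorch sketch: fit y = 2x with one linear layer via gradient
# descent. Illustrative only -- real workloads use Datasets, DataLoaders
# and far larger models.
import torch

model = torch.nn.Linear(1, 1)                      # one weight + one bias
opt = torch.optim.SGD(model.parameters(), lr=0.1)
xs = torch.tensor([[0.0], [1.0], [2.0], [3.0]])
ys = 2 * xs                                        # target function y = 2x

for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(xs), ys)
    loss.backward()                                # autograd fills .grad
    opt.step()

print(model.weight.item())  # converges close to 2.0
```

The define-by-run style shown here (an ordinary Python loop with `backward()` computing gradients on the fly) is a large part of why the library has caught on with researchers.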

GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries against existing data sets. GraphQL was originally developed for internal use by Facebook but was released for public use in 2015; it is now governed by the GraphQL Foundation, hosted by the non-profit Linux Foundation. GraphQL supports reading, writing and subscribing to changes in data, and servers are available for multiple languages, including Haskell, JavaScript, Perl, Python, Ruby, Java, C#, Scala, Go, Elixir, Erlang, PHP, R and Clojure.

Rate of growth, 2018-2019: +80%

Average salary: $97,000
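As an illustration of the query language's shape, this Python snippet builds the JSON payload a GraphQL client would POST to a server; the schema is hypothetical and nothing is actually sent over the network:

```python
# Build (but do not send) a GraphQL request payload. The query targets a
# hypothetical schema; a real client would POST this JSON to a /graphql URL.
import json

query = """
query UserRepos($login: String!) {
  user(login: $login) {
    name
    repositories(first: 3) {
      nodes { name stargazerCount }
    }
  }
}
"""

payload = json.dumps({"query": query, "variables": {"login": "octocat"}})
decoded = json.loads(payload)
print(decoded["variables"]["login"])  # octocat
```

The client asks for exactly the fields it needs (a name plus three repositories), which is the over-fetching fix that distinguishes GraphQL from fixed REST endpoints.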

Kotlin is a cross-platform, statically typed, general-purpose programming language designed to interoperate with Java. The Java Virtual Machine (JVM) version of its standard library, in fact, depends on the Java Class Library, though Kotlin's syntax is more concise than Java's. In May 2019, Google announced that Kotlin is now its preferred language for Android developers; it has been included as an alternative to the standard Java compiler since the release of Android Studio 3.0 in 2017.

Rate of growth, 2018-2019: +76%

Average salary: $99,000

Vue is a progressive, incrementally adoptable JavaScript framework for building user interfaces on the Web. It allows users to extend HTML with attributes (called directives) that offer increased functionality to HTML applications through either built-in or user-defined directives.

Rate of growth, 2018-2019: +72%

Average salary: $116,000

.NET Core is a free, open-source, managed software framework for Windows, Linux and macOS. It is a cross-platform successor to Microsoft's proprietary .NET Framework and is released for use under the MIT License. It is primarily used in the development of desktop application software, AI/machine learning and IoT applications.

Rate of growth, 2018-2019: +71%

Average salary: $87,000

Formerly Looker Data Sciences, Looker is a data exploration and discovery business intelligence platform that was acquired by Google Cloud Platform in 2019. Looker's modelling language, LookML, enables data teams to define relationships in their database so business users can explore, save and download data without needing to know SQL. Looker was the first commercially available BI platform built for and aimed at massively parallel relational database management systems such as Amazon's Redshift, Google BigQuery, HP Vertica, Netezza and Teradata.

Rate of growth, 2018-2019: +68%

Average salary: $68,000

HashiCorp's Terraform is open-source infrastructure-as-code software that allows users to define and provision a data centre using the high-level HashiCorp Configuration Language (HCL) or JSON. Terraform supports a number of cloud infrastructure providers, including Amazon AWS, IBM Cloud, Google Cloud Platform, DigitalOcean, Microsoft Azure, and more.

Rate of growth, 2018-2019: +66%

Average salary: $104,000

Google Cloud Platform, Google's suite of cloud computing services, runs on the same infrastructure used for Google's end-user products, and includes a set of management tools and modular cloud services such as computing, data storage, data analytics and machine learning. The platform provides infrastructure as a service, platform as a service and serverless computing environments to customers, as well as Google's App Engine, which allows for developing and hosting web applications in Google-managed data centres.

Rate of growth, 2018-2019: +62%

Average salary: $191,000

Originally designed by Google, Kubernetes (sometimes abbreviated as K8s) is an open-source container orchestration system for automating application deployment, scaling and management. Kubernetes provides a platform for application container automation, deployment, scaling and operation across clusters of hosts.

Rate of growth, 2018-2019: +61%

Average salary: $115,000

Spring Boot is an open-source, Java-based integration framework used to create microservices and to build stand-alone and production-ready Spring applications. Spring Boot is built on the Spring framework and gives developers a platform on which to jumpstart development of Spring applications. Spring Boot uses pre-configured, injectable dependencies to speed up development and save developers time.

Rate of growth, 2018-2019: +58%

Average salary: $78,000

As fast as some tech skills rise, others fall. Five skills that dropped off significantly in the year between 2018 and 2019 are:

Firefox, the free, open-source web browser from the Mozilla Foundation, has seen its popularity wane in recent years; developers with these skills may also find they are not in demand.

Rate of growth, 2018-2019: -47%

Open source software from HashiCorp for building and maintaining portable virtual software development environments, Vagrant tries to simplify configuration management of virtual environments.

Rate of growth, 2018-2019: -41%

Skills related to Chrome, Google's web browser, have also decreased in popularity between 2018 and 2019.

Rate of growth, 2018-2019: -33%

Production of optics systems has seen a steep decline of late.

Rate of growth, 2018-2019: -33%

The Global System for Mobile Communications (GSM) is an older telecommunications standard for mobile phones, which could explain why it has decreased in popularity.

Rate of growth, 2018-2019: -26%

IDG News Service



Yes, Section 215 Expired. Now What? – EFF

On March 15, 2020, Section 215 of the PATRIOT Act, a surveillance law with a rich history of government overreach and abuse, expired. Along with two other PATRIOT Act provisions, Section 215 lapsed after lawmakers failed to reach an agreement on a broader set of reforms to the Foreign Intelligence Surveillance Act (FISA).

In the week before the law expired, the House of Representatives passed the USA FREEDOM Reauthorization Act, without committee markup or floor amendments, which would have extended Section 215 for three more years, along with some modest reforms.

In order for any bill to become law, the House and Senate must pass an identical bill, and the President must sign it. That didn't happen with the USA FREEDOM Reauthorization Act. Instead, knowing the vote to proceed with the House's bill in the Senate without debating amendments was going to fail, Senator McConnell brought a bill to the floor that would extend all the expiring provisions for another 77 days, without any reforms at all. Senator McConnell's extension passed the Senate without debate.

But the House of Representatives left town without passing Senator McConnell's bill, at least until May 12, 2020, and possibly longer. That means that Section 215 of the USA PATRIOT Act, along with the so-called "lone wolf" and "roving wiretap" provisions, has expired, at least for a few weeks.

EFF has argued that if Congress can't agree on real reforms to these problematic laws, they should be allowed to expire. While we are pleased that Congress didn't mechanically reauthorize Section 215, it is only one of a number of largely overlapping surveillance authorities. The loss of the current version of the law will still leave the government with a range of tools that remains incredibly powerful. These include other provisions of FISA as well as surveillance authorities used in criminal investigations, many of which can include gag orders to protect sensitive information.

In addition, the New York Times and others have noted that Section 215's expiration clause contains an exception permitting the intelligence community to use the law for investigations that were ongoing at the time of expiration, or to investigate offenses or potential offenses that occurred before the sunset. Broad reliance on this exception would subvert Congress's intent to have Section 215 truly expire, and the Foreign Intelligence Surveillance Court should carefully, and publicly, circumscribe any attempt to rely on it.

Although Section 215 and the two other provisions have expired, that doesn't mean they're gone forever. For example, in 2015, during the debate over the USA FREEDOM Act, these same provisions were allowed to expire for a short period of time, and then Congress reauthorized them for another four years. While transparency is still lacking in how these programs operate, the intelligence community did not report a disruption in any of these critical programs at that time. If Congress chooses to reauthorize these programs in the next couple of months, it's unlikely that this disruption will have a lasting impact.

The Senate plans to vote on a series of amendments to the House-passed USA FREEDOM Reauthorization Act in the near future. Any changes made to the bill would then have to be approved by the House and signed by the President. This means that Congress has the opportunity to discuss whether these authorities are actually needed, without the pressure of a ticking clock.

As a result, the House and the Senate should take this unique opportunity to learn more about these provisions and create additional oversight of the surveillance programs that rely on them. The expired provisions should remain expired until Congress enacts the additional, meaningful reforms we've been seeking.

You can read more about what EFF is calling for when it comes to reining in NSA spying, reforming FISA, and restoring Americans privacy here.


Si2 Launches Survey on Artificial Intelligence and Machine Learning in EDA – AiThority

Silicon Integration Initiative (Si2) has launched an industry-wide survey to identify planned usage and structural gaps for prioritizing and implementing artificial intelligence and machine learning in semiconductor electronic design automation (EDA).


The survey is organized by a recently formed Si2 Special Interest Group chaired by Joydip Das, senior engineer, Samsung Electronics, and co-chaired by Kerim Kalafala, senior technical staff member, EDA, and master inventor, IBM. The 18-member group will identify where industry collaboration will help eliminate deficiencies caused by a lack of common languages, data models, labels, and access to robust and categorized training data.


This SIG is open to all Si2 members. Current members include:

Advanced Micro Devices
ANSYS
Cadence Design Systems
Hewlett Packard Enterprise
IBM
Intel
Intento Design
Keysight Technologies
Mentor, a Siemens Business
NC State University
PFD Solutions
Qualcomm
Samsung Electronics
Sandia National Laboratories
Silvaco
Synopsys
Thrace Systems
Texas Instruments

The survey is open from April 15 to May 15.

Leigh Anne Clevenger, Si2 senior data scientist, said that the survey results would help prioritize SIG activities and timelines. "The SIG will identify and develop requirements for standards that ensure data and software interoperability, enabling the most efficient design flows for production," Clevenger said. "The ultimate goal is to remove duplicative work and the need for data model translators, and focus on opening avenues for breakthroughs from suppliers and users alike."


"High manufacturing costs and the growing complexity of chip development are spurring disruptive technologies such as AI and ML," Clevenger explained. "The Si2 platform provides a unique opportunity for semiconductor companies, EDA suppliers and IP providers to voice their needs and focus resources on common solutions, including enabling and leveraging university research."


New AI improves itself through Darwinian-style evolution – Big Think

Machine learning has fundamentally changed how we engage with technology. Today, it's able to curate social media feeds, recognize complex images, drive cars down the interstate, and even diagnose medical conditions, to name a few tasks.

But while machine learning technology can do some things automatically, it still requires a lot of input from human engineers to set it up, and point it in the right direction. Inevitably, that means human biases and limitations are baked into the technology.

So, what if scientists could minimize their influence on the process by creating a system that generates its own machine-learning algorithms? Could it discover new solutions that humans never considered?

To answer these questions, a team of computer scientists at Google developed a project called AutoML-Zero, which is described in a preprint paper published on arXiv.

"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," the paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."

Automatic machine learning (AutoML) is a fast-growing area of deep learning. In simple terms, AutoML seeks to automate the end-to-end process of applying machine learning to real-world problems. Unlike other machine-learning techniques, AutoML requires relatively little human effort, which means companies might soon be able to utilize it without having to hire a team of data scientists.

AutoML-Zero is unique because it uses simple mathematical concepts to generate algorithms "from scratch," as the paper states. Then, it selects the best ones, and mutates them through a process that's similar to Darwinian evolution.

AutoML-Zero first randomly generates 100 candidate algorithms, each of which then performs a task, like recognizing an image. The performance of these algorithms is compared to hand-designed algorithms. AutoML-Zero then selects the top-performing algorithm to be the "parent."

"This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed," the paper states.

The system can create thousands of populations at once, which are mutated through random procedures. Over enough cycles, these self-generated algorithms get better at performing tasks.
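The parent-copy-mutate-drop-oldest loop described above (a scheme sometimes called regularized evolution) can be sketched on a toy problem. In this illustration the "algorithms" are just numbers and the fitness function is mine, standing in for AutoML-Zero's evaluated instruction sequences:

```python
# Toy sketch of the evolutionary loop described above. Candidates are plain
# floats and fitness peaks at x == 3.0; AutoML-Zero evolves whole instruction
# lists instead, but the selection/mutation/aging mechanics are the same idea.
import random

random.seed(0)

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2  # higher is better; maximum at x == 3

population = [random.uniform(-10, 10) for _ in range(100)]

for _ in range(2000):
    # Tournament selection: pick the fittest of a random sample as the parent
    parent = max(random.sample(population, 10), key=fitness)
    child = parent + random.gauss(0, 0.5)  # mutate a copy of the parent
    population.append(child)               # the child joins the population...
    population.pop(0)                      # ...and the oldest member is removed

best = max(population, key=fitness)
print(round(best, 2))  # converges close to 3.0
```

Removing the oldest member rather than the weakest keeps the population turning over, which helps the search escape early local optima.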

"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

If computer scientists can scale up this kind of automated machine-learning to complete more complex tasks, it could usher in a new era of machine learning where systems are designed by machines instead of humans. This would likely make it much cheaper to reap the benefits of deep learning, while also leading to novel solutions to real-world problems.

Still, the recent paper was a small-scale proof of concept, and the researchers note that much more research is needed.

"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.



Research Team Uses Machine Learning to Track Covid-19 Spread in Communities and Predict Patient Outcomes – The Ritz Herald

The COVID-19 pandemic is raising critical questions regarding the dynamics of the disease, its risk factors, and the best approach to address it in healthcare systems. MIT Sloan School of Management Prof. Dimitris Bertsimas and nearly two dozen doctoral students are using machine learning and optimization to find answers. Their effort is summarized in the COVIDanalytics platform, where their models are generating accurate real-time insight into the pandemic. The group is focusing on four main directions: predicting disease progression, optimizing resource allocation, uncovering clinically important insights, and assisting in the development of COVID-19 testing.

"The backbone for each of these analytics projects is data, which we've extracted from public registries, clinical Electronic Health Records, as well as over 120 research papers that we compiled in a new database. We're testing our models against incoming data to determine if they make good predictions, and we continue to add new data and use machine learning to make the models more accurate," says Bertsimas.

The first project addresses dilemmas at the front line, such as the need for more supplies and equipment. Protective gear must go to healthcare workers and ventilators to critically ill patients. The researchers developed an epidemiological model to track the progression of COVID-19 in a community, so hospitals can predict surges and determine how to allocate resources.

The team quickly realized that the dynamics of the pandemic differ from one state to another, creating opportunities to mitigate shortages by pooling some of the ventilator supply across states. Thus, they employed optimization to see how ventilators could be shared among the states and created an interactive application that can help both the federal and state governments.
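A toy illustration of this kind of cross-state pooling is sketched below; the surplus and shortfall numbers are invented, and the researchers' actual model uses formal optimization rather than this greedy heuristic:

```python
# Greedy sketch of cross-state ventilator pooling (illustrative numbers only).
# surplus > 0 means spare ventilators; surplus < 0 means a shortfall.
surplus = {"A": 120, "B": -80, "C": 40, "D": -60}

donors = {s: v for s, v in surplus.items() if v > 0}
transfers = []  # (from_state, to_state, count)

# Serve the worst shortfalls first, drawing from whichever donors have spares
for state, balance in sorted(surplus.items(), key=lambda kv: kv[1]):
    need = -balance
    if need <= 0:
        continue
    for donor in list(donors):
        send = min(donors[donor], need)
        if send:
            transfers.append((donor, state, send))
            donors[donor] -= send
            need -= send
        if need == 0:
            break

print(transfers)
```

With these made-up figures, states A and C cover B's and D's shortfalls entirely; a real allocation model would also weigh transport time, forecast peaks, and uncertainty, which is why the team used optimization rather than a one-pass heuristic.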

"Different regions will hit their peak number of cases at different times, meaning their need for supplies will fluctuate over the course of weeks. This model could be helpful in shaping future public policy," notes Bertsimas.

Recently, the researchers connected with long-time collaborators at Hartford HealthCare to deploy the model, helping the network of seven campuses to assess their needs. Coupling county level data with the patient records, they are rethinking the way resources are allocated across the different clinics to minimize potential shortages.

The third project focuses on building a mortality and disease progression calculator to predict whether someone has the virus, and whether they will need hospitalization or even more intensive care. Bertsimas points out that current advice for patients is at best based on age and perhaps some symptoms. As data about individual patients is limited, their model uses machine learning based on symptoms, demographics, comorbidities and lab test results, as well as a simulation model to generate patient data. Data from new studies is continually added to the model as it becomes available.

"We started with data published in Wuhan, Italy, and the U.S., including infection and death rates as well as data coming from patients in the ICU and the effects of social isolation. We enriched them with clinical records from a major hospital in Lombardy, which was severely impacted by the spread of the virus. Through that process, we created a new model that is quite accurate. Its power comes from its ability to learn from the data," says Bertsimas.

"By probing the severity of the disease in a patient, it can actually guide clinicians in congested areas in a much better way," he adds.

Their fourth project involves creating a convenient test for COVID-19. Using data from about 100 samples from Morocco, the group is applying machine learning to augment a test previously designed at the Mohammed VI Polytechnic University, aiming for more precise results. The model can accurately detect the virus in patients around 90% of the time, while keeping false positives low.

The team is currently working on expanding the epidemiological model to a global scale, creating more accurate and informed clinical risk calculators, and identifying potential paths back to normality.

"We have released all our source code and made the public database available for other people too. We will continue to do our own analysis, but if other people have better ideas, we welcome them," says Bertsimas.

Original post:
Research Team Uses Machine Learning to Track Covid-19 Spread in Communities and Predict Patient Outcomes - The Ritz Herald

Model quantifies the impact of quarantine measures on Covid-19’s spread – MIT News

The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.

Every day for the past few weeks, charts and graphs plotting the projected apex of Covid-19 infections have been splashed across newspapers and cable news. Many of these models have been built using data from studies on previous outbreaks like SARS or MERS. Now, a team of engineers at MIT has developed a model that uses data from the Covid-19 pandemic in conjunction with a neural network to determine the efficacy of quarantine measures and better predict the spread of the virus.

"Our model is the first which uses data from the coronavirus itself and integrates two fields: machine learning and standard epidemiology," explains Raj Dandekar, a PhD candidate studying civil and environmental engineering. Together with George Barbastathis, professor of mechanical engineering, Dandekar has spent the past few months developing the model as part of the final project in class 2.168 (Learning Machines).

Most models used to predict the spread of a disease follow what is known as the SEIR model, which groups people into susceptible, exposed, infected, and recovered. Dandekar and Barbastathis enhanced the SEIR model by training a neural network to capture the number of infected individuals who are under quarantine, and therefore no longer spreading the infection to others.
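Schematically, the enhancement amounts to adding a quarantined compartment whose members no longer transmit. Below is a minimal forward-Euler sketch of that idea; the parameter values are illustrative, not the team's fitted ones, and the real model learns the quarantine term with a neural network rather than fixing a constant rate.

```python
def seir_with_quarantine(beta=0.5, sigma=0.2, gamma=0.1, q=0.05,
                         days=160, N=1_000_000, I0=1_000):
    """Toy SEIR model with a quarantine compartment: infectious
    individuals enter quarantine at rate q and stop transmitting.
    Returns the daily count of non-quarantined infectious people."""
    S, E, I, Q = N - I0, 0.0, float(I0), 0.0
    history = []
    for _ in range(days):             # forward Euler, one-day steps
        new_exposed = beta * S * I / N  # only free infecteds transmit
        new_infectious = sigma * E
        new_quarantined = q * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_quarantined - gamma * I
        Q += new_quarantined - gamma * Q  # quarantined also recover
        history.append(I)
    return history

# A stronger quarantine rate lowers the epidemic peak.
weak = seir_with_quarantine(q=0.05)
strong = seir_with_quarantine(q=0.3)
```

Quarantine here acts as an extra removal channel out of the infectious pool, which is exactly why it lowers the effective reproduction number discussed below.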

The model finds that in places like South Korea, where there was immediate government intervention in implementing strong quarantine measures, the virus spread plateaued more quickly. In places that were slower to implement government interventions, like Italy and the United States, the effective reproduction number of Covid-19 remains greater than one, meaning the virus has continued to spread exponentially.

The machine learning algorithm shows that with the current quarantine measures in place, the plateau for both Italy and the United States will arrive somewhere between April 15 and April 20. This prediction is similar to other projections like that of the Institute for Health Metrics and Evaluation.

"Our model shows that quarantine restrictions are successful in getting the effective reproduction number from larger than one to smaller than one," says Barbastathis. "That corresponds to the point where we can flatten the curve and start seeing fewer infections."
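In a simple SIR-style version of this setup, the effect Barbastathis describes is easy to make concrete: quarantine adds an extra removal rate q on top of the recovery rate, and the outbreak stops growing once the effective reproduction number drops below one. The parameter values below are illustrative only.

```python
def effective_reproduction_number(beta, gamma, q, susceptible_fraction=1.0):
    """R_eff for an SIR-style model where infectious individuals are
    removed by recovery (rate gamma) and by quarantine (rate q).
    Illustrative formula; the MIT model learns the quarantine
    term from data rather than assuming a constant rate."""
    return susceptible_fraction * beta / (gamma + q)

# No quarantine: R_eff = 0.5 / 0.1 = 5, so cases grow exponentially.
no_quarantine = effective_reproduction_number(beta=0.5, gamma=0.1, q=0.0)

# Strong quarantine: R_eff = 0.5 / 0.55 ≈ 0.91, so the curve flattens.
strong_quarantine = effective_reproduction_number(beta=0.5, gamma=0.1, q=0.45)
```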

Quantifying the impact of quarantine

In early February, as news of the virus's troubling infection rate started dominating headlines, Barbastathis proposed a project to students in class 2.168. At the end of each semester, students in the class are tasked with developing a physical model for a real-world problem and a machine learning algorithm to address it. He proposed that a team of students work on mapping the spread of what was then simply known as the coronavirus.

"Students jumped at the opportunity to work on the coronavirus, immediately wanting to tackle a topical problem in typical MIT fashion," adds Barbastathis.

One of those students was Dandekar. "The project really interested me because I got to apply this new field of scientific machine learning to a very pressing problem," he says.

As Covid-19 started to spread across the globe, the scope of the project expanded. What had originally started as a project looking just at spread within Wuhan, China grew to also include the spread in Italy, South Korea, and the United States.

The duo started modeling the spread of the virus in each of these four regions after the 500th case was recorded. That milestone marked a clear delineation in how different governments implemented quarantine orders.

Armed with precise data from each of these countries, the research team took the standard SEIR model and augmented it with a neural network that learns how infected individuals under quarantine impact the rate of infection. They trained the neural network through 500 iterations so it could then teach itself how to predict patterns in the infection spread.

Using this model, the research team was able to draw a direct correlation between quarantine measures and a reduction in the effective reproduction number of the virus.

"The neural network is learning what we are calling the quarantine control strength function," explains Dandekar. "In South Korea, where strong measures were implemented quickly, the quarantine control strength function has been effective in reducing the number of new infections. In the United States, where quarantine measures have been slowly rolled out since mid-March, it has been more difficult to stop the spread of the virus."
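The idea of learning quarantine strength from case counts can be illustrated with a toy version: simulate an outbreak with a hidden quarantine rate, then recover that rate by minimizing the fit error against the observed curve. Here a grid search stands in for the neural-network training described in the article, and all numbers are synthetic.

```python
import numpy as np

def simulate_infected(beta, gamma, q, days, N=1_000_000, I0=500):
    """Toy SIR-with-quarantine simulator; returns daily infectious counts."""
    S, I = N - I0, float(I0)
    counts = []
    for _ in range(days):
        new_infections = beta * S * I / N
        S -= new_infections
        I += new_infections - (gamma + q) * I  # recovery + quarantine removal
        counts.append(I)
    return np.array(counts)

# Synthetic "observed" case curve generated with a hidden quarantine strength.
true_q = 0.12
observed = simulate_infected(beta=0.4, gamma=0.1, q=true_q, days=60)

# Recover q by least squares over a grid -- a stand-in for the
# 500-iteration neural-network fit described in the article.
grid = np.linspace(0.0, 0.3, 301)
errors = [float(np.sum((simulate_infected(0.4, 0.1, q, 60) - observed) ** 2))
          for q in grid]
q_hat = float(grid[int(np.argmin(errors))])
```

The actual model replaces the single constant q with a learned, time-varying quarantine control strength function, which is what lets it compare how quickly different countries' interventions took effect.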

Predicting the plateau

As the number of cases in a particular country decreases, the forecasting model transitions from an exponential regime to a linear one. Italy began entering this linear regime in early April, with the U.S. not far behind it.

The machine learning algorithm Dandekar and Barbastathis have developed predicted that the United States will start to shift from an exponential regime to a linear regime in the first week of April, with a stagnation in the infected case count likely between April 15 and April 20. It also suggests that the infection count will reach 600,000 in the United States before the rate of infection starts to stagnate.

"This is a really crucial moment in time. If we relax quarantine measures, it could lead to disaster," says Barbastathis.

According to Barbastathis, one only has to look to Singapore to see the dangers that could stem from relaxing quarantine measures too quickly. While the team didn't study Singapore's Covid-19 cases in their research, the second wave of infection the country is currently experiencing reflects their model's finding about the correlation between quarantine measures and infection rate.

"If the U.S. were to follow the same policy of relaxing quarantine measures too soon, we have predicted that the consequences would be far more catastrophic," Barbastathis adds.

The team plans to share the model with other researchers in the hopes that it can help inform Covid-19 quarantine strategies that can successfully slow the rate of infection.

Read the original post:
Model quantifies the impact of quarantine measures on Covid-19's spread - MIT News

Machine Learning as a Service (MLaaS) Market Significant Growth with Increasing Production to 2026 | Broadcom, EMC, GEMALTO – Cole of Duty

Futuristic Reports' "Global Machine Learning as a Service (MLaaS) Market Report 2020 by Players, Regions, Type, and Application, Forecast to 2026" provides industry analysis and forecasts for 2020-2026. The global Machine Learning as a Service (MLaaS) market analysis delivers important insights and provides a competitive and useful advantage to readers. MLaaS processes and economic growth are analyzed as well, and the data charts are backed by statistical tools.

The report also classifies the different Machine Learning as a Service (MLaaS) market segments based on their definitions, and scrutinizes downstream consumers and upstream materials. Each segment includes an in-depth explanation of the factors that drive and restrain it.

Key Players Mentioned in the study are Broadcom, EMC, GEMALTO, SYMANTEC, VASCO DATA SECURITY INTERNATIONAL, AUTHENTIFY, ENTRUST DATACARD, SECUREAUTH, SECURENVOY, TELESIGN

For Better Understanding, Download FREE Sample Copy of Machine Learning as a Service (MLaaS) Market Report @ https://www.futuristicreports.com/request-sample/67627

Key Issues Addressed by the Machine Learning as a Service (MLaaS) Market Report: Segmentation analysis is essential for identifying the factors behind the market's growth and development in a particular sector. The report offers well-summarized, reliable information about each segment's growth, development, production, demand, types, and applications, which players can use to focus their efforts.

Business Segmentation of the Machine Learning as a Service (MLaaS) Market:

On the basis of application, this report focuses on the status and outlook of Machine Learning as a Service (MLaaS) for major applications/end users, covering sales volume and growth rate for each application, including:

BFSI Market, Medical Market, IT Market, Retail Market, Entertainment Market, Logistics Market, Other

On the basis of types/products, this Machine Learning as a Service (MLaaS) report displays the revenue (million USD), product price, market share, and growth rate of each type, split into:

Small and Medium-Sized Enterprises, Big Companies

Grab Best Discount on Machine Learning as a Service (MLaaS) Market Research Report [Single User | Multi User | Corporate Users] @ https://www.futuristicreports.com/check-discount/67627

NOTE: Our team is studying the COVID-19 impact on various industry verticals and at the country level for better analysis of markets and industries. The 2020 edition of this report provides additional commentary on the latest scenario, the economic slowdown, and COVID-19's impact on the overall industry, along with qualitative information on when the industry could get back on track and what measures industry players are taking to deal with the current situation.

Alternatively, you can drop an email to [emailprotected] if you are looking for an economic analysis of the shift toward the New Normal for any country or industry vertical.

Machine Learning as a Service (MLaaS) Market Regional Analysis Includes:

Asia-Pacific (Vietnam, China, Malaysia, Japan, Philippines, Korea, Thailand, India, Indonesia, and Australia)
Europe (Turkey, Germany, Russia, UK, Italy, France, etc.)
North America (the United States, Mexico, and Canada)
South America (Brazil, etc.)
The Middle East and Africa (GCC countries and Egypt)

Insights the Machine Learning as a Service (MLaaS) Study Provides:

Gain a perceptive study of the current Machine Learning as a Service (MLaaS) sector and a comprehension of the industry;
Describe the Machine Learning as a Service (MLaaS) advancements, key issues, and methods to moderate the advancement threats;
Competitors: in this chapter, leading players are studied with respect to their company profile, product portfolio, capacity, price, cost, and revenue;
A separate chapter on Machine Learning as a Service (MLaaS) market structure to gain insights on leaders' stance toward the market [mergers and acquisitions / recent investments and key developments];
Patent analysis: number of patents filed in recent years.

Table of Content:

Global Machine Learning as a Service (MLaaS) Market Size, Status and Forecast 2026
1. Market Introduction and Market Overview
2. Industry Chain Analysis
3. Machine Learning as a Service (MLaaS) Market, by Type
4. Machine Learning as a Service (MLaaS) Market, by Application
5. Production, Value ($) by Regions
6. Production, Consumption, Export, Import by Regions (2016-2020)
7. Market Status and SWOT Analysis by Regions (Sales Point)
8. Competitive Landscape
9. Analysis and Forecast by Type and Application
10. Channel Analysis
11. New Project Feasibility Analysis
12. Market Forecast 2020-2026
13. Conclusion

Enquire More Before Buying @ https://www.futuristicreports.com/send-an-enquiry/67627

For More Information Kindly Contact:

Futuristic Reports
Tel: +1-408-520-9037
Media Release: https://www.futuristicreports.com/press-releases

Follow us on Blogger @ https://futuristicreports.blogspot.com/

Read the original here:
Machine Learning as a Service (MLaaS) Market Significant Growth with Increasing Production to 2026 | Broadcom, EMC, GEMALTO - Cole of Duty

Respond Software Unlocks the Value in EDR Data with Robotic Decision – AiThority

The Respond Analyst Simplifies Endpoint Analysis, Delivers Real-Time, Expert Diagnosis of Security Incidents at a Fraction of the Cost of Manual Monitoring and Investigation

Respond Software today announced analysis support of Endpoint Detection and Response (EDR) data from Carbon Black, CrowdStrike, and SentinelOne by the Respond Analyst, the virtual cybersecurity analyst for security operations. The Respond Analyst provides customers with expert EDR analysis right out of the box, creating immediate business value in security operations for organizations across industries.

The Respond Analyst provides a highly cost-effective and thorough way to analyze security-related alerts and data to free up people and budget from initial monitoring and investigative tasks. The software uses integrated reasoning decision-making that leverages multiple alerting telemetries, contextual sources and threat intelligence to actively monitor and triage security events in near real-time. Respond Software is now applying this unique approach to EDR data to reduce the number of false positives from noisy EDR feeds and turn transactional sensor data into actionable security insights.

Mike Armistead, CEO and co-founder of Respond Software, said: "As security teams increase investment in EDR capabilities, they not only must find and retain endpoint analysis capabilities but also sift through massive amounts of data to separate false positives from real security incidents. The Respond Analyst augments security personnel with our unique Robotic Decision Automation software that delivers thorough, consistent, 24x7x365 analysis of security data from network to endpoint, saving budget and time for the security team. It derives maximum value from EDR at a level of speed and efficiency unmatched by any other solution today."

Jim Routh, head of enterprise information risk management, MassMutual, said: "Data science is the foundation for MassMutual's cybersecurity program. Applying mathematics and machine learning models to security operations functions to improve productivity and analytic capability is an important part of this foundation."

Jon Davis, CEO of SecureNation, said: "SecureNation has made a commitment to its customers to deliver the right technology that enables the right security automation at lower operating costs. The EDR skills enabled by the Respond Analyst will make it possible for SecureNation to continue to provide the most comprehensive, responsive managed detection and response service available to support the escalating needs of enterprises today and into the future."

EDR solutions capture and evaluate a broad spectrum of attacks spanning the MITRE ATT&CK Framework. These products often produce alerts with a high degree of uncertainty, requiring costly triage by skilled security analysts that can take five to 15 minutes on average to complete. A security analyst must pivot to piece together information from various security product consoles, generating multiple manual queries per system, process and account. The analyst must also conduct context and scoping queries. All this analysis requires deep expert system knowledge in order to isolate specific threats.

The Respond Analyst removes the need for multiple console interactions by automating the investigation, scoping and prioritization of alerts into real, actionable incidents. With the addition of EDR analysis, Respond Software broadens the integrated reasoning capabilities of the Respond Analyst to include endpoint system details identifying incidents related to suspect activity from binaries, client apps, PowerShell and other suspicious entities.

Combining EDR analysis with insights from network intrusion detection, web filtering and other network telemetries, the Respond Analyst extends its already comprehensive coverage. This allows security operations centers to increase visibility, efficiency and effectiveness, thereby reducing false positives and increasing the probability of identifying true malicious and actionable activity early in the attack cycle.

Read more from the original source:
Respond Software Unlocks the Value in EDR Data with Robotic Decision - AiThority