Cryptoverse: Shrimps and whales keep bitcoin afloat – Reuters.com

July 12 (Reuters) - The shrimps of the crypto world have joined the whales in a glorious last stand to banish the bleak bitcoin winter.

These two contrasting groups are both HODLers - investors in bitcoin as a long-term proposition who refuse to sell their holdings - and they are determined to drive back the bears, despite their portfolios being deep in the red.

Shrimps, investors that hold less than 1 bitcoin, are collectively adding to their balance at a rate of 60,460 bitcoin per month, the most aggressive rate in history, according to an analysis by data firm Glassnode.

Whales, those with more than 1,000 bitcoin, were adding 140,000 coins per month, the highest rate since January 2021.

"The market is approaching a HODLer-led regime," Glassnode said in a note, referring to the cohort whose name emerged years ago from a trader misspelling "hold" on an online forum.

After bitcoin's worst month in 11 years in June, the decline appears to have abated as transaction demand seemed to be moving sideways, according to Glassnode, indicating a stagnation of new entrants and a probable retention of a base-load of users, ie HODLers.

Bitcoin has been hovering around $19,000 to $21,000 over the past four weeks, less than a third of its $69,000 peak in 2021.

"There is a saying in crypto markets - diamond hands. You've not really lost the money, if you've not pulled out. There may be a day it might come back up," said Neo, the online alias of a 26-year old graphic designer at a fintech company in Bangalore.

As the crypto bear market enters its eighth month, his crypto portfolio is down by 70% - though he said it was money he was "okay with losing". He does not intend to sell, holding out for a possible rebound in the coming years.

Like Neo, most HODLer portfolios are under water, yet many are refusing to bail.

Some 55% of U.S.-based crypto retail investors held their investments in response to the recent selloff, while around 16% of investors globally increased their crypto exposure in June, according to a survey of retail investors by eToro.

"Crypto is an asset class disproportionately held by younger investors who are more risk tolerant since they have, say, 30 more years to earn it all back," said Ben Laidler, eToro's global markets strategist.

Another class of staunch crypto HODLers - bitcoin miners - is increasingly under pressure as they face the double whammy of cratering prices and high electricity costs. The cost of mining a bitcoin is higher than the digital asset's price for some miners, Citi analyst Joseph Ayoub said.

The unfavorable environment for many of these miners, who have loans against their mining systems, has forced them to pull from their stash.

Core Scientific (CORZ.O) sold 7,202 bitcoin last month to pay for its mining rigs and fund operations, bringing its total holdings down to 1,959 bitcoin.

While Marathon Digital Holdings (MARA.O) said it had not sold any bitcoin since October 2020, the firm said it may sell a portion of its monthly production to cover costs.

The Valkyrie bitcoin miners ETF (WGMI.O) slumped 65% last quarter, outpacing bitcoin's 56% fall.

Lessons from the crypto winter in 2018 were that the miners who survived were the ones that kept producing even if they were under water. That approach is unlikely to work this time round though, said Chris Bae, CEO of Enhanced Digital Group, which designs hedging strategies for crypto miners.

For the bosses of mining firms, Bae added, the focus is now on the "need to think through the next crypto winter and have that game plan before it happens rather than during it."

Reporting by Medha Singh and Lisa Pauline Mattackal in Bengaluru; Editing by Vidya Ranganathan and Pravin Char

Our Standards: The Thomson Reuters Trust Principles.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias.

More:
Cryptoverse: Shrimps and whales keep bitcoin afloat - Reuters.com

For Bitcoin To Win, We Must Burn The Ships – Bitcoin Magazine

This is an opinion editorial by Interstellar Bitcoin, a contributor to Bitcoin Magazine.

Whether we like it or not, Bitcoiners still live in a world built on fiat currency. Fiat rules everything around us, from the food we eat to the houses we live in. Until we burn the ships, we are not prepared to realize our eventual victory.

In 1519, Hernán Cortés led a Spanish army to modern-day Mexico to conquer the Aztec Empire. Upon landfall, two leaders mutinied to return to Cuba at the order of the governor who had commissioned the fleet Cortés led. In response, Cortés scuttled his fleet to forestall any future mutiny by closing the sole path of retreat.

Against all odds, Cortés went on to defeat an opposing force of over 300,000 Aztecs; a few thousand Spaniards, superior military technology, an unforeseen smallpox outbreak, and shrewd political alliances ultimately prevailed.

Many of those on the expedition had never seen combat before, including Cortés himself. Historians will point to August 13, 1521, as the final victory of the Spanish campaign against the Aztec Empire. However, Cortés truly won the moment he burned the ships.

At its core, the metaphor of burning the ships represents the point of no return: the psychological commitment to crossing a line in the sand once and for all. Beyond this event horizon, there can be no hedging or looking over one's shoulder. From now on, everything, all thoughts and efforts, must be focused on succeeding in the new reality.

Like Cortés, Bitcoiners have crossed the Atlantic to the promised land. However, while Bitcoiners still use fiat money, we will not be truly free. Until we burn the ships, we will not win.

Bitcoiners are the remnant. We lead by example. We must show the world we are not afraid to live on a bitcoin standard. We must use bitcoin not just as our store of value but as the unit of account and medium of exchange for our daily lives.

We must strive for peace and prosperity, by building circular bitcoin economies that remain resilient against the volatility of the fiat exchange rate. We must keep studying to build the knowledge and intellectual depth upon which rigorous discourse can thrive. We must build large stacks upon which generational wealth is built. In the end, only the strong survive.

There is a nascent movement in the Bitcoin cultural sphere known as #GetOnZero which polarizes many people. This movement represents burning the ships. This state change is both functional and psychological. It drives companies to build better products for Bitcoiners. It drives Bitcoiners to harden our resolve as Bitcoiners. It shows we are willing to go down with the ship. It proves we are fearless in the face of insurmountable odds.

Give me Bitcoin or give me death.

The critics will say it's too early or point to statistics in an attempt to rationalize why holding some fiat currency is better. While such notions may seem correct on paper, in practice, until Bitcoiners take that grand leap of faith, we are not prepared to do what it takes to win. Until we are ready to completely let go of fiat currency, it will continue to culturally and functionally survive. Bitcoiners, like Cortés, must embrace burning the ships. Once we do, the process of hyperbitcoinization already underway will rapidly accelerate.

The moment Bitcoiners burn the ships is the moment Bitcoiners win.

This is a guest post by Interstellar Bitcoin. Opinions expressed are entirely their own and do not necessarily reflect those of BTC, Inc. or Bitcoin Magazine.

More:
For Bitcoin To Win, We Must Burn The Ships - Bitcoin Magazine

As Bitcoin price falls, is cryptocurrency still worth buying? | Mint – Mint

The onslaught of crypto winter and recent events have marred the spirits of crypto investors. Events such as Vauld (a leading crypto exchange platform) pausing withdrawals and halting operations, Voyager Digital (a crypto broker) filing for bankruptcy, the collapse of the Luna cryptocurrency and many similar cases across the world are shaking up investors.

Archit Gupta, Founder & CEO of Clear, says the price of Bitcoin, the first and most prominent crypto, rose to $68,000 in November 2021. Shortly after, it nearly halved in price to $35,000 and continued to decline. Today it stands at around $21,000. This reflects the volatility and speculation in crypto markets. Given the macroeconomic environment, market volatility, and mass exodus of investors from the market, the scales of demand and supply are heavily tipped, accelerating the risk even further.

To top it all, the new tax rules add to the woes of investors. The government announced that 1% TDS must be deducted on all crypto transfers over ₹10,000, which will increase the regulatory and compliance burden. "The tax rules have further increased the challenges as they may lock up the required liquidity to revive crypto markets," said Archit Gupta.

He added that given how people invest in crypto with little knowledge and more influence, one must appreciate these regulations as they will only help secure investors' money.

Vikas Singhania, CEO of TradeSmart, says that apart from TDS, brokerage and GST charges have added more risk to trading in cryptocurrencies.

"The TDS of one percent on cryptocurrency implemented from 1st July is a dampener for trading in the asset class. While it may not affect investing volumes, trading volume in the sector will be surely hit. Just an example of how it will impact the trader: if a trader takes 10 trades in a month, he will have to earn at least 10 percent on these trades cumulatively, just to recover the TDS cost," said Singhania.

"On top of it, the brokerage, and GST charges have added more risk to trading in cryptocurrencies. Whatever residual profits are left will now be subjected to capital gains and other charges, making a profitable living off cryptocurrencies more difficult for investors," he said.

Meanwhile, Bitcoin, the world's largest and most popular cryptocurrency, was trading at $19,925, down more than 3%. Bitcoin is more likely to tumble to $10,000, cutting its value roughly in half, than it is to rally back to $30,000, according to 60% of the 950 investors who responded to the latest MLIV Pulse survey. Forty percent saw it going the other way. Bitcoin has already lost more than two-thirds of its value since hitting nearly $69,000 in November and hasn't traded as low as $10,000 since September 2020.

Read the original:
As Bitcoin price falls, is cryptocurrency still worth buying? | Mint - Mint

Jarren Duran moved to right field with Jackie Bradley Jr. in center as Boston Red Sox look to avoid sweep Thu – MassLive.com

ST. PETERSBURG, Fla. -- For the first time since June 4, Jarren Duran is playing in right field for the Red Sox on Thursday night.

Boston moved Duran to right and put Jackie Bradley Jr. in center for its series finale against the Rays at Tropicana Field. All season, it has been Duran in center with Bradley manning right field. But with an eye on defense, manager Alex Cora decided to make the switch as the Red Sox look to avoid being swept in a four-game series.

"I think here, it's very spacious and Jackie's one of the best defenders in the big leagues," Cora said. "He showed it yesterday. Put the kid (Duran) in right field. He should be good. One thing about him, he has been really good to his right so far since he got called up. I'm not saying he's struggled to his left but he has been better to his right."

Cora said the Red Sox would do the same thing in New York, where they'll start a three-game series Friday night. Bradley is a far superior defender to Duran, who has mostly played in center throughout his minor league career.

Jeter Downs is back at second base in place of Trevor Story, who is out for the second straight night with a right hand contusion. Downs is batting ninth behind Bradley. Kevin Plawecki is catching righty Kutter Crawford and Franchy Cordero is at first base.

First pitch is scheduled for 7:10 p.m. ET.

FIRST PITCH: 7:10 p.m. ET

TV CHANNEL: NESN, MLB Network

LIVE STREAM: fuboTV - If you have cable and live in the New England TV market, you can use your login credentials to watch via NESN on mobile and WiFi-enabled devices. If you don't have cable, you can watch the game via fuboTV, in New England | Watch NESN Live

RADIO: WEEI 93.7 FM

PITCHING PROBABLES: RHP Kutter Crawford (2-2, 4.50 ERA) vs. RHP Drew Rasmussen (5-3, 3.11 ERA)

RED SOX LINEUP:

1. RF Jarren Duran

2. 3B Rafael Devers

3. DH J.D. Martinez

4. SS Xander Bogaerts

5. LF Alex Verdugo

6. 1B Franchy Cordero

7. C Kevin Plawecki

8. CF Jackie Bradley Jr.

9. 2B Jeter Downs

RAYS LINEUP:

1. 3B Yandy Díaz

2. 1B Ji-Man Choi

3. DH Harold Ramírez

4. 2B Jonathan Aranda

5. C Christian Bethancourt

6. RF Josh Lowe

7. SS Taylor Walls

8. LF Luke Raley

9. CF Brett Phillips

Related links:

David Ortiz urges Red Sox to sign Rafael Devers, Xander Bogaerts: 'they represent Boston better than anyone else, we have to lock them in'

Boston Red Sox promote former first-round pick Jay Groome to Triple-A Worcester

Could Boston Red Sox trade a lefty reliever? Examining the case to deal Austin Davis or Josh Taylor before Aug. 2 deadline

Here is the original post:
Jarren Duran moved to right field with Jackie Bradley Jr. in center as Boston Red Sox look to avoid sweep Thu - MassLive.com

Machine learning begins to understand the human gut – University of Michigan News

The robot in the Venturelli Lab that creates the microbial communities used to train and test the algorithms. Image courtesy: Venturelli Lab

Study: Recurrent neural networks enable design of multifunctional synthetic human gut microbiome dynamics (DOI: 10.7554/eLife.73870)

The communities formed by human gut microbes can now be predicted more accurately with a new computer model developed in a collaboration between biologists and engineers, led by the University of Michigan and the University of Wisconsin.

The making of the model also suggests a route toward scaling from the 25 microbe species explored to the thousands that may be present in human digestive systems.

"Whenever we increase the number of species, we get an exponential increase in the number of possible communities," said Alfred Hero, the John H. Holland Distinguished University Professor of Electrical Engineering and Computer Science at the University of Michigan and co-corresponding author of the study in the journal eLife.

"That's why it's so important that we can extrapolate from the data collected on a few hundred communities to predict the behaviors of the millions of communities we haven't seen."

While research continues to unveil the multifaceted ways that microbial communities influence human health, probiotics often don't live up to the hype. We don't have a good way of predicting how the introduction of one strain will affect the existing community. But machine learning, an approach to artificial intelligence in which algorithms learn to make predictions based on data sets, could help change that.

"Problems of this scale required a complete overhaul in terms of how we model community behavior," said Mayank Baranwal, adjunct professor of systems and control engineering at the Indian Institute of Technology, Bombay, and co-first author of the study.

He explained that the new algorithm could map out the entire landscape of 33 million possible communities in minutes, compared to the days to months needed for conventional ecological models.

Integral to this major step was Ophelia Venturelli, assistant professor of biochemistry at the University of Wisconsin and co-corresponding author of the study. Venturelli's lab runs experiments with microbial communities, keeping them in low-oxygen environments that mimic the environment of the mammalian gut.

Her team created hundreds of different communities with microbes that are prevalent in the human large intestine, emulating the healthy state of the gut microbiome. They then measured how these communities evolved over time and the concentrations of key health-relevant metabolites, or chemicals produced as the microbes break down foods.

"Metabolites are produced in very high concentrations in the intestines," Venturelli said. "Some are beneficial to the host, like butyrate. Others have more complex interactions with the host and gut community."

The machine learning model enabled the team to design communities with desired metabolite profiles. This sort of control may eventually help doctors discover ways to treat or protect against diseases by introducing the right microbes.

While human gut microbiome research has a long way to go before it can offer this kind of intervention, the approach developed by the team could help get there faster. Machine learning algorithms often are produced with a two-step process: accumulate the training data, and then train the algorithm. But the feedback step added by Hero and Venturelli's team provides a template for rapidly improving future models.

Hero's team initially trained the machine learning algorithm on an existing data set from the Venturelli lab. The team then used the algorithm to predict the evolution and metabolite profiles of new communities that Venturelli's team constructed and tested in the lab. While the model performed very well overall, some of the predictions identified weaknesses in the model performance, which Venturelli's team shored up with a second round of experiments, closing the feedback loop.

"This new modeling approach, coupled with the speed at which we could test new communities in the Venturelli lab, could enable the design of useful microbial communities," said Ryan Clark, co-first author of the study, who was a postdoctoral researcher in Venturelli's lab when he ran the microbial experiments. "It was much easier to optimize for the production of multiple metabolites at once."

The group settled on a long short-term memory neural network for the machine learning algorithm, which is good for sequence prediction problems. However, like most machine learning models, the model itself is a black box. To figure out what factors went into its predictions, the team used the mathematical map produced by the trained algorithm. It revealed how each kind of microbe affected the abundance of the others and what kinds of metabolites it supported. They could then use these relationships to design communities worth exploring through the model and in follow-up experiments.
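
To make the approach concrete, the snippet below is a schematic Keras sketch of the kind of recurrent model described above: an LSTM that maps species-abundance trajectories to predicted abundances and metabolite concentrations over time. The layer sizes, input shapes and synthetic training data are illustrative assumptions, not the architecture published in the eLife study.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_species, n_metabolites, timesteps = 25, 4, 6   # 25 species as in the study; the rest assumed

model = keras.Sequential([
    layers.Input(shape=(timesteps, n_species)),             # abundance trajectories
    layers.LSTM(64, return_sequences=True),                 # recurrent "memory" of community dynamics
    layers.TimeDistributed(
        layers.Dense(n_species + n_metabolites, activation="relu")  # abundances + metabolites
    ),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in for measured community data (real inputs would come from lab experiments).
X = np.random.rand(200, timesteps, n_species)
y = np.random.rand(200, timesteps, n_species + n_metabolites)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```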

The model can also be applied to different microbial communities beyond medicine, including accelerating the breakdown of plastics and other materials for environmental cleanup, production of valuable compounds for bioenergy applications, or improving plant growth.

This study was supported by the Army Research Office and the National Institutes of Health.

Hero is also the R. Jamison and Betty Williams Professor of Engineering, and a professor of biomedical engineering and statistics. Venturelli is also a professor of bacteriology and chemical and biological engineering. Clark is now a senior scientist at Nimble Therapeutics. Baranwal is also a scientist in the division of data and decision sciences at Tata Consultancy Services Research and Innovation.

Read this article:
Machine learning begins to understand the human gut - University of Michigan News

Using machine learning to assess the impact of deep trade agreements | VOX, CEPR Policy Portal – voxeu.org

Holger Breinlich, Valentina Corradi, Nadia Rocha, João M.C. Santos Silva, Thomas Zylkin 08 July 2022

Preferential trade agreements (PTAs) have become more frequent and increasingly complex in recent decades, making it important to assess how they impact trade and economic activity. Modern PTAs contain a host of provisions besides tariff reductions in areas as diverse as services trade, competition policy, or public procurement. To illustrate this proliferation of non-tariff provisions, Figure 1 shows the share of PTAs in force and notified to the WTO up to 2017 that cover selected policy areas. More than 40% of the agreements include provisions such as investment, movement of capital and technical barriers to trade. And more than two-thirds of agreements cover areas such as competition policy or trade facilitation.

Figure 1 Share of PTAs that cover selected policy areas

Note: Figure shows the share of PTAs that cover a policy area. Source: Hofmann, Osnago and Ruta (2019).

Recent research has tried to move beyond estimating the overall impact of PTAs on trade and tried to establish the relative importance of individual PTA provisions (e.g. Kohl et al. 2016, Mulabdic et al. 2017, Dhingra et al. 2018, Regmi and Baier 2020). However, such attempts face the difficulty that the number of provisions included in PTAs is very large compared to the number of PTAs available to study (see Figure 2), making it difficult to separate their individual impacts on trade flows.

Figure 2 The number of provisions in PTAs over time

Source: Mattoo et al. (2020).

Researchers have tried to address the growing complexity of PTAs in different ways. For example, Mattoo et al. (2017) use the count of provisions in an agreement as a measure of its depth and check whether the increase in trade flows after a given PTA is related to this measure. Dhingra et al. (2018) group provisions into categories (such as services, investment, and competition provisions) and examine the effect of these provision bundles on trade flows. Obviously, these approaches come at the cost of not allowing the identification of the effect of individual provisions within each group.

In recent research (Breinlich et al. 2022), we instead adapt a technique from the machine learning literature, the least absolute shrinkage and selection operator (lasso), to the context of selecting the most important provisions and quantifying their impact. More precisely, we adapt the rigorous lasso method of Belloni et al. (2016) to the estimation of state-of-the-art gravity models for trade (e.g. Yotov et al. 2016, Weidner and Zylkin 2021).¹

Unlike traditional estimation methods such as least squares and maximum likelihood, which are based on optimising the in-sample fit of the estimated model, lasso balances in-sample fit with parsimony to optimise the out-of-sample fit, simultaneously selecting the more important regressors and estimating their effect on trade flows. In our context, the lasso works by shrinking the effects of individual provisions towards zero and progressively removing those that do not have a significant impact on the fit of the model (for an intuitive description, see Breinlich et al. 2021; for more details, see Breinlich et al. 2022). The rigorous lasso of Belloni et al. (2016), a relatively recent variant of the lasso, refines this approach by taking into account the idiosyncratic variance of the data and by only keeping variables that are found to have a statistically large impact on the fit of the model.
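
To make the mechanics concrete, here is a minimal scikit-learn sketch of lasso-based provision selection on synthetic data. The rigorous (plug-in) lasso of Belloni et al. (2016) is not available in scikit-learn, and the actual analysis embeds the penalty in a gravity model with fixed effects, so this is only an illustration of how the penalty shrinks most provision coefficients exactly to zero; all sizes, effect values and the penalty level are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_pairs, n_provisions = 500, 305                  # 305 provisions as in the data; the rest assumed
X = rng.integers(0, 2, size=(n_pairs, n_provisions)).astype(float)   # provision dummies
true_effects = np.zeros(n_provisions)
true_effects[[3, 40, 120]] = [0.30, 0.20, 0.15]   # only a handful of provisions matter
log_trade = X @ true_effects + rng.normal(scale=0.5, size=n_pairs)

lasso = Lasso(alpha=0.05)                         # the penalty level is a tuning choice
lasso.fit(X, log_trade)
selected = np.flatnonzero(lasso.coef_ != 0)
print("provisions kept by the lasso:", selected)
```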

Because the rigorous lasso tends to favour very parsimonious models, it may miss some important provisions. To address this issue, we introduce two methods to identify potentially important provisions that may have been missed by the rigorous lasso. One of the methods, which we call iceberg lasso, involves regressing each of the provisions selected by the rigorous lasso on all other provisions, with the purpose of identifying relevant variables that were initially missed due to their collinearity with the provisions selected in the initial step. The other method, termed bootstrap lasso, augments the set of variables selected by the plug-in lasso with the variables selected when the rigorous lasso is bootstrapped.
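
As a rough sketch of the iceberg step, the function below takes the provisions selected in a first stage (such as `selected` from the previous snippet) and regresses each of them on all other provisions, adding any strongly collinear provisions to the candidate set. This is a simplified reading of the procedure rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def iceberg_step(X, selected, alpha=0.05):
    """Augment a first-stage lasso selection with provisions collinear to it."""
    augmented = set(selected)
    for j in selected:
        others = [k for k in range(X.shape[1]) if k != j]
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])   # regress provision j on all others
        augmented.update(others[i] for i in np.flatnonzero(fit.coef_ != 0))
    return sorted(augmented)
```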

We use the World Bank's database on deep trade agreements, where we observe 283 PTAs and 305 essential provisions grouped into the 17 categories detailed in Figure 1.² The rigorous lasso selects eight provisions most strongly associated with increasing trade flows following the implementation of the respective PTAs. As detailed in Table 1, these provisions are in the areas of anti-dumping, competition policy, technical barriers to trade, and trade facilitation.

Table 1 Provisions selected by the rigorous lasso

Building on these results, the iceberg lasso procedure identifies a set of 42 provisions, and the bootstrap lasso identifies between 30 and 74 provisions that may impact trade, depending on how it is implemented. Therefore, the iceberg lasso and bootstrap lasso methods select sets of provisions that are small enough to be interpretable and large enough to give us some confidence that they include the more relevant provisions. In contrast, the more traditional implementation of the lasso based on cross-validation selects 133 provisions.

Reassuringly, both the iceberg lasso and bootstrap lasso select similar sets of provisions, mainly related to anti-dumping, competition policy, subsidies, technical barriers to trade, and trade facilitation. Therefore, although our results do not have a causal interpretation and, consequently, we cannot be certain of exactly which provisions are more important, we can be reasonably confident that provisions in these areas stand out as having a positive effect on trade.

Besides identifying the set of provisions that are more likely to have an impact on trade, our methods also provide an estimate of the increase in trade flows associated with the selected provisions. We use these results to estimate the effects of different PTAs that have already been implemented. Table 2 summarises the estimated effects for selected PTAs obtained using the different methods we introduce. As, for example, in Baier et al. (2017 and 2019), we find a wide variety of effects, ranging from very large impacts in agreements that include many of the selected provisions to no effect at all in agreements that do not include any.³

Table 2 also shows that different methods can lead to substantially different estimates, and therefore these results need to be interpreted with caution. As noted above, our results do not have a causal interpretation. Therefore the accuracy of the predicted effects of individual PTAs will depend on whether the selected provisions have a causal impact on trade or serve as a signal of the presence of provisions that have a causal effect. When this condition holds, the predictions based on this method are likely to be reasonably accurate, and in Breinlich et al. (2022), we report simulation results suggesting that this is the case. However, it is possible to envision scenarios where predictions based on our methods fail dramatically; for example, it could be the case that a PTA is incorrectly measured to have zero impact despite having many of the true causal provisions. Finally, we note that our results can also be used to predict the effects of new PTAs, but the same caveats apply.

Table 2 Partial effects for selected PTAs estimated by different methods

We have presented results from an ongoing research project in which we have developed new methods to estimate the impact of individual PTA provisions on trade flows. By adapting techniques from the machine learning literature, we have developed data-driven methods to select the most important provisions and quantify their impact on trade flows. While our approach cannot fully resolve the fundamental problem of identifying the provisions with a causal impact on trade, we were able to make considerable progress. In particular, our results show that provisions related to anti-dumping, competition policy, subsidies, technical barriers to trade, and trade facilitation procedures are likely to enhance the trade-increasing effect of PTAs. Building on these results, we were able to estimate the effects of individual PTAs.

Authors' note: This column updates and extends Breinlich et al. (2021). See also Fernandes et al. (2021).

Baier, S L, Y V Yotov and T Zylkin (2017), "One size does not fit all: On the heterogeneous impact of free trade agreements", VoxEU.org, 28 April.

Baier, S L, Y V Yotov and T Zylkin (2019), "On the Widely Differing Effects of Free Trade Agreements: Lessons from Twenty Years of Trade Integration", Journal of International Economics 116: 206-228.

Belloni, A, V Chernozhukov, C Hansen and D Kozbur (2016), "Inference in High Dimensional Panel Models with an Application to Gun Control", Journal of Business & Economic Statistics 34: 590-605.

Breinlich, H, V Corradi, N Rocha, M Ruta, J M C Santos Silva and T Zylkin (2021), "Using Machine Learning to Assess the Impact of Deep Trade Agreements", in A M Fernandes, N Rocha and M Ruta (eds), The Economics of Deep Trade Agreements, CEPR Press.

Breinlich, H, V Corradi, N Rocha, M Ruta, J M C Santos Silva and T Zylkin (2022), "Machine Learning in International Trade Research - Evaluating the Impact of Trade Agreements", CEPR Discussion paper 17325.

Dhingra, S, R Freeman and E Mavroeidi (2018), "Beyond tariff reductions: What extra boost to trade from agreement provisions?", LSE Centre for Economic Performance Discussion Paper 1532.

Fernandes, A, N Rocha and M Ruta (2021), "The Economics of Deep Trade Agreements: A New eBook", VoxEU.org, 23 June.

Hofmann, C, A Osnago and M Ruta (2019), "The Content of Preferential Trade Agreements", World Trade Review 18(3): 365-398.

Kohl, T, S Brakman and H Garretsen (2016), "Do trade agreements stimulate international trade differently? Evidence from 296 trade agreements", The World Economy 39: 97-131.

Mattoo, A, A Mulabdic and M Ruta (2017), "Trade creation and trade diversion in deep agreements", Policy Research Working Paper Series 8206, World Bank, Washington, DC.

Mattoo, A, N Rocha and M Ruta (2020), Handbook of Deep Trade Agreements, Washington, DC: World Bank.

Mulabdic, A, A Osnago and M Ruta (2017), "Deep integration and UK-EU trade relations," World Bank Policy Research Working Paper Series 7947.

Regmi, N and S Baier (2020), "Using Machine Learning Methods to Capture Heterogeneity in Free Trade Agreements," mimeograph.

Weidner, M and T Zylkin (2021), "Bias and Consistency in Three-Way Gravity Models", Journal of International Economics: 103513.

Yotov, Y V, R Piermartini, J A Monteiro and M Larch (2016), An advanced guide to trade policy analysis: The structural gravity model, Geneva: World Trade Organization.

1 Our approach complements the one adopted by Regmi and Baier (2020), who use machine learning tools to construct groups of provisions and then use these clusters in a gravity equation. The main difference between the two approaches is that Regmi and Baier (2020) use what is called an unsupervised machine learning method, which uses only information on the provisions to form the clusters. In contrast, we select the provisions using a supervised method that also considers the impact of the provisions on trade.

2 Essential provisions in PTAs include the set of substantive provisions (those that require specific integration/liberalisation commitments and obligations) plus the disciplines among procedures, transparency, enforcement or objectives, which are required to achieve the substantive commitments (Mattoo et al. 2020).

3 It is worth noting that lasso based on the traditional cross-validation approach leads to extremely dispersed estimations of trade effects, with some of them being clearly implausible. This further illustrates the superiority of the methods we propose.

Here is the original post:
Using machine learning to assess the impact of deep trade agreements | VOX, CEPR Policy Portal - voxeu.org

Podcast: Why Deep Learning Could Expedite the Next AI Winter Machine Learning Times – The Machine Learning Times

Welcome to the next episode of The Machine Learning Times Executive Editor Eric Siegel's podcast, The Doctor Data Show. Click here for all episodes and links to listen on your preferred platform. "Why Deep Learning Could Expedite the Next AI Winter" podcast episode description: Deep learning, the most important advancement in machine learning, could inadvertently expedite the next AI winter. The problem is that, although it increases value and capabilities, it may also be having the effect of increasing hype even more. This episode covers four reasons deep learning increases the hype-to-value ratio of machine learning.

Go here to see the original:
Podcast: Why Deep Learning Could Expedite the Next AI Winter Machine Learning Times - The Machine Learning Times

Reforming Prior Authorization with AI and Machine Learning – insideBIGDATA

Healthcare providers are growing increasingly more comfortable with using AI-enabled software to improve patient care, from analyzing medical imaging to managing chronic diseases. While health plans have been slower to adopt AI and machine learning (ML), many are beginning to rely on these technologies in administrative areas such as claims management, and 62% of payers rank improving their AI/ML capabilities as an extremely high priority.

The process by which health plans manage the cost of members' benefits is especially ripe for technological innovation. Health plans often require providers to obtain advance approval, or prior authorization (PA), for a wide range of procedures, services, and medications. The heavily manual PA process drives unnecessary resource cost and delays in care, which can lead to serious adverse events for patients.

In recent years, there has been an emphasis on reducing the administrative burden of PAs via digitization. Some health plans are moving beyond automation by leveraging AI and ML technologies to redefine the care experience, helping their members receive evidence-based, high-value care as quickly as possible. These technologies are able to streamline the administrative tasks of PA while continually refining customized, patient-specific care paths to drive better outcomes, ease provider friction, and accelerate patient access.

Providing clinical context for PA requests

Traditionally, PA requests are one-off transactions, disconnected from the patient's longitudinal history. Physicians enter the requested clinical information, which is already captured in the electronic health record (EHR), into the health plan's PA portal and await approval or denial. Although FHIR standards have provided new interoperability for the exchange of clinical data, these integrations are rarely sufficient to complete a PA request, as much of the pertinent information resides in unstructured clinical notes.

Using natural language processing, ML models can automatically extract this patient-specific data from the EHR, providing the health plan with a more complete patient record. By using ML and interoperability to survey the patient's unique clinical history, health plans can better contextualize PA requests in light of the patient's past and ongoing treatment.
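
As a purely illustrative sketch (not any specific vendor's pipeline), the snippet below shows the general idea of turning unstructured clinical-note text into structured fields that can accompany a PA request. Production systems would rely on trained clinical NLP models rather than hand-written patterns, and the field names and patterns here are assumptions.

```python
import re

# Toy clinical note; a real one would come from the EHR.
note = ("MRI of the left knee performed 2022-05-14 shows a complex meniscal tear. "
        "Patient has completed 6 weeks of physical therapy without improvement.")

patterns = {
    "imaging_study": r"\b(MRI|CT|X-ray)\b",
    "imaging_date": r"\b\d{4}-\d{2}-\d{2}\b",
    "weeks_of_conservative_therapy": r"(\d+)\s+weeks of physical therapy",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, note)
    # Use the captured group when one exists, otherwise the whole match.
    extracted[field] = (match.group(1) if match and match.groups() else
                        match.group(0) if match else None)

print(extracted)
# e.g. {'imaging_study': 'MRI', 'imaging_date': '2022-05-14', 'weeks_of_conservative_therapy': '6'}
```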

Anticipating the entire episode of care

An AI-driven authorization process can also identify episode-based care paths based on the patient's diagnosis, suggesting additional services that might be appropriate for a bundled authorization. Instead of submitting separate PAs for the same patient, physicians can submit a consolidated authorization for multiple services across a single episode of care, receiving up-front approval.

Extracted clinical data can also help health plans develop more precise adjudication rules for these episode-based care paths. Health plans can create patient sub-populations that share clinical characteristics, enabling the direct comparison of patient cohorts in various treatment contexts. As patient data is collected, applied ML algorithms can identify the best outcomes for specific clinical scenarios. Over time, an intelligent authorization platform can aggregate real-world data to test and refine condition-specific care paths for a wide range of patient populations.

Influencing care choices to improve outcomes

Health plans can also use AI to encourage physicians to make the most clinically appropriate, high-value care decisions. As a PA request is entered, ML models can evaluate both the completeness and the appropriateness of the provided information in real time. For example, an ML model might detect that a physician has neglected to provide imaging records within the clinical notes, triggering an automated prompt for that data.

An ML model can also detect when the provider's PA request deviates from best practices, triggering a recommendation for an alternative care choice. For example, an intelligent authorization platform might suggest that a physician select an outpatient setting instead of an inpatient setting based on the type of procedure and the clinical evidence. By using AI to help physicians build a more clinically appropriate case, health plans can reduce denials and decrease unnecessary medical expenses, while also improving patient outcomes.

Of course, for these clinical recommendations to be accepted by physicians, health plans must provide greater transparency into the criteria they use. While 98% of health plans attest that they use peer-reviewed, evidence-based criteria to evaluate PA requests, 30% of physicians believe that PA criteria are rarely or never evidence-based. To win physician trust, health plans that use technology to provide automatically generated care recommendations must also provide full transparency into the evidence behind their medical necessity criteria.

Prioritizing cases for faster clinical review

Finally, the application of advanced analytics and ML can help health plans drive better PA auto-determination rates by identifying which requests require a clinical review and which do not. This technology can also help case managers prioritize their workload, as it enables the flagging of high-impact cases as well as cases which are less likely to impact patient outcomes or medical spend.

Using a health plan's specific policy guidelines, an intelligent authorization platform can use ML and natural language processing to detect evidence that the criteria have been met, linking relevant text within the clinical notes to the plan's policy documentation. Reviewers can quickly pinpoint the correct area of focus within the case, speeding their assessment.

The application of AI and ML to the onerous PA process can relieve both physicians and health plans of the repetitive, manual administrative work involved in submitting and reviewing these requests. Most importantly, these intelligent technologies transform PA from a largely bureaucratic exercise into a process that is capable of ensuring that patients receive the highest quality of care, as quickly and painlessly as possible.

About the Author

Niall O'Connor is the chief technology officer at Cohere Health, a utilization management technology company that aligns patients, physicians, and health plans on evidence-based treatment plans at the point of diagnosis.

Read the original:
Reforming Prior Authorization with AI and Machine Learning - insideBIGDATA

Speed-up hyperparameter tuning in deep learning with Keras hyperband tuner – Analytics India Magazine

The performance of machine learning algorithms is heavily dependent on selecting a good collection of hyperparameters. The Keras Tuner is a package that assists you in selecting the best set of hyperparameters for your application. The process of finding the optimal collection of hyperparameters for your machine learning or deep learning application is known as hyperparameter tuning. Hyperband is a framework for tuning hyperparameters that helps speed up the process. This article focuses on understanding the Hyperband framework.

Hyperparameters are not model parameters and cannot be learned directly from data. When we optimize a loss function with something like gradient descent, we learn model parameters during training. Let's talk about Hyperband and try to understand the need for its creation.

The approach of tweaking the hyperparameters of machine learning algorithms is known as hyperparameter optimization (HPO). Powerful machine learning algorithms feature numerous, diverse, and complicated hyperparameters that produce a massive search space, and deep learning, which forms the basis of many modern applications, has a considerably broader search space than typical ML algorithms. Tuning over such a large search space is a difficult task, so data-driven strategies, rather than manual approaches, must be used to tackle HPO problems.

By framing hyperparameter optimization as a pure-exploration, adaptive resource allocation problem (how to distribute resources among randomly chosen hyperparameter configurations), a novel configuration evaluation technique was devised, known as Hyperband. It allocates resources using a logical early-stopping technique, allowing it to test orders of magnitude more configurations than black-box processes such as Bayesian optimization methods. Unlike previous configuration assessment methodologies, Hyperband is a general-purpose tool that makes few assumptions.

The capacity of Hyperband to adapt to unknown convergence rates and the behaviour of validation losses as a function of the hyperparameters was proved by the developers in the theoretical study. Furthermore, for a range of deep-learning and kernel-based learning issues, Hyperband is 5 to 30 times quicker than typical Bayesian optimization techniques. In the non-stochastic environment, Hyperband is one solution with properties similar to the pure-exploration, infinite-armed bandit issue.

Hyperparameters are inputs to a machine learning algorithm that govern how well the algorithm generalizes to unseen data. Because of their growing number, the tuning parameters associated with these models are difficult to set by standard optimization techniques.

In an effort to develop more efficient search methods, Bayesian optimization approaches that focus on optimizing hyperparameter configuration selection have lately dominated the subject of hyperparameter optimization. By picking configurations in an adaptive way, these approaches seek to discover good configurations faster than typical baselines such as random search. These approaches, however, address the fundamentally difficult problem of fitting and optimizing a high-dimensional, non-convex function with uncertain smoothness and perhaps noisy evaluations.

The goal of an orthogonal approach to hyperparameter optimization is to accelerate configuration evaluation. These methods are computationally adaptive, providing greater resources to promising hyperparameter combinations while swiftly removing bad ones. The size of the training set, the number of features, or the number of iterations for iterative algorithms are all examples of resources.

These techniques seek to analyze orders of magnitude more hyperparameter configurations than approaches that evenly train all configurations to completion, hence discovering appropriate hyperparameters rapidly. The hyperband is designed to accelerate the random search by providing a simple and theoretically sound starting point.

Hyperband uses the SuccessiveHalving technique introduced for hyperparameter optimization as a subroutine and enhances it. The original SuccessiveHalving method is named after the idea behind it: uniformly distribute a budget to a collection of hyperparameter configurations, evaluate the performance of all configurations, discard the worst half, and repeat until only one configuration remains. More promising combinations receive exponentially more resources from the algorithm.
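
A compact Python sketch of this idea is shown below, assuming `evaluate(config, budget)` stands in for "train this configuration with this much resource and return a validation score". The generalised version keeps the best 1/eta fraction in each round rather than exactly half; the toy objective and configuration space are invented for illustration.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Repeatedly evaluate configurations, keep the best 1/eta, and grow the budget."""
    budget = min_budget
    while len(configs) > 1:
        scores = {c: evaluate(c, budget) for c in configs}
        keep = max(1, len(configs) // eta)                        # discard the rest
        configs = sorted(configs, key=scores.get, reverse=True)[:keep]
        budget *= eta                                             # survivors get more resources
    return configs[0]

# Toy usage: "configurations" are learning rates, scored by a made-up objective.
best = successive_halving(
    configs=[10 ** random.uniform(-4, -1) for _ in range(27)],
    evaluate=lambda lr, budget: -abs(lr - 3e-3) + 0.001 * budget,
)
print("best configuration:", best)
```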

The Hyperband algorithm is made up of two parts: an inner loop that runs SuccessiveHalving with a fixed number of configurations and a fixed minimum resource per configuration, and an outer loop that iterates over different values of these quantities, one pair per bracket.

Each loop that executes SuccessiveHalving within Hyperband is referred to as a bracket. Each bracket is intended to consume a portion of the entire resource budget and corresponds to a distinct tradeoff between the number of configurations n and the budget per configuration B/n. As a result, a single Hyperband execution has a limited budget. Two inputs are required for Hyperband: R, the maximum amount of resources that can be allocated to any single configuration, and eta, the factor by which the number of configurations is reduced in each round of SuccessiveHalving.

The two inputs determine how many distinct brackets are examined, each with a different number of starting configurations. Hyperband begins with the most aggressive bracket, which sets the number of configurations to maximize exploration while requiring that at least one configuration be allotted R resources. Each successive bracket decreases the number of configurations by a factor until the last bracket, which allocates the maximum resources R to each of its configurations. As a result, Hyperband performs a geometric search over the average budget per configuration, eliminating the requirement to choose the number of configurations for a set budget.

Since the arms are independent and sampled at random, Hyperband has the potential to be parallelized. The simplest parallelization approach is to distribute individual SuccessiveHalving brackets to separate machines. With this article, we have understood this bandit-based hyperparameter tuning algorithm and how it differs from Bayesian optimization.
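
Putting this into practice with the Keras Tuner mentioned at the start of the article might look like the sketch below. The model, search space and MNIST dataset are illustrative choices rather than recommendations; max_epochs plays the role of the maximum resource R and factor plays the role of the reduction factor eta.

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    model = keras.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(hp.Int("units", min_value=32, max_value=512, step=32),
                     activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.Hyperband(build_model, objective="val_accuracy",
                     max_epochs=30, factor=3,                 # R and eta, in Hyperband terms
                     directory="kt_demo", project_name="hyperband_example")

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train / 255.0
tuner.search(x_train, y_train, validation_split=0.2,
             callbacks=[keras.callbacks.EarlyStopping(patience=2)])
best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.values)
```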

Original post:
Speed-up hyperparameter tuning in deep learning with Keras hyperband tuner - Analytics India Magazine

In Ukraine, machine-learning algorithms and big data scans used to identify war-damaged infrastructure – United Nations Development Programme

The dynamics during a crisis can quickly change, requiring critical information to inform decision-making in a timely fashion. If the information is too little, it is usually of no use. If there is too much it may need extensive resources and timely processing to generate actionable insights.

In Ukraine, identifying the size, type and scope of damaged infrastructure is essential for determining locations and people in need, and to inform the necessary allocation of resources needed for rebuilding. Inquiries about the date, time, location, cause as well as type of damage are generally part of such an assessment. At times obtaining the most accurate and timely information can be a challenge.

To help address this issue, the UNDP Country Office in Ukraine is developing a model that uses machine learning and natural language processing techniques to analyse thousands of reports and extract the most relevant information in time to inform strategic decisions.

Classifying key infrastructure

Text mining is a common data science technique; the added value of this model is its customized ability to analyse the text of report narratives and classify them into key infrastructure types. The process relies on ACLED, an open-source database that collates global real-time event data. For the pilot testing of its infrastructure assessment model, UNDP used 8,727 reports on military attacks and subsequent events, time-stamped between 24 February and 24 June 2022 (the first four months of the war).

Particularly absent from its database was a taxonomy to categorize the broad range of infrastructure references. Such classification saves time with information processing and can help narrow the scope of assessment if there are specific areas of interest and priorities.

Drawing on its combined experience from other crisis zones, UNDP developed a model to classify the range of damaged infrastructure into nine categories: industrial, logistics, power/electricity, telecom, agriculture, health, education, shelter and businesses.

If a report, for example, indicated that a residential building in Kyiv was destroyed by military action, the model would classify the reported event in the most appropriate category - in this case, shelter.

The mechanics of the model

A set of relevant keywords was chosen for each of the nine infrastructure types. The keywords were then compared to the text of the reports. Both the keywords (used to represent a particular type of infrastructure) and the reports were transformed into numerical vectors, so that each type of infrastructure had one vector and each report had one vector.

The main goal was to measure the similarity between the two vectors, a quantity known as cosine similarity: the shorter the distance between a report and an infrastructure type, the stronger the semantic relationship between them.

These examples further illustrate the approach:

Text: On 26 February 2022, a bridge was blown up near village of Stoyanka, Kyiv.

The model indicated a valid 34 percent similarity with the Logistics classification.

Text: On 19 May 2022, a farmer on a tractor hit a mine near Mazhuhivka village, Chernihiv region as a result of which he suffered a leg injury.

The model indicated a valid 32 percent similarity with the Agriculture classification.

A minimum threshold of 18 percent was set to determine the validity of the semantic relationship between an infrastructure type and a report. Both examples meet this threshold.
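
The snippet below is a minimal sketch of this keyword-versus-report comparison using TF-IDF vectors and cosine similarity from scikit-learn. The category keyword lists and the vectorizer are illustrative assumptions (the UNDP model may well use richer text representations), so the scores will not reproduce the percentages quoted above, but the thresholding logic is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical keyword lists standing in for three of the nine categories.
categories = {
    "logistics": "bridge road railway port convoy truck",
    "agriculture": "farm farmer tractor grain field crop",
    "shelter": "residential building apartment house homes",
}
report = "On 26 February 2022, a bridge was blown up near village of Stoyanka, Kyiv."

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(categories.values()) + [report])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()   # report vs each category

threshold = 0.18   # minimum similarity for a valid match, per the article
for (name, _), score in zip(categories.items(), scores):
    status = "match" if score >= threshold else "below threshold"
    print(f"{name}: {score:.2f} ({status})")
```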

Besides pairing a report with its corresponding infrastructure type, the model by default has also helped identify the actors involved, the time, the specific location and the cause of each instance of infrastructure damage. These attributes are already included in ACLED, but the direct association between a report and an infrastructure type helps translate this basic information into more actionable insights.

The snapshot below is a data visualization of the model in action, showing the geographical distribution of infrastructure damage by type, which can be further mapped to understand the causes and actors involved. These insights also play a crucial role in designing response strategies, particularly with respect to the safety and security of the assessment team on the ground.

Replicating the model for different contexts

The utility of this Machine Learning model extends beyond classifying infrastructure types. It can be leveraged in broader humanitarian and development contexts. For this reason, the UNDP Country Office in Ukraine is already replicating the model using more real-time and varied data obtained from Twitter to conduct sentiment analysis and better understand the needs and concerns of affected groups.

The traditional way of manually processing information is not only labour-intensive, but it may fall short of delivering the timely insights needed for informed decision-making, especially given the volume of digital information available nowadays. As the war in Ukraine highlights, being able to uncover timely insights means saving lives.

As an alternative, this model offers speed and efficiency, which can help reduce operational costs in several situations. UNDP's Decision Support Unit, which coordinates assessments internally and in collaboration with a range of partners, is supporting the development of this model. The Infrastructure Semantic Damage Detector is publicly accessible on tinyurl.com/semdam.

For more information, contact Aladdin at aladdin.shamoug@undp.org.

See the original post:
In Ukraine, machine-learning algorithms and big data scans used to identify war-damaged infrastructure - United Nations Development Programme