Coronavirus tests the value of artificial intelligence in medicine – FierceBiotech

Albert Hsiao, M.D., and his colleagues at the University of California, San Diego (UCSD) health system had been working for 18 months on an artificial intelligence program designed to help doctors identify pneumonia on a chest X-ray. When the coronavirus hit the U.S., they decided to see what it could do.

The researchers quickly deployed the application, which dots X-ray images with spots of color where there may be lung damage or other signs of pneumonia. It has now been applied to more than 6,000 chest X-rays, and it's providing some value in diagnosis, said Hsiao, director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.

His team is one of several around the country that has pushed AI programs developed in a calmer time into the COVID-19 crisis to perform tasks like deciding which patients face the greatest risk of complications and which can be safely channeled into lower-intensity care.

The machine-learning programs scroll through millions of pieces of data to detect patterns that may be hard for clinicians to discern. Yet few of the algorithms have been rigorously tested against standard procedures. So while they often appear helpful, rolling out the programs in the midst of a pandemic could be confusing to doctors or even dangerous for patients, some AI experts warn.

"AI is being used for things that are questionable right now," said Eric Topol, M.D., director of the Scripps Research Translational Institute and author of several books on health IT.

Topol singled out a system created by Epic, a major vendor of electronic health record software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.

Epic said the company's model had been validated with data from more than 16,000 hospitalized COVID-19 patients in 21 healthcare organizations. No research on the tool has been published, but, in any case, it was developed to help clinicians make treatment decisions and is not a substitute for their judgment, said James Hickman, a software developer on Epic's cognitive computing team.

Others see the COVID-19 crisis as an opportunity to learn about the value of AI tools.

"My intuition is it's a little bit of the good, bad and ugly," said Eric Perakslis, Ph.D., a data science fellow at Duke University and former chief information officer at the FDA. "Research in this setting is important."

Nearly $2 billion poured into companies touting advancements in healthcare AI in 2019. Investments in the first quarter of 2020 totaled $635 million, up from $155 million in the first quarter of 2019, according to digital health technology funder Rock Health.

At least three healthcare AI technology companies have made funding deals specific to the COVID-19 crisis, including Vida Diagnostics, an AI-powered lung-imaging analysis company, according to Rock Health.

Overall, AI's implementation in everyday clinical care is less common than hype over the technology would suggest. Yet the coronavirus crisis has inspired some hospital systems to accelerate promising applications.

UCSD sped up its AI imaging project, rolling it out in only two weeks.

Hsiao's project, with research funding from Amazon Web Services, the UC system and the National Science Foundation (NSF), runs every chest X-ray taken at its hospital through an AI algorithm. While no data on the implementation have been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Christopher Longhurst, M.D., UCSD Health's chief information officer.

"The results to date are very encouraging, and we're not seeing any unintended consequences," he said. "Anecdotally, we're feeling like it's helpful, not hurtful."

AI has advanced further in imaging than other areas of clinical medicine because radiological images have tons of data for algorithms to process, and more data make the programs more effective, said Longhurst.

But while AI specialists have tried to get AI to do things like predict sepsis and acute respiratory distress - researchers at Johns Hopkins University recently won an NSF grant to use it to predict heart damage in COVID-19 patients - it has been easier to plug it into less risky areas such as hospital logistics.

In New York City, two major hospital systems are using AI-enabled algorithms to help them decide when and how patients should move into another phase of care or be sent home.

At Mount Sinai Health System, an artificial intelligence algorithm pinpoints which patients might be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai.

Freeman described the AI's suggestion as a "conversation starter," meant to help clinicians working on patient cases decide what to do. AI isn't making the decisions.

NYU Langone Health has developed a similar AI model. It predicts whether a COVID-19 patient entering the hospital will suffer adverse events within the next four days, said Yindalon Aphinyanaphongs, M.D., Ph.D., who leads NYU Langone's predictive analytics team.

The model will be run in a four- to six-week trial with patients randomized into two groups: one whose doctors will receive the alerts, and another whose doctors will not. The algorithm should help doctors generate a list of things that may predict whether patients are at risk for complications after they're admitted to the hospital, Aphinyanaphongs said.

Some health systems are leery of rolling out a technology that requires clinical validation in the middle of a pandemic. Others say they didn't need AI to deal with the coronavirus.

Stanford Health Care is not using AI to manage hospitalized patients with COVID-19, said Ron Li, M.D., the center's medical informatics director for AI clinical integration. The San Francisco Bay Area hasn't seen the expected surge of patients who would have provided the mass of data needed to make sure AI works on a population, he said.

Outside the hospital, AI-enabled risk factor modeling is being used to help health systems track patients who arent infected with the coronavirus but might be susceptible to complications if they contract COVID-19.

At Scripps Health in San Diego, clinicians are stratifying patients to assess their risk of getting COVID-19 and experiencing severe symptoms using a risk-scoring model that considers factors like age, chronic conditions and recent hospital visits. When a patient scores seven or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
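
The article does not publish the actual Scripps scoring rules, only that the model weighs factors like age, chronic conditions and recent hospital visits, and that a score of seven or higher triggers a triage call. As a purely illustrative sketch of that kind of additive risk score, with invented factor weights:

```python
# Hypothetical additive risk score. The real Scripps model is not public;
# the weights and caps below are invented for illustration only.
def covid_risk_score(age, chronic_conditions, recent_hospital_visits):
    score = 0
    if age >= 65:
        score += 3          # older adults carry the most risk weight
    elif age >= 50:
        score += 2
    score += 2 * min(chronic_conditions, 3)   # cap the comorbidity contribution
    score += min(recent_hospital_visits, 2)   # recent utilization signals fragility
    return score

def should_triage(score, threshold=7):
    # Patients scoring at or above the threshold get outreach from a triage nurse.
    return score >= threshold
```

A 70-year-old with two chronic conditions and one recent hospital visit would score 3 + 4 + 1 = 8 under these made-up weights and be flagged for outreach.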

Though emergencies provide unique opportunities to try out advanced tools, it's essential for health systems to ensure doctors are comfortable with them and to use the tools cautiously, with extensive testing and validation, Topol said.

"When people are in the heat of battle and overstretched, it would be great to have an algorithm to support them," he said. "We just have to make sure the algorithm and the AI tool isn't misleading, because lives are at stake here."

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.

This KHN story first published on California Healthline, a service of the California Health Care Foundation.


Playing God: Why artificial intelligence is hopelessly biased – and always will be – TechRadar India

Much has been said about the potential of artificial intelligence (AI) to transform many aspects of business and society for the better. In the opposite corner, science fiction has the doomsday narrative covered handily.

To ensure AI products function as their developers intend - and to avoid a HAL 9000 or Skynet-style scenario - the common narrative suggests that data used as part of the machine learning (ML) process must be carefully curated, to minimise the chances the product inherits harmful attributes.

According to Richard Tomsett, AI Researcher at IBM Research Europe, "our AI systems are only as good as the data we put into them. As AI becomes increasingly ubiquitous in all aspects of our lives, ensuring we're developing and training these systems with data that is fair, interpretable and unbiased is critical."

Left unchecked, the influence of undetected bias could also expand rapidly as appetite for AI products accelerates, especially if the means of auditing underlying data sets remain inconsistent and unregulated.

However, while the issues that could arise from biased AI decision making - such as prejudicial recruitment or unjust incarceration - are clear, the problem itself is far from black and white.

Questions surrounding AI bias are impossible to disentangle from complex and wide-ranging issues such as the right to data privacy, gender and race politics, historical tradition and human nature - all of which must be unraveled and brought into consideration.

Meanwhile, questions over who is responsible for establishing the definition of bias and who is tasked with policing that standard (and then policing the police) serve to further muddy the waters.

The scale and complexity of the problem more than justifies doubts over the viability of the quest to cleanse AI of partiality, however noble it may be.

Algorithmic bias can be described as any instance in which discriminatory decisions are reached by an AI model that aspires to impartiality. Its causes lie primarily in prejudices (however minor) found within the vast data sets used to train machine learning (ML) models, which act as the fuel for decision making.

Biases underpinning AI decision making could have real-life consequences for both businesses and individuals, ranging from the trivial to the hugely significant.

For example, a model responsible for predicting demand for a particular product, but fed data relating to only a single demographic, could plausibly generate decisions that lead to the loss of vast sums in potential revenue.

Equally, from a human perspective, a program tasked with assessing requests for parole or generating quotes for life insurance plans could cause significant damage if skewed by an inherited prejudice against a certain minority group.

According to Jack Vernon, Senior Research Analyst at IDC, the discovery of bias within an AI product can, in some circumstances, render it completely unfit for purpose.

"Issues arise when algorithms derive biases that are problematic or unintentional. There are two usual sources of unwanted biases: data and the algorithm itself," he told TechRadar Pro via email.

"Data issues are self-explanatory enough, in that if features of a data set used to train an algorithm have problematic underlying trends, there's a strong chance the algorithm will pick up and reinforce these trends."

"Algorithms can also develop their own unwanted biases by mistake... Famously, an algorithm for identifying polar bears and brown bears had to be discarded after it was discovered the algorithm based its classification on whether there was snow on the ground or not, and didn't focus on the bear's features at all."

Vernon's example illustrates the eccentric ways in which an algorithm can diverge from its intended purpose - and it's this semi-autonomy that can pose a threat, if a problem goes undiagnosed.

The greatest issue with algorithmic bias is its tendency to compound already entrenched disadvantages. In other words, bias in an AI product is unlikely to result in a white-collar banker having their credit card application rejected erroneously, but may play a role in a member of another demographic (which has historically had a greater proportion of applications rejected) suffering the same indignity.

The consensus among the experts we consulted for this piece is that, in order to create the least prejudiced AI possible, a team made up of the most diverse group of individuals should take part in its creation, using data from the deepest and most varied range of sources.

The technology sector, however, has a long-standing and well-documented issue with diversity where both gender and race are concerned.

In the UK, only 22% of directors at technology firms are women - a proportion that has remained practically unchanged for the last two decades. Meanwhile, only 19% of the overall technology workforce are female, far from the 49% that would accurately represent the ratio of female to male workers in the UK.

Among big tech, meanwhile, the representation of minority groups has also seen little progress. Google and Microsoft are industry behemoths in the context of AI development, but the percentage of black and Latin American employees at both firms remains minuscule.

According to figures from 2019, only 3% of Google's 100,000+ employees were Latin American and 2% were black - both figures up by only 1% over 2014. Microsoft's record is only marginally better, with 5% of its workforce made up of Latin Americans and 3% black employees in 2018.

The adoption of AI in enterprise, on the other hand, skyrocketed during a similar period according to analyst firm Gartner, increasing by 270% between 2015 and 2019. The clamour for AI products, then, could be said to be far greater than the commitment to ensuring their quality.

Patrick Smith, CTO at data storage firm Pure Storage, believes businesses owe it not just to those that could be affected by bias to address the diversity issue, but also to themselves.

"Organisations across the board are at risk of holding themselves back from innovation if they only recruit in their own image. Building a diversified recruitment strategy, and thus a diversified employee base, is essential for AI because it allows organisations to have a greater chance of identifying blind spots that you wouldn't be able to see if you had a homogenous workforce," he said.

"So diversity and the health of an organisation relates specifically to diversity within AI, as it allows them to address unconscious biases that otherwise could go unnoticed."

Further, questions over precisely how diversity is measured add another layer of complexity. Should a diverse data set afford each race and gender equal representation, or should representation of minorities in a global data set reflect the proportions of each found in the world population?

In other words, should data sets feeding globally applicable models contain information relating to an equal number of Africans, Asians, Americans and Europeans, or should they represent greater numbers of Asians than any other group?

The same question can be raised with gender, because the world contains 105 men for every 100 women at birth.
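
The two weighting schemes posed above - equal representation per group versus representation proportional to world population - can be made concrete with a small sketch. The birth-ratio figures are the article's rounded illustration, not authoritative demographics:

```python
# Compute per-group sampling weights for a training set under the two
# schemes discussed above. Population shares are illustrative only.
def sampling_weights(pop_shares, scheme="proportional"):
    groups = list(pop_shares)
    if scheme == "equal":
        # Every group gets the same share of the data set.
        return {g: 1.0 / len(groups) for g in groups}
    # Otherwise, each group's share mirrors its share of the population.
    total = sum(pop_shares.values())
    return {g: pop_shares[g] / total for g in groups}

# Sex ratio at birth of roughly 105 males per 100 females:
births = {"male": 105, "female": 100}
```

Under the proportional scheme males would make up about 51.2% of the data (105/205); under the equal scheme, exactly half. The gap is small for gender but large for, say, continental populations, which is why the choice of scheme matters.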

The challenge facing those whose goal it is to develop AI that is sufficiently impartial (or perhaps proportionally impartial) is the challenge facing societies across the globe: how can we ensure all parties are not only represented but heard, when historical precedent is working all the while to undermine the endeavor?

The importance of feeding the right data into ML systems is clear, correlating directly with AIs ability to generate useful insights. But identifying the right versus wrong data (or good versus bad) is far from simple.

As Tomsett explains, data can be biased in a variety of ways: "the data collection process could result in badly sampled, unrepresentative data; labels applied to the data through past decisions or human labellers may be biased; or inherent structural biases that we do not want to propagate may be present in the data."

"Many AI systems will continue to be trained using bad data, making this an ongoing problem that can result in groups being put at a systemic disadvantage," he added.

It would be logical to assume that removing data types that could possibly inform prejudices - such as age, ethnicity or sexual orientation - might go some way to solving the problem. However, auxiliary or adjacent information held within a data set can also serve to skew output.

An individual's postcode, for example, might reveal much about their characteristics or identity. This auxiliary data could be used by the AI product as a proxy for the primary data, resulting in the same level of discrimination.
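
A toy example, with entirely fabricated data, makes the proxy problem concrete: a model trained on postcode alone, with the protected attribute removed, can still reproduce a historical group-level split exactly, because postcode encodes group membership.

```python
# Fabricated illustration: removing the protected attribute ("group") does not
# remove bias when a correlated proxy (postcode) remains in the data.
from collections import Counter

applicants = [
    {"postcode": "A1", "group": "x", "approved": True},
    {"postcode": "A1", "group": "x", "approved": True},
    {"postcode": "B2", "group": "y", "approved": False},
    {"postcode": "B2", "group": "y", "approved": False},
]

# "Train" a majority-vote model on postcode alone; "group" is never used.
by_postcode = {}
for a in applicants:
    by_postcode.setdefault(a["postcode"], []).append(a["approved"])
model = {pc: Counter(votes).most_common(1)[0][0] for pc, votes in by_postcode.items()}

def predict(applicant):
    # The model sees only the postcode, yet it reproduces the historical
    # group-level disparity exactly, because postcode encodes group here.
    return model[applicant["postcode"]]
```

In this toy data set, every group-x applicant is approved and every group-y applicant rejected, even though the model never saw the group label.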

Further complicating matters, there are instances in which bias in an AI product is actively desirable. For example, if using AI to recruit for a role that demands a certain level of physical strength - such as firefighter - it is sensible to discriminate in favor of male applicants, because biology dictates the average male is physically stronger than the average female. In this instance, the data set feeding the AI product is indisputably biased, but appropriately so.

This level of depth and complexity makes auditing for bias, identifying its source and grading data sets a monumentally challenging task.

To tackle the issue of bad data, researchers have toyed with the idea of "bias bounties," similar in style to the bug bounties used by cybersecurity vendors to weed out imperfections in their services. However, this model operates on the assumption that an individual is equipped to recognize bias against any demographic other than their own - a question worthy of a whole separate debate.

Another compromise could be found in the notion of Explainable AI (XAI), which dictates that developers of AI algorithms must be able to explain in granular detail the process that leads to any given decision generated by their AI model.

"Explainable AI is fast becoming one of the most important topics in the AI space, and part of its focus is on auditing data before it's used to train models," explained Vernon.

"The capability of AI explainability tools can help us understand how algorithms have come to a particular decision, which should give us an indication of whether biases the algorithm is following are problematic or not."

Transparency, it seems, could be the first step on the road to addressing the issue of unwanted bias. If we're unable to prevent AI from discriminating, the hope is we can at least recognise discrimination has taken place.

The perpetuation of existing algorithmic bias is another problem that bears thinking about. How many tools currently in circulation are fueled by significant but undetected bias? And how many of these programs might be used as the foundation for future projects?

When developing a piece of software, it's common practice for developers to draw from a library of existing code, which saves time and allows them to embed pre-prepared functionalities into their applications.

The problem, in the context of AI bias, is that the practice could serve to extend the influence of bias, which can hide away in the nooks and crannies of vast code libraries and data sets.

Hypothetically, if a particularly popular piece of open source code were to exhibit bias against a particular demographic, it's possible the same discriminatory inclination could embed itself at the heart of many other products, unbeknownst to their developers.

According to Kacper Bazyliński, AI Team Leader at software development firm Neoteric, it is relatively common for code to be reused across multiple development projects, depending on their nature and scope.

"If two AI projects are similar, they often share some common steps, at least in data pre- and post-processing. Then it's pretty common to transplant code from one project to another to speed up the development process," he said.

"Sharing highly biased open source data sets for ML training makes it possible that the bias finds its way into future products. It's a task for the AI development teams to prevent this from happening."

Further, Bazyliński notes that it's not uncommon for developers to have limited visibility into the kinds of data going into their products.

"In some projects, developers have full visibility over the data set, but it's quite often that some data has to be anonymized or some features stored in data are not described because of confidentiality," he noted.

This isn't to say code libraries are inherently bad - they are no doubt a boon for the world's developers - but their potential to contribute to the perpetuation of bias is clear.

"Against this backdrop, it would be a serious mistake to... conclude that technology itself is neutral," reads a blog post from Google-owned AI firm DeepMind.

"Even when bias does not originate with software developers, it is still repackaged and amplified by the creation of new products, leading to new opportunities for harm."

Bias is an inherently loaded term, carrying with it a host of negative baggage. But it is possible bias is more fundamental to the way we operate than we might like to think - inextricable from the human character and therefore anything we produce.

According to Alexander Linder, VP Analyst at Gartner, the pursuit of impartial AI is misguided and impractical, by virtue of this very human paradox.

"Bias cannot ever be totally removed. Even the attempt to remove bias creates bias of its own - it's a myth to even try to achieve a bias-free world," he told TechRadar Pro.

Tomsett, meanwhile, strikes a slightly more optimistic note, but also gestures towards the futility of an aspiration to total impartiality.

"Because there are different kinds of bias and it is impossible to minimize all kinds simultaneously, this will always be a trade-off. The best approach will have to be decided on a case by case basis, by carefully considering the potential harms from using the algorithm to make decisions," he explained.

"Machine learning, by nature, is a form of statistical discrimination: we train machine learning models to make decisions (to discriminate between options) based on past data."

The attempt to rid decision making of bias, then, runs at odds with the very mechanism humans use to make decisions in the first place. Without a measure of bias, AI cannot be mobilised to work for us.

It would be patently absurd to suggest AI bias is not a problem worth paying attention to, given the obvious ramifications. But, on the other hand, the notion of a perfectly balanced data set, capable of rinsing all discrimination from algorithmic decision-making, seems little more than an abstract ideal.

Life, ultimately, is too messy. Perfectly egalitarian AI is unachievable, not because it's a problem that requires too much effort to solve, but because the very definition of the problem is in constant flux.

The conception of bias varies in line with changes to societal, individual and cultural preference - and it is impossible to develop AI systems within a vacuum, at a remove from these complexities.

To be able to recognize biased decision making and mitigate its damaging effects is critical, but to eliminate bias is unnatural - and impossible.


Harness artificial intelligence and take control of your health – Newswise

Sedentary behaviours, poor sleep and questionable food choices are major contributors to chronic disease, including diabetes, anxiety, heart disease and many cancers. But what if we could prevent these through the power of smart technologies?

In a new University of South Australia research project announced today and funded by $1,118,593 from the Medical Research Future Fund (MRFF), researchers will help Australians tackle chronic disease through a range of digital technologies to improve their health.

Using apps, wearables, social media and artificial intelligence, the research will show whether technology can modify and improve peoples behaviours to create meaningful and lasting lifestyle changes that can ward off chronic disease.

Chronic disease is the leading cause of illness, disability and death in Australia, with about half of Australians having at least one of eight major conditions: CVD, cancer, arthritis, asthma, back pain, diabetes, pulmonary disease and mental health conditions.

Nearly 40 per cent of chronic disease is preventable through modifiable lifestyle and diet factors.

The research will assess the ability of digital technologies to improve health and wellbeing across a range of populations, health behaviours and outcomes, with a specific focus on how they can negate poor health outcomes associated with high-risk events such as school holidays or Christmas (when people are more likely to indulge and less likely to exercise); how technology could better track activity among hospital inpatients, outpatients and home patients (to help recovery from illness and surgery, leading to improved patient outcomes); and how new artificial intelligence-driven virtual health assistants can boost health among high-risk groups, such as older adults.

Lead researcher, UniSA's Associate Professor Carol Maher, says the research aims to deliver accessible and affordable health solutions for all Australians.

"Poor lifestyle patterns - a lack of exercise, excess sedentary behaviour, a lack of sleep and poor diets - are leading modifiable causes of death and disease in Australia," Assoc Prof Maher says.

"Technology has a huge amount to offer in terms of improving lifestyle and health, especially in terms of personalisation and accessibility, but it has to be done thoroughly and it has to be done well."

"Research plays an important role in helping understand the products that are most effective, which will see us working with existing commercial technologies and applying and testing them in a new way, as well as developing bespoke software for specific, unmet needs."

"The great advantage of technology-delivered programs is that with careful design, once they are developed and evaluated, they can be delivered very affordably and on a massive scale."

"If we are to make any change in the prevalence of chronic disease in Australia, we must plan to do it en masse."

The research aims to bridge the gap between academic rigour and commercial offerings to ensure that every Australian has access to the health supports they need.

"One of the challenges we face is that many people who could benefit from digital health technologies are intimidated by them - for example, older adults who are not that comfortable with technology, or health professionals who are just used to doing things a certain way," Assoc Prof Maher says.

"Change can be hard, but when we're making leaps in the right direction to improve the lifestyle and health of the Australian community, these changes are worth considering."


Pentagon AI chief says the tech could help spot future pandemics earlier – Roll Call

The command is responsible for defending the continental United States and territories and provides military aid to non-military agencies such as the Federal Emergency Management Agency.

The command has deployed Army and Navy medical personnel to New York, sent Navy hospital ships to New York City and Los Angeles, and helped set up field hospitals in areas where local health care facilities were overwhelmed with patients.

Northern Command has said it has been working with several top U.S. technology companies - including Apple, Microsoft, mapping software maker Esri and Monkton, a company that helps developers build secure apps for classified purposes - to help FEMA and other agencies during the pandemic.

Although the Pentagon has faced skepticism from some tech companies in its pursuit of artificial intelligence technologies, and outright refusal by Google to continue collaborating on a Pentagon project to identify and label objects in drone videos, the pandemic appears to have changed the calculation, Shanahan said.

Since the Pentagon launched Project Salus to build predictive models of shortages during the pandemic, there has been an outpouring of support from private companies, as well as major universities, Shanahan said. The top tech companies in the country and their teams of artificial intelligence and machine learning specialists have shown a strong desire to work with the Defense Department, he said.


BMW is using Artificial Intelligence to paint its cars for a perfect result – Hindustan Times

Artificial intelligence can bring even greater precision to controlling highly sensitive systems in automotive production, as a pilot project in the paint shop of the BMW Group's Munich plant has demonstrated.

Despite state-of-the-art filtration technology, the content of the finest dust particles in paint lines varies depending on the ambient air drawn in. If the dust content exceeds the threshold, the still-wet paint can trap particles, visually impairing the painted surface.

Artificial Intelligence (AI) specialists from central planning and the Munich plant have now found a way to avoid this situation altogether. Every freshly painted car body must undergo an automatic surface inspection in the paint shop. Data gathered in these inspections are used to develop a comprehensive database for dust particle analysis. The specialists are now applying AI algorithms to compare live data from dust particle sensors in the paint booths and dryers with this database.

"Data-based solutions help us secure and further extend our stringent quality requirements to the benefit of our customers. Smart data analytics and AI serve as key decision-making aids for our team when it comes to developing process improvements. We have filed for several patents relating to this innovative dust particle analysis technology," said Albin Dirndorfer, Senior Vice President Painted Body, Finish and Surface at the BMW Group.


Two specific examples show the benefits of this new AI solution: Where dust levels are set to rise owing to the season or during prolonged dry periods, the algorithm can detect this trend in good time and is able to determine, for example, an earlier time for filter replacement.
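
BMW's system is proprietary and its details are not disclosed here. As a loose sketch of the general idea of detecting a rising dust trend "in good time" and bringing filter replacement forward, the following fits a least-squares slope to a window of recent particle readings; the threshold and the readings are invented:

```python
# Illustrative sketch only: BMW's actual algorithm is proprietary. This shows
# one simple way a rising dust-particle trend could trigger earlier filter
# replacement, using a least-squares slope over recent sensor readings.
def dust_trend(readings):
    """Least-squares slope of readings per time step."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

def replace_filter_early(readings, slope_threshold=0.5):
    # A sustained upward trend in particle counts suggests the filter is
    # saturating faster than the scheduled maintenance interval assumes.
    return dust_trend(readings) > slope_threshold
```

For readings rising like [10, 11, 13, 14, 16] the slope is 1.5 particles per step and an early replacement would be flagged; a flat series triggers nothing. A production system would, as the article notes, combine many sensors and features rather than a single series.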

Additional patterns can be detected where this algorithm is used alongside other analytical tools. For example, analysis could further show that the facility that uses ostrich feathers to remove dust particles from car bodies needs to be fine-tuned.

The BMW Group's AI specialists see enormous potential in dust particle analysis. Based on information from numerous sensors and data from surface inspections, the algorithm monitors over 160 features relating to the car body and is able to predict the quality of paint application very accurately.

This AI solution will be suitable for application in series production when an even broader database for the algorithm has been developed. In particular, this requires additional measuring points and even more precise sensor data for the car body cleaning stations. The AI experts are confident that once the pilot project at the parent plant in Munich has been completed, it will be possible to launch dust particle analysis also at other vehicle plants.


UM partners with artificial intelligence leader Atomwise to pursue COVID-19 therapies – UM Today

May 22, 2020

Two University of Manitoba researchers have received support from Atomwise, the leader in using artificial intelligence (AI) for small molecule drug discovery, to explore broad-spectrum therapies for COVID-19 and other coronaviruses.


Faculty of Science professor Jörg Stetefeld (chemistry), Tier 1 Canada Research Chair in Structural Biology and Biophysics, and associate professor Mark Fry (biological sciences) received support through Atomwise's Artificial Intelligence Molecular Screen (AIMS) awards program, which seeks to democratize access to AI for drug discovery and enable researchers to accelerate the translation of their research into novel therapies.

"The current pandemic of COVID-19 is caused by a novel virus strain, SARS-CoV-2," says Stetefeld. "To develop the most efficient therapeutic strategies to counteract the SARS-CoV-2 infection, it is crucial to gain a molecular understanding of how one particularly attractive protein target, nsp12, interacts with another key protein named nsp8. Once learned, this knowledge can be used both to develop new drugs and to repurpose existing ones."

Professor Ben Bailey-Elkin, from the Stetefeld laboratory, will test compounds that Atomwise's AI team sends him after they perform an in silico screen of millions of compounds, and carry out the subsequent biochemical and biophysical characterization, significantly reducing the time it would traditionally take to carry out this process. The Atomwise team will use their proprietary AI software to search for promising direct-acting antivirals, which interfere with the function of the virus's targeted proteins.
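As a minimal sketch of the hand-off described above, an in silico screen reduces to ranking model predictions over a compound library and forwarding only the top hits for wet-lab characterization. The compound names and scores below are invented for illustration; Atomwise's actual model and pipeline are proprietary:

```python
# Hypothetical sketch: rank compounds by a model's predicted binding score
# and keep only the best candidates for biochemical/biophysical testing.

def select_hits(predicted_scores, top_k=3):
    """Rank compounds by predicted score (descending) and keep the top_k."""
    ranked = sorted(predicted_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _score in ranked[:top_k]]

# Invented scores standing in for an AI model's predictions
scores = {"cmpd-A": 0.91, "cmpd-B": 0.42, "cmpd-C": 0.77,
          "cmpd-D": 0.88, "cmpd-E": 0.30}
print(select_hits(scores))  # ['cmpd-A', 'cmpd-D', 'cmpd-C']
```

This is why the approach saves time: the expensive laboratory step is run only on a handful of prioritized candidates instead of millions of compounds.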

Professor Fry's laboratory will take advantage of Atomwise's cutting-edge AI to screen a panel of small molecules predicted to interfere with the cellular signaling pathway that is central to the cytokine storm associated with the development of COVID-19 acute respiratory distress syndrome.


"Cytokines are a group of small proteins secreted by cells for the purpose of cell-to-cell communication, and in healthy individuals these cytokines regulate key activities such as immunity, cell growth and tissue repair," says Fry. "A large number of patients with COVID-19 will develop life-threatening pneumonia, accompanied by a so-called cytokine storm, where the body experiences excessive or uncontrolled release of a number of these molecules."

Fry adds, "The cytokine storm is thought to play a major role in the development of COVID-19, and there is some evidence that drugs which inhibit key cytokines such as interleukin-6 may reduce the severity of the disease. It's important to note that many of these inhibitors are part of a therapeutic class called biological drugs. These can be expensive to make and supply may be limited. My hope is that we can develop a small molecule inhibitor of the cytokine storm that will be easy to synthesize and available to all who need it."

"Atomwise's patented AI technology has been proven in hundreds of projects to discover drug leads for a wide variety of diseases," said Dr. Stacie Calad-Thomson, vice president and head of Artificial Intelligence Molecular Screen (AIMS) Partnerships at Atomwise. "We're hopeful that the therapies discovered will not only target this pandemic, but potential future pandemics."

Research at the University of Manitoba is partially supported by funding from the Government of Canada Research Support Fund.

UM Today Staff


Importance and Benefits of Artificial intelligence for Patent Searching – Express Computer

Authored by Amit Aggarwal, co-founder and Director, Effectual Services

Every year, with the growth of new technologies and inventions, there has been astounding growth in the volume of intellectual property literature. Internationally, this data has to be gathered, stored and classified in multiple formats and languages so that it can be used as and when required. However, data alone does not create a competitive advantage: extracting significant, actionable information from this deluge is both a major challenge and an opportunity. Manually analysing patent documents from such a pile of data is increasingly out of the question, as it demands extensive time and resources, so examiners and patent analysts need every available tool at their disposal for this tedious task. One tool with tremendous potential is artificial intelligence (AI). At its core, AI is a computer programmed to mimic the natural intelligence of human beings by learning, reasoning and making decisions.

Search and analytics have evolved from the days of fully constructed Boolean searches: AI-based semantic search algorithms now provide more efficient and accurate results than ever before. A major advantage of artificial intelligence is its ability to deliver repeatable results, as these systems are not hindered by inexperience or fatigue. AI tools can significantly streamline and automate the patent search process, increasing both the quality and the speed of results by reducing the time examiners and analysts spend researching; a prior-art search project that can run into days or weeks, for example, can be performed by an AI tool in a matter of hours. Some of the more advanced existing tools also accept natural-language input, allowing a searcher to use everyday terms that the backend artificial intelligence engine comprehends, recovering comparable documents available in different languages.
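A minimal sketch of the idea behind semantic-style similarity search, assuming a plain bag-of-words vector space and cosine similarity (real patent-search engines use learned, multilingual embeddings, so the documents and query below are purely illustrative):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_prior_art(query, documents):
    """Rank candidate documents by vector similarity to the query text."""
    qv = Counter(query.lower().split())
    scored = [(doc, cosine_similarity(qv, Counter(doc.lower().split())))
              for doc in documents]
    return sorted(scored, key=lambda ds: ds[1], reverse=True)

docs = [
    "rotary engine cooling system using liquid coolant",
    "method for battery thermal management in electric vehicles",
    "a recipe for sourdough bread",
]
for doc, score in rank_prior_art("engine cooling with coolant", docs):
    print(f"{score:.2f}  {doc}")
```

Unlike a strict Boolean query, similarity ranking surfaces documents that share vocabulary or (with learned embeddings) meaning, even when no single Boolean clause matches exactly.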

The European Patent Office (EPO) uses the intelligent machine-translation tool Patent Translate to translate patent publications from 32 languages into the EPO's official languages of English, French and German. The US patent office (USPTO) uses artificial intelligence to help examiners review pending patent applications by augmenting classification and searches, currently a high priority for the office. The UK patent office (UKIPO) also uses artificial intelligence solutions for prior-art searching. IBM offers Watson, an IP advisor that leverages artificial intelligence for fast patent ingestion, better insights and analytics. Turbopatent, a company that develops applications to automate and streamline the patent protection process, has introduced two artificial intelligence products for patent lawyers: Roboreview, a cloud-based product that analyses drafted patent applications, and Rapid Response, a product that assists lawyers in writing responses to office actions.

Many key players in the industry, such as PatSeer and Questel, have been using artificial intelligence in combination with machine learning and semantic-based algorithms to provide patent analytics tools and software, putting a broad range of search and analysis capabilities within easy reach.

There are, however, opposing views on the implementation and benefits of artificial intelligence tools and techniques. Some practitioners are concerned about the peculiarities of the language used within patent documents and doubt that these tools can deal with its inherent ambiguities. AI lacks human reasoning: it is unable to carry out a sanity check of its results or of the inventions themselves, and it lacks the experience that leads to a person's intuitive response to situations. There have also been recorded incidents where AI-based tools failed to perform what they were intended to do.

All in all, it is difficult to say whether AI-based tools will ever completely mimic human beings and perform the same level of analysis, or whether they will remain an additional aid to the patent searcher; time will tell.

If you have an interesting article / experience / case study to share, please get in touch with us at [emailprotected]


Julian Assange’s lawyer says he secretly fathered two …

WikiLeaks founder Julian Assange secretly fathered two children with one of his lawyers while he was holed up at the Ecuadorean embassy in London and fighting extradition to the US, according to a report.

Stella Morris, a South African-born lawyer, began a relationship with Assange, 48, in 2015, she told the Daily Mail on Saturday in an account of the couple's secret romance.

The couple's first son, Gabriel, was born in 2017, said Morris, 37.

Their second child, Max, was born last year. Both births were filmed with a GoPro camera and the footage sent to Assange, the Mail reported.

The couple managed to keep their relationship and the birth of the children secret from the Ecuadorean staff and diplomats who had given Assange refuge at the embassy for seven years, the newspaper reported.

"I love Julian deeply, and I am looking forward to marrying him," Morris told the Mail.

In an even odder twist, British rapper M.I.A. is a godmother to the children, who are both British citizens, the Mail reported.

The pair first met over a cup of tea in London in 2011, when her friend Jennifer Robinson, who was working as Assange's lawyer, introduced them.

Morris, who spent time in Sweden as a child, spoke Swedish fluently and helped defend him against 2010 rape allegations in that country, charges that were later dropped.

By the time Assange fathered his first child with Morris, he had been holed up in the embassy for four years.

"At the beginning, it was a working relationship. I was in the embassy every day and Julian became a friend," she said.

"Over the years he went from being a person I enjoyed seeing to the man I wanted to see most in the world."

"His public image is not what I fell in love with, it's the real person behind it," she gushed.

They even planned to try to marry in the embassy after he picked out a diamond ring for her online.

"At the time that we started trying for a baby, it seemed that life was set to change for the better for Julian," she said.

Assange has met both children though the eldest had to be sneaked into the embassy at just 1 week old for the introduction to his father.

A friend carried the baby inside, pretending he was hers.

Assange also has an older son from a prior relationship.

Assange, who was transferred to a high-security prison in England last year, is wanted in the US on espionage charges for leaking thousands of US intelligence documents.

The couple, who were engaged in 2017, believe that US intelligence operatives tried to steal the DNA from one of Gabriel's diapers when they became suspicious that Assange was the father.

The Mail learned of Assange's secret family through details revealed in court papers filed in his extradition case.

At one time, Australian-born Assange was monitored 24 hours a day by Scotland Yard, but the exorbitant cost of more than $16 million prompted such a backlash that the police detail was scrapped.

In April 2019 Assange was moved to high-security Belmarsh Prison where he now fears for his life due to the coronavirus pandemic and has sought release on bail.

Morris fears she will lose her love to the virus sweeping the globe.

"I am now terrified I will not see him alive again," she said. "Julian has been fiercely protective of me and has done his best to shield me from the nightmares of his life."

"I have lived quietly and privately, raising Gabriel and Max on my own and longing for the day we could be together as a family. Now I have to speak out because I can see his life is on the brink."


'I was told to stop Julian Assange if he tried to flee': on the beat with the UK's volunteer police – The Guardian

On a warm Saturday night in September last year, a man calls 999 to report that somebody has hit him in the face with a glass bottle outside a pub in west London. Special inspector Anthony Kay speeds to the scene in a police van, sirens blaring. As he and several other officers arrive at the pub, the injured man begins swearing at them, threatening to throw his alleged attacker into a nearby canal.

To most observers, the team of six constables in attendance would look completely ordinary, with batons, handcuffs and incapacitant spray attached to their belts. But, despite having the same uniform and powers as regular police, none of them are employed as officers. Kay, 40, is a full-time computer programmer working for a City law firm; Jamie is a recent university graduate; Silvia is a cost analyst; and Tusalan an airport security manager. The team of volunteers also includes a makeup artist and a construction worker who don't want to be named.

For their eight-hour shift, which lasts until 4am on Sunday, the volunteers hurry to reported home invasions, hunt for drug dealers and escort assault victims to hospital. One minute they caution a man they find smoking weed who is in possession of a suspected uninsured Mercedes (the smell of his confiscated drugs fills the police van for the rest of the shift); the next they drive to a street brawl. "Be aware, if there are a lot of them, they will fight us," says Jamie, who began volunteering in 2016 and will soon become a full-time officer. Earlier in the evening, he had told colleagues that he was hoping for a foot chase: "I want a burglar tonight."

***

Kay and his team are among around 10,000 special constables (the official name for Britain's volunteer police) spread across frontline policing, taking on vital duties to an extent that would surprise most members of the public. (Volunteer police are not to be confused with community support officers; the latter are employed police assistants who, unlike volunteers, aren't fully sworn constables and can't arrest people.) Special constables haven't been this needed for decades: last year the number of full-time officers dropped by 12% in England and Wales, to 128,149. Meanwhile, knife crime in England and Wales rose by 7% last year to the highest levels since records began in 2011.

The commitment made last summer by the prime minister, Boris Johnson, to begin replacing the 20,000 regular officers lost in Britain over the past decade is likely to have a limited long-term impact on the need for volunteers. Full-time police take time to train, while Britain's population has grown by 4 million since 2010, and officers' work has increased "phenomenally", according to chief officer John Conway of the Metropolitan police's volunteer service (the country's largest). "I can't see a reduction in policing demand any time soon," he says, citing rises in violent crime, terrorism threats and fraud. "Sometimes, if there is not a special constable there, crime is not going to get policed," one long-serving London special, who did not want to be named, told me.

As coronavirus has swept across Britain, special constables have played a central role in enforcing the nationwide lockdown and social distancing rules, as well as responding to emergencies. This spring they have been out in droves patrolling parks and cities, confiscating alcohol and sending rule-breakers home. They've also made arrests for serious crimes, including domestic abuse, violent burglaries and kidnappings. Four hundred specials were part of a recent operation to seize knives across London during lockdown. Meanwhile, police chiefs are asking businesses to give paid leave to employees who volunteer as specials, amid fears the virus will affect swathes of frontline officers; they also worry that demand will surge as the lockdown eases.

In normal times, specials police prominent events, including the state opening of parliament, and protests by groups such as Extinction Rebellion. They are called to the same crimes as regular officers, and patrol our streets, rivers, royal palaces and airports, either by themselves or alongside full-timers. "Nowadays, we are putting them into [999] response cars on their first shift," one volunteer in Kay's district tells me, calling the initiation a "baptism of fire", after 23 days of training. Specials must commit to a minimum of 16 hours a month, but many give significantly more time, volunteering on nights, weekends and days off. In return, they get travel and refreshment expenses, as well as free use of public transport. Although many stay for two years or less, some specials volunteer for decades.

Kay hadn't even heard of specials until he was violently assaulted 18 years ago and a volunteer took his statement. The father of two has since policed large demonstrations, gone to the aid of a stabbing victim and subdued a violent bodybuilder. He recently began working alongside the criminal investigation department as part of his roughly 40 hours of monthly volunteering. "When you compare policing with what I do in my day job, sitting in front of a computer watching a cursor flashing, it is just on a different planet," he tells me, as other specials on his team speak to a man with face wounds lying in an alleyway. "You are dealing with real problems, not corporate bean-counting." (Kay has since moved on from his role as a computer programmer to a new role, consulting for a legal intelligence firm.)

The volunteer police service is now facing a major challenge: its own numbers are plummeting, by more than 30% nationally in the past four years, which senior officers attribute to reduced budgets for advertising and training, and departures to the regular police who havent been replaced. In a 2016 national survey, specials also cited being mismanaged and feeling undervalued as reasons for leaving. Conway, the Metropolitan special constabulary chief (and Transport for London manager by day), is determined to reverse this and grow his force by nearly 90% over two years. Like police chiefs across the country, he is also working to give them ever more skilled roles. But is it right that volunteers should have quietly assumed some of Britains most critical policing work? And as forces seek to hire more special constables, how much further could their duties extend?

***

For much of modern history, specials were treated as a "hobby-bobby" joke, according to Iain Britton, a senior criminal justice researcher at the University of Northampton. Aside from helping out during major disturbances, many specials spent significant time doing humdrum tasks such as guiding traffic and patrolling local fetes or open days. Little more than a decade ago, they were routinely seen by full-time officers as liabilities and overtime stealers who lacked experience. When I meet special inspector David Lane at the Metropolitan police marine headquarters in east London, he quotes the old music hall song, My Old Man Said Follow the Van, which implies volunteers couldn't even navigate: "You can't trust these specials like the old-time coppers / When you can't find your way home."


When Lane, 58, joined London's marine policing unit in 1991, his fellow specials had little to do. He recalls colleagues in this small squad, which patrols the River Thames, spending their time relaxing over picnics and barbecues on quiet islands. Since then, he has found three floating bodies and arrested pickpockets on the riverbank, who weren't expecting officers to approach from the water. Lane uses policing to wind down from his work as an international cybersecurity consultant. "I always found doing something totally alien to your day job is a form of relaxation." He recently started in a new role, interviewing and training other specials.

Special inspector Wong (a commercial barrister by day) has also seen big changes. When he started policing in 2007, regular officers who had good relationships with specials invited them to join 999 shifts, but this wasn't widespread. Wong has since watched police stations close and emergency responders in his London district drop to around a third of their numbers a decade ago. "In the past, we were always there to provide support," Wong tells me. "Now we are becoming more of a fixture."

He loves swapping his barrister's gown for a police stab vest. The immediacy of breaking up fights and calming angry members of the public contrasts with the indoor meetings and intellectual analysis of his legal work. Plus, as a former magic-circle City lawyer, he says he's financially comfortable and can afford to take paid time off for policing; he volunteers for around 48 hours monthly.

Sergeant Anna Kennedy became a special eight years ago. After a quiet first shift drinking tea, the 50-year-old British Airways flight attendant made her first arrest during a drugs raid on a loft filled with cannabis plants. She was then assigned to secure the Ecuadorian embassy, where WikiLeaks founder Julian Assange had recently taken refuge. Standing on the fire escape, watching Assange cook his dinner, Kennedy mused on the bizarre situation in which she found herself. Passing WikiLeaks supporters would heckle her for obstructing a "freedom fighter". "[Other officers] were saying to me, 'If he tries to get out through the back, you've got to stop him,'" she tells me, as planes descend on the runway behind her at Heathrow. "And I'm thinking, 'Oh my God, I've been policing for seven months, I can't stop Julian Assange.'"

Kennedy recalls being told when she started that her duties would consist of house-to-house inquiries, and patrolling fairs and parades. But a fortnight before we meet, she was among the first uniformed officers on the scene after colleagues found two men with a gun near a pub. She was tasked with securing the area and searching the suspects homes for other weapons.


Kennedy has even turned to writing crime thrillers based on her experiences. Her first novel tells the story of a special sergeant who becomes embroiled in a murder, kidnapping and money-laundering investigation.

Over the past few years, some parts of London and Kent have experimented with specials completely taking over emergency policing. Last spring, special chief inspector Baljit Badesha, 31, led 55 specials who replaced full-time emergency responders for an entire nine-hour shift in north London, partly to allow overstretched police time to catch up on paperwork. The team made a string of arrests, including for serious assaults, sexual offences and robberies.

Badesha never planned to join the police. As a brown-skinned teenager, he often felt stigmatised by officers, particularly following terrorist attacks in the 2000s. As a medical student, he was once grabbed, handcuffed and searched in the street. But, in 2009, he saw an advertisement for specials and decided to represent his community. (Specials are notably more diverse than regulars: 11.1% are from BAME backgrounds, compared with 6.9% of full-timers.) Badesha has since helped arrest two armed robbers, one of whom drew a handgun. In 2014, he was asked to join an investigation into the theft of around £70,000 from an elderly woman by her care worker. Last year, he became a chief inspector, the third most senior rank in the Met's specials.

Badesha finds the up-to-40 hours a month he spends policing alongside his day job working for the council addictive, and likens it to any other hobby. "Some people go and watch movies," he says, echoing the sentiment I hear from several specials: that the work can be more concrete and meaningful than other jobs, and provides a sense of comradeship often lacking in modern life.

Back on the west London night shift, that sense of togetherness is clear, especially when the police pull over for a late dinner at a petrol station. "This is one of our staples. The other is McDonald's," says Jamie, the graduate, who last year spent a monthly average of 111 hours volunteering alongside his studies. There is a lot of banter, and a debate over the merits of deep-fried Mars bars and pizzas ("Mate, they are the shiz," says the construction worker). In the background, the police radio announces that a prisoner is being dropped off at the nearby custody cells. After wolfing down sandwiches and chocolates, the officers are soon back on shift.

***

On a Sunday afternoon last autumn, I head to Wakefield, West Yorkshire, to see where volunteers are trained. West Yorkshire polices modern base includes a firearms range, police dog kennels and a helicopter station. Boris Johnson came here last summer to launch his drive for more full-time police, although he was criticised for turning the appearance into an election-style pitch (an officer behind him fainted in the heat).

Specials train in a large hangar with mock streets, shops, pubs and custody cells. In a sports hall, aspiring officers are handcuffing each other and learning how to escape headlocks, part of their 13 weekends of basic training. (Although specials do the same core safety work as full-timers, their overall training tends to be significantly shorter.) "We will run them up and down, get them tired and out of breath," says the trainer, describing how volunteers must be "puffing and panting" to simulate a foot chase.

Next door, the newest specials form a military-style parade, before swearing the police oath, promising to serve the Queen and to uphold human rights and the law. They collect their warrant cards, surrounded by applauding family. "Please understand that you are police officers," Mark Ridley, a local police chief, tells the graduates from the stage, emphasising that they will have the same responsibilities as full-time constables, and that citizens see no difference between them (the uniforms are virtually identical).

The cohort of 12 includes an entrepreneur, a nurse and a 21-year-old criminology graduate who works for McDonald's. Jane, 49, an assistant manager at an electronics shop, wells up as she accepts an award for the most outstanding in her class. She says she had dreamed of becoming a policewoman when she first finished school, but had been ineligible because she is two inches below the old minimum height requirement (abolished in 1990). "I am 5ft 2in and a smidge on a good day," she says at the coffee reception after the ceremony, adding that she hopes one day to police in the off-road bike squad, fighting motorbike crime. The new volunteers are a committed group: when I check in with them three weeks later, they have already policed for, on average, 44 hours each. Like many specials, several are interested in becoming full-time officers and want to test the job first.


Apart from some professions with a potential conflict of interest, such as parking wardens and soldiers, there are few limits on who can become a special. (Offensive tattoos and drugs are banned, and a criminal record may be a disqualification.) There are volunteers who work as undertakers and university professors, priests and pilots. Some just cant get enough of policing: after more than 20 years of volunteering, Essex special constable Keith Smith, 75, is still subduing suspects; last year he pursued a 29-year-old man in a high-speed car chase, then ran after the suspect into a garden and arrested him.

In London, Conway hopes to achieve his ambitious expansion of specials in part through a national scheme encouraging businesses to give employees time off to do police work. He has also sought to make the work more varied; this may be one reason why more elite units, such as royalty and diplomatic protection teams, have opened up to specials in recent years. Some forces now plan to take specials' powers further; Kent police is among those seeking government approval for some volunteers to carry Tasers.

Ian Acheson, a former volunteer with Devon and Cornwall police, who stepped down in 2012, is among those who are concerned about specials' expanding roles. The security consultant and former prison governor describes volunteer policing as "the best fun you can possibly have with your clothes on", but points out that specials work fewer and more inconsistent hours than regular police, so leaning on them for critical duties is risky. Acheson believes specials should instead focus on neighbourhood work, which has historically been the bread and butter of policing. "That's what the public wants to see," he says. "Neighbourhood policing has been absolutely decimated and in hard-pressed communities, plagued by low-level crime, people are crying out for it."

One of the last specials I speak to, Constable Nor (she doesn't want her full name used), agrees that volunteers have a vital community role. When we meet at her family's restaurant, the 38-year-old Lebanese-born PhD student and part-time law teacher tells me she sees specials as a link between regular citizens and law enforcers. "It's all based on understanding people's needs and culture," she says, between smoking shisha and grilling halloumi cheese. Since joining in 2016, Nor has done numerous early-morning drugs raids and 999 response shifts. She has also worked with SO15, the Metropolitan police's counterterrorism command, engaging with Muslim communities and leaders.

Having interviewed and watched dozens of volunteers at work, it is clear that many are talented, with, in some cases, better people skills than those of regular constables. But as mostly occasional officers, their reflexes and policing knowledge are likely to be less fine-tuned; by their own admission, it is easy for a volunteer's confidence to drop. "If you are not doing it all the time, your skills attrition can be quite high," says one Metropolitan special. "You forget things."

But without specials, Britain would undoubtedly be less safe. For now, at least, they will keep fighting emergencies, sometimes the only people available to respond immediately.

Back in west London, the 999 calls continue to stream through police radios. Somebody is assaulting their partner with metal corn-on-the-cob sticks. A supermarket worker is being attacked. A man is wandering the streets wielding a machete. Outside the pub, special constables Silvia and Tusalan try to pacify the drunk man, whose alleged attacker has left the area. "Don't look [at me] like I'm stupid," the man shouts at them, stumbling about as his words grow increasingly incomprehensible. "I'm clever. You're not a solicitor, you're not a judge, you're police officers." They're not, exactly, but they may be the next best thing.


As Bitcoin Struggles, This Tiny Cryptocurrency Has Soared A Massive 230% – Forbes

Bitcoin and cryptocurrency watchers are nervously waiting for bitcoin to make another move after a sudden sell-off this week.

The bitcoin price, the main driver of the cryptocurrency market, had been more-or-less trading sideways after rallying hard through April.

Now, one small cryptocurrency that isn't even in the top 30 most valuable tokens has suddenly soared, climbing a staggering 230% over the last month.

Many bitcoin and crypto analysts are worried the bitcoin price could be heading lower before it rallies again, but some small cryptocurrencies, such as OmiseGO, have outperformed the wider market.

OmiseGO, an ethereum token that powers a smart contract platform and trades as OMG, was sent sharply higher after San Francisco-based bitcoin and cryptocurrency exchange Coinbase revealed it would list the token.

"The good ol' Coinbase listing pump is back," Larry Cermak, director of research at bitcoin and crypto news and analysis outlet The Block, said via Twitter, pointing to OmiseGO's sharp rally since "it was announced that it's listing on Coinbase."

OmiseGO's smart contract platform, based in Bangkok, is designed to facilitate the movement of funds between traditional payment systems and decentralized blockchains like ethereum.

The OmiseGO price began climbing earlier this month after Coinbase, the largest U.S. bitcoin and crypto exchange, said it would allow Coinbase Pro users to make inbound OmiseGO transfers.

OmiseGO, which has a market value of just $257 million compared to bitcoin's $170 billion, jumped again this week after Coinbase said it would fully list the minor cryptocurrency everywhere but in New York State.

"Coinbase customers can now buy, sell, convert, send, receive, or store OMG," Coinbase said in a blog post on Thursday announcing the listing.

The OMG price is still well below its all-time high of almost $30 per token, set in late 2017 as bitcoin and cryptocurrency mania swept the globe.

The OmiseGO price has soared by 234% in just a month as investors cheer its new Coinbase listing.

The likes of bitcoin and other major cryptocurrencies have also failed to return to their all-time highs, with the bitcoin price now trading around half its December 2017 high.

Some smaller cryptocurrencies, such as chainlink and tezos, have rallied hard in recent months, however, pushed higher by demand for decentralized finance platforms.

Meanwhile, the broader bitcoin and cryptocurrency market is closely watching for price swings after bitcoin went through a supply squeeze earlier this month.

The reward paid to those who maintain the bitcoin network, known as miners, was cut in half on May 11, dropping from 12.5 bitcoin to 6.25.

Some had warned the bitcoin price could crash in the aftermath of this third halving, but most analysts seem confident it will eventually climb.
