Bitcoin (BTC) Plunges to $8,300; Here's What Analysts Are Thinking – Ethereum World News

Bitcoin really hasn't done well over the past day. After printing a false breakout candle on Saturday that brought the price of the asset to $9,200, there was a rapid and violent reversal. What followed was a dramatic and steep downtrend that took the price of BTC as low as $8,300 in the last hour (as of the time of this article's writing), marking a 10% drop from the highs.

Here's what analysts expect next for Bitcoin.

Although the price action that has transpired over the past 24 hours has been decisively bearish for Bitcoin, there are some expecting the asset to bounce.

NebraskanGooner, a founder of exchange Level and a noted crypto trader, remarked that as scary as that drop was, Bitcoin has fallen to his daily trendline support, depicted below. The trendline has acted as both resistance and support for at least two months now, suggesting it is a crucial level to keep an eye on.

With BTC currently holding the trendline NebraskanGooner indicated, he suggested that there is a chance that it can rally 15% or so back to $9,500, the top of the range he defined in the chart above.

NebraskanGooner isn't the only bull in this environment.

Prominent trader Big Cheds recently wrote that Bitcoin has his permission to bounce now. Backing this lofty sentiment, he pointed to a chart showing that Bitcoin has found support at $8,400 more than three times in the past few weeks.

Despite this bullish sentiment, the ball is seemingly in the court of the bears, so to speak. (Case in point: the price of Bitcoin has fallen to $8,200 in the minutes that I've been writing this article.)

Cryptocurrency consultancy founder Burger remarked that Bitcoin could be printing a bearish head and shoulders pattern, which could mark a medium-term reversal for the price of BTC:

H&S pattern on the daily chart for $BTC which often marks the start of a reversal.

He added that with coronavirus FUD in the air, there may be some adverse effects on the cryptocurrency market, as has already been seen in traditional markets.

See the original post:
Bitcoin (BTC) Plunges to $8,300; Here's What Analysts Are Thinking - Ethereum World News

Bitcoin Halving is Less Than 10,000 Blocks Away, Will Prices Soar? – Bitcoinist

With just over two months to go and BTC still struggling under $9K, will Bitcoin's halving really affect its price?

The Bitcoin halving is currently less than 10,000 blocks away, as tweeted out by Bitcoin Core developer and educator Jimmy Song. The majority of people in the space anticipate it will have a major impact on bitcoin's price. This is for several reasons.

Just as the supply of bitcoins is limited to 21 million, the mining reward for generating new blocks is reduced every four years or so, every 210,000 blocks. It is cut in half, hence the term "halving" (sometimes "halvening"). This will carry on until all 21 million bitcoins are released into circulation.

With the capped supply, Nakamoto ensured that Bitcoin, unlike fiat currencies, need never lose its purchasing power over time. In fact, a capped supply dramatically increases BTC's odds of steadily increasing in price in the future.

This rise in price is what allows bitcoin mining to remain profitable even as the reward is reduced over time.

The mining reward is made up of the block subsidy and the transaction fees. The subsidy consists of newly generated bitcoins and is currently the largest part of the reward. The other part is made up of transaction fees paid by all the transactions included in the block.

The current reward is 12.5 bitcoins plus TX fees for the discovery of a new block. After the next Bitcoin halving the mining reward will be cut in half to 6.25 BTC. This will carry on until all bitcoins are released, at which point the network should be sustainable on transaction fees alone.
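The subsidy schedule described above can be sketched in a few lines of Python. This is a simplified model for illustration, not Bitcoin's consensus code, though like real nodes it works in integer satoshis: the subsidy starts at 50 BTC and halves every 210,000 blocks until it rounds down to zero.

```python
SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000  # blocks between halvings

def block_subsidy(height: int) -> int:
    """Return the block subsidy in satoshis at the given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:  # guard: shifting by 64+ bits would be meaningless
        return 0
    return (50 * SATOSHIS_PER_BTC) >> halvings  # halve once per era

def total_supply() -> float:
    """Sum every era's issuance to get the total BTC ever created."""
    total, height = 0, 0
    while block_subsidy(height) > 0:
        total += block_subsidy(height) * HALVING_INTERVAL
        height += HALVING_INTERVAL
    return total / SATOSHIS_PER_BTC
```

Summing the whole schedule lands just under 21 million BTC, which is why the cap and the halving schedule are really two views of the same rule.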

The first Bitcoin halving happened on Nov 28, 2012, when the mining reward was reduced to 25 bitcoins. At the time of the halving, the price of BTC was approximately $11. Over the next year, Bitcoin would see its price increase to as much as $1,135 on Nov 29, 2013, a dramatic hike of 10,218%.

The second Bitcoin halving occurred on July 16, 2016, when the reward was reduced to its current rate of 12.5 bitcoins per block. This time around, the price did not react immediately.

In fact, after the last halving, BTC was locked in a rather dull trading range between $500 and $800. This lasted all the way through to the end of the year. Then, on Dec 21, 2016, the price broke through $800 and the halving rally was underway at last.

Over the next 12 months, an explosive bull market ensued, with Bitcoin reaching its all-time high of $19,862 on Dec 18, 2017, a 2,827% hike. So, based on these past results, it's not surprising the community is getting excited.

Many prominent analysts in the space expect the halving to have a dramatic impact on bitcoin's price. These include Fundstrat Managing Partner Thomas Lee, who sees bitcoin's price more than tripling in 2020.

Other major influencers, including Morgan Creek Digital's Anthony Pompliano, have frequently tweeted out their excitement over the upcoming event.

Their enthusiasm is echoed by traders and HODLers alike who believe that the price of bitcoin will explode to the upside very soon.

However, it's not a hard and fast rule that history will repeat itself. As one Redditor commented:

It's a game of supply and demand. The halving reduces the supply... so if demand stays the same, the price will have to go up.

February's price decline was a decisive blow to the Bitcoin bulls. If demand decreases and prices dwindle, the reduced mining reward could leave miners struggling and even force them out of business.

Even though bitcoin maximalists like Max Keiser are calling for a $400K bitcoin soon, it's quite unlikely that bitcoin will see a dramatic price increase like those that followed the previous two halvings.

In fact, there was a large reduction in percentage gains from the 2016 halving compared to 2012: some 72% less.

So let's make an educated guess. If we assume that the rally will be 72% smaller than the one that followed the 2016 halving, then we can expect BTC to make a still-substantial gain of 797% this time around.

Based on a BTC price of $9K at the next halving, we could expect to see its price reach as much as $71,730 in about 12 to 18 months from May 2020. This means that BTC's price may not see any dramatic action for at least a year after the next halving.
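The back-of-the-envelope arithmetic above can be checked directly. This sketch uses the article's own figures; note that the quoted $71,730 target is 797% of the $9,000 starting price.

```python
# Checking the article's projection using its own figures.
gain_2012 = 10_218  # % gain in the year after the 2012 halving
gain_2016 = 2_827   # % gain in the rally after the 2016 halving

# The percentage gain shrank by roughly 72% between the two cycles:
reduction = 1 - gain_2016 / gain_2012  # about 0.72

# Applying a ~72% reduction again gives the article's 797% figure,
# and 797% of the $9,000 starting price is the quoted target:
projected_gain_pct = 797
start_price = 9_000
target = start_price * projected_gain_pct / 100  # 71,730
```

Of course, extrapolating from two data points is speculation, which the article itself concedes next.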

Of course, these are just predictions, and it's impossible to predict the future direction of any speculative asset. But, with the information at hand, it looks likely that 2021 will be a good year for the BTC price.

Will Bitcoin's price react positively to the upcoming BTC halving? Let us know your thoughts below!

Images via Shutterstock, Twitter: @jimmysong, @CryptoManagers, @PBlockstar, @APompliano

Read the original here:
Bitcoin Halving is Less Than 10,000 Blocks Away, Will Prices Soar? - Bitcoinist

Crypto Bulls Roadshow Coming to Over 15 Indian Cities With Government Participation – Bitcoin News

India's Crypto Bulls Roadshow, a nonprofit initiative to prepare India for the next bull run, is coming up, and government organizations are joining the drive. Currently, 15 cities in India are planned for, but more may be added based on demand. There is no fee to participate in the roadshow, and there will be online voting for top influencers of the Indian crypto ecosystem.

Also read: Bitcoin Legal in India: Exchanges Resume INR Banking Service After Supreme Court Verdict Allows Cryptocurrency

India's Crypto Bulls Roadshow is a nonprofit initiative by Kumar Gaurav, CEO of crypto banking platform Cashaa, and Gaurav Dubey, CEO of blockchain investment advisory firm O1ex. Cashaa launched its Indian operations in October last year. O1ex, a Dubai-headquartered company with IT operations based out of Kanpur, India, will be the sponsor of the roadshow.

The event aims to educate Indian crypto users about real blockchain technology to prepare India for the next bull run, the roadshow website details, adding that it will showcase crypto projects, create public awareness, and build a strong Indian crypto community. The website continues:

Now, it's time to prepare India for the next bull run and show the world that India is not less than the USA or China.

Cashaa's Gaurav shared with news.Bitcoin.com that the roadshow is an initiative to bring the Indian crypto industry back together after the huge damage done by the banking restriction imposed by the central bank.

The roadshow document notes: In the recent Supreme Court hearing, it became clear that crypto is not illegal in India. It was nearly two years ago that the Reserve Bank of India clamped down on a fast-growing market for cryptocurrencies in the country. That impacted the cash on-ramp for the crypto market in India, even though there is no legal ban on its use in the country. The Supreme Court of India quashed the RBI ban on the crypto industry on Wednesday.

Cashaa's CEO added:

Many government organizations, such as law enforcement (police) and municipal corporations, are also joining this drive to educate citizens about bitcoin and raise awareness to protect people from crypto scams. After the holiday (12/03/2020), we are expecting huge participation from other organizations and regulators, such as Income Tax and SEBI, to be part of this program.

The current plan is for the roadshow to start on April 3 and run through April 26. But due to current excitement in India, we might add a few more cities during the roadshow, due to which it may last up to 30th April, Gaurav revealed to news.Bitcoin.com, elaborating:

Due to the Supreme Court verdict, the revolution has grown, and bitcoin community managers and evangelists from many different cities have joined it. So far we have added Chennai, Visakhapatnam, Bhubaneswar, Kolkata, Patna, and Kanpur, covering a total of 7,000 km. The start date will be 3rd April.

The 15 cities planned for so far are Delhi, Jaipur, Udaipur, Ahmedabad, Surat, Mumbai, Pune, Hyderabad, Bengaluru, Chennai, Visakhapatnam, Bhubaneswar, Kolkata, Patna, and Kanpur.

Prior to the actual roadshow, there will be meetings with all the exchanges and projects participating in the event. The chain of meetups, meetings with the local governments, large enterprises, and sessions at the top accelerators with 500 plus startups will create an everlasting ripple effect across the nation, the roadshow website notes.

The current roadshow timetable is as follows:

New Delhi: April 3 and 4 (2 events, in North and South Delhi)
Jaipur: April 5
Udaipur: April 7
Ahmedabad: April 8
Surat: April 9
Mumbai: April 10 and 11
Pune: April 12 and 13
Hyderabad: April 15
Bangalore: April 17

Crypto projects, influencers, event organizers, traders, and investors from around the world are invited to participate in the roadshow. Attendees will soon be able to select the city and register for the event on the Crypto Bulls Roadshow website (Cryptobulls.in), Gaurav confirmed, noting:

There is no fee for participants, unlike other events in India. The India Crypto Bulls will be a pure crypto event with a focus on the adoption of the public chain.

As part of the event, there will be online voting to pick the best exchange, best blockchain project, and best Indian influencer, Gaurav further said. The winner of each category will receive an award at the gala dinner at the end of the roadshow (the venue will be announced by 30th March) and represent India on 12th May as a speaker in the India Crypto Bulls segment at Consensus 2020 in New York. He added that online voting will start on March 16 and continue to the end of the roadshow, but nominations are open now.

What do you think of this India Crypto Bulls Roadshow? Do you want to participate? Let us know in the comments section below.

Disclaimer: This article is for informational purposes only. It is not an offer or solicitation of an offer to buy or sell, or a recommendation, endorsement, or sponsorship of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

Images courtesy of Shutterstock, Cashaa, and India Crypto Bulls Roadshow.


A student of Austrian Economics, Kevin found Bitcoin in 2011 and has been an evangelist ever since. His interests lie in Bitcoin security, open-source systems, network effects and the intersection between economics and cryptography.

See more here:
Crypto Bulls Roadshow Coming to Over 15 Indian Cities With Government Participation - Bitcoin News

Doing machine learning the right way – MIT News

The work of MIT computer scientist Aleksander Madry is fueled by one core mission: doing machine learning the right way.

Madry's research centers largely on making machine learning, a type of artificial intelligence, more accurate, efficient, and robust against errors. In his classroom and beyond, he also worries about questions of ethical computing, as we approach an age when artificial intelligence will have great impact on many sectors of society.

"I want society to truly embrace machine learning," says Madry, a recently tenured professor in the Department of Electrical Engineering and Computer Science. "To do that, we need to figure out how to train models that people can use safely, reliably, and in a way that they understand."

Interestingly, his work with machine learning dates back only a couple of years, to shortly after he joined MIT in 2015. In that time, his research group has published several critical papers demonstrating that certain models can be easily tricked to produce inaccurate results and showing how to make them more robust.

In the end, he aims to make each model's decisions more interpretable by humans, so researchers can peer inside to see where things went awry. At the same time, he wants to enable nonexperts to deploy the improved models in the real world for, say, helping diagnose disease or control driverless cars.

"It's not just about trying to crack open the machine-learning black box. I want to open it up, see how it works, and pack it back up, so people can use it without needing to understand what's going on inside," he says.

For the love of algorithms

Madry was born in Wroclaw, Poland, where he attended the University of Wroclaw as an undergraduate in the mid-2000s. While he harbored interest in computer science and physics, "I actually never thought I'd become a scientist," he says.

An avid video gamer, Madry initially enrolled in the computer science program with the intention of programming his own games. But after joining friends in a few classes in theoretical computer science and, in particular, the theory of algorithms, he fell in love with the material. Algorithm theory aims to find efficient optimization procedures for solving computational problems, which requires tackling difficult mathematical questions. "I realized I enjoy thinking deeply about something and trying to figure it out," says Madry, who wound up double-majoring in physics and computer science.

When it came to delving deeper into algorithms in graduate school, he went to his first choice: MIT. Here, he worked under both Michel X. Goemans, a major figure in applied math and algorithm optimization, and Jonathan A. Kelner, who had just arrived at MIT as junior faculty working in that field. For his PhD dissertation, Madry developed algorithms that solved a number of longstanding problems in graph algorithms, earning the 2011 George M. Sprowls Doctoral Dissertation Award for the best MIT doctoral thesis in computer science.

After his PhD, Madry spent a year as a postdoc at Microsoft Research New England before teaching for three years at the Swiss Federal Institute of Technology Lausanne, which Madry calls "the Swiss version of MIT." But his alma mater kept calling him back: "MIT has the thrilling energy I was missing. It's in my DNA."

Getting adversarial

Shortly after joining MIT, Madry found himself swept up in a novel science: machine learning. In particular, he focused on understanding the re-emerging paradigm of deep learning. That's an artificial-intelligence application that uses multiple computing layers to extract high-level features from raw input, such as using pixel-level data to classify images. MIT's campus was, at the time, buzzing with new innovations in the domain.

But that begged the question: Was machine learning all hype or solid science? "It seemed to work, but no one actually understood how and why," Madry says.

Answering that question set his group on a long journey, running experiment after experiment on deep-learning models to understand the underlying principles. A major milestone in this journey was an influential paper they published in 2018, developing a methodology for making machine-learning models more resistant to adversarial examples. Adversarial examples are slight perturbations to input data that are imperceptible to humans, such as changing the color of one pixel in an image, but cause a model to make inaccurate predictions. They illuminate a major shortcoming of existing machine-learning tools.
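As an illustration of the idea (a toy sketch, not the group's actual method): even for a simple linear classifier, nudging every input dimension slightly against the sign of its weight can flip the prediction while each individual change stays tiny. This is the intuition behind gradient-sign attacks on deep models.

```python
def predict(weights, x):
    """Toy linear classifier: positive score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial(weights, x, eps):
    """Shift every dimension by eps against the currently predicted class."""
    direction = -1.0 if predict(weights, x) == 1 else 1.0
    return [xi + direction * eps * sign(wi) for wi, xi in zip(weights, x)]

weights = [0.02, -0.03, 0.01, 0.04, -0.02]
x = [1.0] * 5                             # classified as 1 (score = 0.02)
x_adv = adversarial(weights, x, eps=0.2)  # no coordinate moves more than 0.2
```

Because the perturbation is spread across all dimensions, its effect on the score accumulates even though no single input changes much, which is why high-dimensional models like image classifiers are so vulnerable.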

Continuing this line of work, Madrys group showed that the existence of these mysterious adversarial examples may contribute to how machine-learning models make decisions. In particular, models designed to differentiate images of, say, cats and dogs, make decisions based on features that do not align with how humans make classifications. Simply changing these features can make the model consistently misclassify cats as dogs, without changing anything in the image thats really meaningful to humans.

Results indicated that some models, which may be used to, say, identify abnormalities in medical images or help autonomous cars identify objects in the road, aren't exactly up to snuff. "People often think these models are superhuman, but they didn't actually solve the classification problem we intend them to solve," Madry says. "And their complete vulnerability to adversarial examples was a manifestation of that fact. That was an eye-opening finding."

That's why Madry seeks to make machine-learning models more interpretable to humans. New models he's developed show how much certain pixels in the images a system is trained on can influence its predictions. Researchers can then tweak the models to focus on pixel clusters more closely correlated with identifiable features, such as detecting an animal's snout, ears, and tail. In the end, that will help make the models more humanlike, or superhumanlike, in their decisions. To further this work, Madry and his colleagues recently founded the MIT Center for Deployable Machine Learning, a collaborative research effort working toward building machine-learning tools ready for real-world deployment.

"We want machine learning not just as a toy, but as something you can use in, say, an autonomous car, or health care. Right now, we don't understand enough to have sufficient confidence in it for those critical applications," Madry says.

Shaping education and policy

Madry views artificial intelligence and decision making (AI+D is one of the three new academic units in the Department of Electrical Engineering and Computer Science) as the interface of computing that's going to have the biggest impact on society.

In that regard, he makes sure to expose his students to the human aspect of computing. In part, that means considering the consequences of what they're building. Often, he says, students will be overly ambitious in creating new technologies, but they haven't thought through potential ramifications for individuals and society. "Building something cool isn't a good enough reason to build something," Madry says. "It's about thinking not about whether we can build something, but whether we should build something."

Madry has also been engaging in conversations about laws and policies to help regulate machine learning. A point of these discussions, he says, is to better understand the costs and benefits of unleashing machine-learning technologies on society.

"Sometimes we overestimate the power of machine learning, thinking it will be our salvation. Sometimes we underestimate the cost it may have on society," Madry says. "To do machine learning right, there's still a lot left to figure out."

See the original post:
Doing machine learning the right way - MIT News

What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps – The Register

Achieving production-level governance with machine-learning projects currently presents unique challenges. A new space of tools and practices is emerging under the name MLOps. The space is analogous to DevOps but tailored to the practices and workflows of machine learning.

Machine learning models make predictions for new data based on the data they have been trained on. Managing this data in a way that can be safely used in live environments is challenging, and it is one of the key reasons why 80 per cent of data science projects never make it to production, according to an estimate from Gartner.

It is essential that the data is clean, correct, and safe to use without any privacy or bias issues. Real-world data can also continuously change, so inputs and predictions have to be monitored for any shifts that may be problematic for the model. These are complex challenges that are distinct from those found in traditional DevOps.

DevOps practices are centred on the build and release process and continuous integration. Traditional development builds are packages of executable artifacts compiled from source code. Non-code supporting data in these builds tends to be limited to relatively small static config files. In essence, traditional DevOps is geared to building programs consisting of sets of explicitly defined rules that give specific outputs in response to specific inputs.

In contrast, machine-learning models make predictions by indirectly capturing patterns from data, not by formulating all the rules. A characteristic machine-learning problem involves making new predictions based on known data, such as predicting the price of a house using known house prices and details such as the number of bedrooms, square footage, and location. Machine-learning builds run a pipeline that extracts patterns from data and creates a weighted machine-learning model artifact. This makes these builds far more complex and the whole data science workflow more experimental. As a result, a key part of the MLOps challenge is supporting multi-step machine learning model builds that involve large data volumes and varying parameters.
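The house-price example above can be made concrete with a minimal sketch (the data here is made up for illustration). The point is that the model's "rules", the slope and intercept of the fitted line, are extracted from data rather than written by hand, which is exactly why the build artifact depends on the training data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var               # slope learned from the data
    b = mean_y - a * mean_x     # intercept learned from the data
    return a, b

# Made-up training data: (square footage, sale price)
sqft  = [1000, 1500, 2000, 2500]
price = [200_000, 300_000, 400_000, 500_000]

a, b = fit_line(sqft, price)
predicted = a * 1800 + b  # prediction for a new 1,800 sq ft house
```

Change the training data and the artifact changes, even though not a line of code did; that is the versioning problem MLOps has to solve that traditional DevOps does not.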

To run projects safely in live environments, we need to be able to monitor for problem situations and see how to fix things when they go wrong. There are pretty standard DevOps practices for how to record code builds in order to go back to old versions. But MLOps does not yet have standardisation on how to record and go back to the data that was used to train a version of a model.

There are also special MLOps challenges to face in the live environment. There are largely agreed DevOps approaches for monitoring for error codes or an increase in latency. But it's a different challenge to monitor for bad predictions. You may not have any direct way of knowing whether a prediction is good, and may have to instead monitor indirect signals such as customer behaviour (conversions, rate of customers leaving the site, any feedback submitted). It can also be hard to know in advance how well your training data represents your live data. For example, it might match well at a general level but there could be specific kinds of exceptions. This risk can be mitigated with careful monitoring and cautious management of the rollout of new versions.
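One simple form the monitoring described above can take is an input-drift check: compare a live window of a feature against its training-time distribution and flag when the live mean drifts too far. Real systems use richer tests (population stability index, Kolmogorov-Smirnov), but the shape is the same: baseline statistics, live window, threshold, alert. A minimal sketch:

```python
import statistics

def drift_alert(train_values, live_values, threshold=3.0):
    """Flag when the live mean is an implausible draw from the training data.

    Uses a simple z-test on the mean: alert if the live mean sits more
    than `threshold` standard errors from the training mean.
    """
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    std_err = sigma / (len(live_values) ** 0.5)
    return abs(live_mu - mu) > threshold * std_err
```

In practice such a check would run per feature on a schedule, feeding the same alerting pipeline that already watches error rates and latency.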

The effort involved in solving MLOps challenges can be reduced by leveraging a platform and applying it to the particular case. Many organisations face a choice of whether to use an off-the-shelf machine-learning platform or try to put an in-house platform together themselves by assembling open-source components.

Some machine-learning platforms are part of a cloud provider's offering, such as AWS SageMaker or AzureML. This may or may not appeal, depending on the cloud strategy of the organisation. Other platforms are not cloud-specific and instead offer self-install or a custom hosted solution (e.g., Databricks' MLflow).

Instead of choosing a platform, organisations can instead choose to assemble their own. This may be a preferred route when requirements are too niche to fit a current platform, such as needing integrations to other in-house systems, or if data has to be stored in a particular location or format. Choosing to assemble an in-house platform requires learning to navigate the ML tool landscape. This landscape is complex, with different tools specialising in different niches, and in some cases there are competing tools approaching similar problems in different ways (see the Linux Foundation's LF AI project for a visualization, or categorised lists from the Institute for Ethical AI).

The Linux Foundation's diagram of MLOps tools

For organisations using Kubernetes, the Kubeflow project presents an interesting option, as it aims to curate a set of open-source tools and make them work well together on Kubernetes. The project is led by Google, and top contributors (as listed by IBM) include IBM, Cisco, Caicloud, Amazon, and Microsoft, as well as ML tooling provider Seldon, Chinese tech giant NetEase, Japanese tech conglomerate NTT, and hardware giant Intel.

Challenges around reproducibility and monitoring of machine learning systems are governance problems. They need to be addressed in order to be confident that a production system can be maintained and that any challenges from auditors or customers can be answered. For many projects these are not the only challenges, as customers might reasonably expect to be able to ask why a prediction concerning them was made. In some cases this may also be a legal requirement, as the European Union's General Data Protection Regulation states that a "data subject" has a right to "meaningful information about the logic involved" in any automated decision that relates to them.

Explainability is a data science problem in itself. Modelling techniques can be divided into black-box and white-box, depending on whether the method can naturally be inspected to provide insight into the reasons for particular predictions. With black-box models, such as proprietary neural networks, the options for interpreting results are more restricted and more difficult to use than the options for interpreting a white-box linear model. In highly regulated industries, it can be impossible for AI projects to move forward without supporting explainability. For example, medical diagnosis systems may need to be highly interpretable so that they can be investigated when things go wrong or so that the model can aid a human doctor. This can mean that projects are restricted to working with models that admit of acceptable interpretability. Making black-box models more interpretable is a fast-growth area, with new techniques rapidly becoming available.
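The white-box case above can be illustrated with a sketch: a linear model's weights can be read directly as per-feature contributions to a prediction, which is exactly the kind of "meaningful information about the logic involved" that a black-box model cannot naturally offer. The feature names and weights here are made up for illustration.

```python
def explain(weights, feature_names, x):
    """Break a linear score into per-feature contributions, largest first."""
    contributions = {
        name: w * xi for name, w, xi in zip(feature_names, weights, x)
    }
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical credit-scoring linear model
weights = [0.8, -0.5, 0.1]
names = ["income", "debt", "age"]

top = explain(weights, names, [2.0, 3.0, 1.0])
```

For a black-box model no such direct decomposition exists, which is why model-agnostic techniques (surrogate models, permutation importance, SHAP-style attributions) are a fast-growing area.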

The MLOps scene is evolving as machine learning becomes more widely adopted, and we learn more about what counts as best practice for different use cases. Different organisations have different machine learning use cases and therefore differing needs. As the field evolves, we'll likely see greater standardisation, and even the more challenging use cases will become better supported.

Ryan Dawson is a core member of the Seldon open-source team, providing tooling for machine-learning deployments to Kubernetes. He has spent 10 years working in the Java development scene in London across a variety of industries.

Bringing DevOps principles to machine learning throws up some unique challenges, not least very different workflows and artifacts. Ryan will dive into this topic in May at Continuous Lifecycle London 2020, a conference organized by The Register's mothership, Situation Publishing.

You can find out more, and book tickets, right here.


More:
What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps - The Register

Innovative AI and Machine-Learning Technology That Detects Emotion Wins Top Award – PR.com


CampaignTester is a cutting-edge mobile-based platform that utilizes emotion analytics and machine learning to detect a user's emotion and engagement level while watching video content. Their proprietary platform aims to deliver key audience insights for organizations to validate, revise, and perfect their video content messaging.

Campaigns & Elections Reed Award winners represent the best-of-the-best in the political campaign and advocacy industries. The 2020 Reed Awards honored winners across 16 distinct category groups, representing the different specialisms of the political campaign industry, with distinct category groups for International (non-US) work, and Grassroots Advocacy work.

"It was particularly meaningful being recognized among some of the finest marketers and technologists in the world," Bill Lickson, CampaignTester's Chief Operating Officer, affirmed. "I was thrilled and honored to accept this prestigious award on behalf of our entire talented team."

Aaron Itzkowitz, Chief Executive Officer and Founder of CampaignTester, added, "This award is a great start to what looks to be a wonderful year for our client-partners and our company. While our technology was recognized for excellence in political marketing, our technology is for any industry that uses video in marketing."

About Campaigns & Elections Reed Awards

The Campaigns & Elections Reed Awards, named after Campaigns & Elections founder Stanley Foster Reed, recognize excellence in political campaigning, campaign management, political consulting, political design, and grassroots & advocacy.

For more information about CampaignTester, visit CampaignTester.com or contact Press@campaigntester.com, 352-247-7865.

Read the original here:
Innovative AI and Machine-Learning Technology That Detects Emotion Wins Top Award - PR.com

This website uses machine learning and your webcam to train you not to touch your face – Boing Boing

By not touching your face, you reduce the chances of getting sick from a virus or bacteria. This website, called Do Not Touch Your Face, uses your webcam to analyze your face and alert you with a tone if it catches you touching your face.

From the FAQ:

How does this work?

Using your webcam, you train a machine learning algorithm (specifically Tensorflow.js) to recognize you touching your face and not touching your face. Once it's trained, it watches and alerts you when you touch your face.
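The "record examples of each class, then match" training step the FAQ describes can be sketched as nearest-neighbour classification over feature vectors. This is a rough Python analogue for illustration only; the site itself runs Tensorflow.js in the browser, and the two-number "feature vectors" here are stand-ins for real image embeddings.

```python
import math

def nearest_label(examples, x):
    """examples: list of (feature_vector, label). Return the nearest label."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(examples, key=lambda ex: dist(ex[0], x))[1]

# Hypothetical labelled frames recorded during the training step
examples = [
    ([0.9, 0.1], "touching"),
    ([0.8, 0.2], "touching"),
    ([0.1, 0.9], "not touching"),
    ([0.2, 0.8], "not touching"),
]
```

Each new webcam frame would be embedded the same way and classified against the recorded examples, which is also why retraining from scratch on every page load (as the FAQ notes below) is cheap.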

Why shouldn't I touch my face?

The CDC recommends not touching your face as one action you can take to prevent getting COVID-19. Other things you should do: stay home if you're sick and avoid contact with other sick people. But you probably knew that already.

The alerts aren't working!

Try refreshing the page and trying again. Every time you reload the page, the algorithm retrains itself.

Do you keep my information?

Nope. This entire site runs locally: all the calculations from your webcam and alerts are done on your computer and are never sent over the internet.

Will this stop me from getting COVID-19?

Not for sure, but it might help.

Who made this?

This was made with love and fear by Mike Bodge, Brian Moore, and Isaac Blankensmith. Be safe out there.


More:
This website uses machine learning and your webcam to train you not to touch your face - Boing Boing

Machine Learning Software Market Increasing Demand with Leading Player, Comprehensive Analysis, Forecast to 2026 – News Times

The report on the Machine Learning Software Market is a compilation of broad research studies that will help players and stakeholders make informed business decisions in the future. It offers specific and reliable recommendations for players to better tackle challenges in the Machine Learning Software market. Furthermore, it serves as a powerful resource providing up-to-date and verified information and data on various aspects of the Machine Learning Software market. Readers will gain a deeper understanding of the competitive landscape and its future scenarios, crucial dynamics, and leading segments of the market. Buyers of the report will have access to accurate PESTLE, SWOT, and other types of analysis on the Machine Learning Software market.

The Global Machine Learning Software Market has grown at a fast pace, with substantial growth rates over the last few years, and it is estimated that the market will grow significantly over the forecast period, i.e. 2019 to 2026.

Machine Learning Software Market: A Competitive Perspective

Competition is a major subject in any market research analysis. With the help of the competitive analysis provided in the report, players can easily study key strategies adopted by leading players of the Machine Learning Software market. They will also be able to plan counterstrategies to gain a competitive advantage in the Machine Learning Software market. Major as well as emerging players of the Machine Learning Software market are closely studied taking into consideration their market share, production, revenue, sales growth, gross margin, product portfolio, and other significant factors. This will help players to become familiar with the moves of their toughest competitors in the Machine Learning Software market.

Machine Learning Software Market: Drivers and Limitations

This section of the report explains the various drivers and restraints that have shaped the global market. The detailed analysis of market drivers gives readers a clear overview of the market, including the market environment, government policy, product innovation and development, and market risks.

The research report also identifies opportunities, restraints, and challenges in the Machine Learning Software market. This information helps readers identify areas of potential and plan strategies to pursue them. The analysis of obstacles and challenges also helps readers understand how companies can avoid or mitigate them.

Machine Learning Software Market: Segment Analysis

The segmental analysis section of the report includes a thorough research study on key type and application segments of the Machine Learning Software market. All of the segments considered for the study are analyzed in quite some detail on the basis of market share, growth rate, recent developments, technology, and other critical factors. The segmental analysis provided in the report will help players to identify high-growth segments of the Machine Learning Software market and clearly understand their growth journey.

Ask for Discount @ https://www.marketresearchintellect.com/ask-for-discount/?rid=173628&utm_source=NT&utm_medium=888

Machine Learning Software Market: Regional Analysis

This section of the report contains detailed information on the market in different regions. Each region offers a different market size because each state has different government policies and other factors. The regions included in the report are North America, Europe, Asia Pacific, the Middle East and Africa. Information about the different regions helps the reader to better understand the global market.

Table of Content

1 Introduction of Machine Learning Software Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology of Market Research Intellect

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Machine Learning Software Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Forces Model
4.4 Value Chain Analysis

5 Machine Learning Software Market, By Deployment Model

5.1 Overview

6 Machine Learning Software Market, By Solution

6.1 Overview

7 Machine Learning Software Market, By Vertical

7.1 Overview

8 Machine Learning Software Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Machine Learning Software Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Request Report Customization @ https://www.marketresearchintellect.com/product/global-machine-learning-software-market-size-forecast/?utm_source=NT&utm_medium=888

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes
Market Research Intellect
New Jersey (USA)
Tel: +1-650-781-4080

Email: [emailprotected]


Read the original here:
Machine Learning Software Market Increasing Demand with Leading Player, Comprehensive Analysis, Forecast to 2026 - News Times

Are Business Analysts Ready for the New Digital Era? – Grit Daily

We're now in the midst of the Fourth Industrial Revolution, where humans work side by side with machines. This means business users must attain new digital skills to effectively supervise the huge influx of digital workers driven by the rise of robotic process automation (RPA) software robots.

It's even more significant given that companies deploying these bots expect their numbers to increase by as much as 50 percent over the next two years, according to IDC in its Content Intelligence: For the Future of Work survey.

The reason we're seeing more software robots within the enterprise is that transformation initiatives are no longer solely owned by the IT department. Instead, organizations are forming a new Center of Excellence (COE) in which multiple people within the organization are involved in the automation process, capturing C-level visibility and engagement. These COEs are growing apace, with business analysts becoming central to the process of assessing and using RPA tools to facilitate change.

Historically, AI technologies like machine learning have been difficult to incorporate, but next-generation applications now package AI technology in a way that is easy to train and consume in order to build and extend the digital workforce. While teaching a new software robot no longer requires a developer versed in AI or machine learning, business analysts will need to gain more skills to be proficient in process assessment methods and in designing, training, deploying, and managing the new digital workforce.

A concerning 75 percent of global enterprises in IDC's Future of Work report said it was difficult to recruit people with the new digital skills needed for transformation, and 20 percent cited inadequate worker training as a leading challenge. As we enter a new decade, business analysts with these higher skills are therefore more crucial than ever to adequately supervise and train digital workers.

So how can we prepare our workforce for the new digital era? There are two approaches: advocacy and access.

Let's be clear: the digital skills gap I'm speaking of is not the transition we saw at the turn of the century, when people traded their filing cabinets and typewriters for personal computers and Microsoft Word. Over the last decade, enterprises have successfully transitioned to business process automation solutions in virtually every department, from shipping, legal, accounts payable, payroll, human resources, and recruiting to sales and marketing and customer service. Today's workforce is proficient in using software for automation and collaboration.

Despite the advances in automation, however, employees are often still performing manual work that falls outside of these systems, especially when the processes involve unstructured content: documents, images, text, and emails.

Now, with advances in AI, robots can be trained to carry out this manual work through specific pre-packaged advanced skills. By being shown how to perform a task, and having their mistakes pointed out so they can learn from them, the bots effectively gain human-like understanding, such as thinking and reasoning, and become subject matter experts.

This is a key stage of the automation process and why it is important that business analysts understand how AI for content can be applied and incorporated into their procedures.

They will need access to, and knowledge of, tools that can understand a business process and make recommendations. These tools apply AI to processing content, but in a way that does not require an advanced degree in machine learning or other AI technologies. This type of training and digital knowledge is imperative as companies move forward with automation: it frees up more time for employees to concentrate on more complex tasks or important business activities, like improving customer service, and it enables and empowers more people in the organization, not just a few people with tribal knowledge of the systems.

Take the role of compliance officers, for example. The challenge for these employees is sifting through the volume of documents and data associated with Know Your Customer (KYC) and Anti-Money Laundering (AML) checks as part of customer due diligence requirements. Without performing thorough checks, banks are at serious risk of incurring hefty fines running into the millions.

Many banks are using RPA as a first step to automate the collecting of documents and data, but still leave it up to the compliance officer to sift through documents and find the data that is relevant to their decisions.

By allowing robots to read the contracts and pick out relevant data, the compliance officer and the legal team can focus on the higher value work rather than manually inputting data into software or searching for key phrases. Digital skills will help a wide variety of professionals to augment and improve their work productivity from the legal team, HR, accounts payable, claims adjustors and more.
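
Real content-intelligence products use trained models rather than hand-written rules, but the idea of a robot "picking out relevant data" from a contract can be sketched with a toy rule-based extractor (the field names, patterns, and sample document below are hypothetical):

```python
import re

# Illustrative patterns only; a production KYC system would use trained
# document-understanding models, not hand-written regexes.
FIELD_PATTERNS = {
    "account_holder": re.compile(r"Account Holder:\s*(.+)"),
    "date_of_birth": re.compile(r"Date of Birth:\s*(\d{4}-\d{2}-\d{2})"),
    "country": re.compile(r"Country of Residence:\s*(.+)"),
}

def extract_kyc_fields(document):
    """Return KYC-relevant fields found in a document, keyed by field name."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(document)
        if match:
            fields[name] = match.group(1).strip()
    return fields

doc = """Customer Agreement
Account Holder: Jane Q. Example
Date of Birth: 1984-07-19
Country of Residence: Canada
"""
print(extract_kyc_fields(doc))
```

The compliance officer then reviews a handful of extracted fields instead of reading every page, which is the productivity gain the article describes.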

Equipping your workforce with new skills that complement the digital world of business today has mainly been industry driven. Some of the major RPA vendors such as Blue Prism and UiPath offer conferences, seminars and webinars that teach you how to train digital workers with Content Intelligence skills. Likewise, AI-enabling companies also offer the same resources for working with cognitive skills no matter which RPA platform you choose. Other resources are value-added resellers and integrated solution partners who will work with your automation team to deploy digital workers and train them to maximize their efficiency.

As organizations become more comfortable and proficient with training bots with specific cognitive skills, youll soon find internal marketplaces emerge within companies where departments and business units across the entire organization can share and access them.

Universities are also catching up to the speed of business and offering courses to equip the next generation of business and management graduates. Notably, two universities are offering software robotics courses. California State University at Fullerton's Mihaylo College of Business and Economics is offering both graduate and undergraduate courses explaining the applications of RPA to drive efficiencies and improve performance in accounting. As part of a partnership with UiPath, the course features presentations and applied demonstrations from experts in the field, including professionals from the Big Four accounting firms.

From the technology perspective, Carnegie Mellon University's Heinz College Schools of Information Systems & Management is offering an Advances in Robotic Process Information course for its Master's Program in the spring of 2020. Technology leaders will share their experiences and give students access to the latest artificial intelligence and machine learning technology, such as RPA tools from Blue Prism and Content Intelligence skills from ABBYY.

With a new set of digital skills, business analysts will heighten the level of automation in their company so staff can focus on higher-value tasks that require emotional intelligence qualities such as judgement, discernment, and empathy. It's a sophisticated balance between being efficient in their core profession, understanding the organization's needs, and being digitally proficient enough to embrace the future of work. In turn, businesses can expect increased employee productivity and will be on their way to having a better pulse on their overall Digital Intelligence: a complete understanding of how their business operates, with the ability to allocate resources and improve operations based on facts.

Related: Smart Tech is Changing Apartment Living

The piece Are Business Analysts Ready for the New Digital Era? by Bill Galusha first appeared on Innovation & Tech Today.

More here:
Are Business Analysts Ready for the New Digital Era? - Grit Daily

3 important trends in AI/ML you might be missing – VentureBeat

According to a Gartner survey, 48% of global CIOs will deploy AI by the end of 2020. However, despite all the optimism around AI and ML, I continue to be a little skeptical. In the near future, I don't foresee any real inventions that will lead to seismic shifts in productivity and the standard of living. Businesses waiting for major disruption in the AI/ML landscape will miss the smaller developments.

Here are some trends that may be going unnoticed at the moment but will have big long-term impacts:

Gone are the days when on-premises versus cloud was a hot topic of debate for enterprises. Today, even conservative organizations are talking cloud and open source. No wonder cloud platforms are revamping their offerings to include AI/ML services.

With ML solutions becoming more demanding in nature, CPU count and RAM are no longer the only ways to speed up or scale. More algorithms than ever before are being optimized for specific hardware, be it GPUs, TPUs, or Wafer Scale Engines. This shift toward more specialized hardware to solve AI/ML problems will accelerate. Organizations will limit their use of CPUs to solving only the most basic problems. The risk of obsolescence will render generic compute infrastructure for ML/AI unviable. That's reason enough for organizations to switch to cloud platforms.

The increase in specialized chips and hardware will also lead to incremental algorithm improvements that leverage the hardware. While new hardware and chips may allow the use of AI/ML solutions that were previously considered slow or impossible, a lot of the open-source tooling that currently powers generic hardware needs to be rewritten to benefit from the newer chips. Recent examples of algorithm improvements include Sideways, which speeds up DL training by parallelizing the training steps, and Reformer, which optimizes the use of memory and compute power.

I also foresee a gradual shift of focus from data privacy toward the privacy implications of ML models. A lot of emphasis has been placed on how and what data we gather and how we use it. But ML models are not true black boxes: it is possible to infer the model's inputs from its outputs over time, which leads to privacy leakage. Challenges in data and model privacy will force organizations to embrace federated learning solutions. Last year, Google released TensorFlow Privacy, a framework that works on the principle of differential privacy and the addition of noise to obscure inputs. With federated learning, a user's data never leaves their device. These machine learning models are smart enough, and have a small enough memory footprint, to run on smartphones and learn from the data locally.
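
TensorFlow Privacy implements differentially private training; the underlying idea of adding noise to obscure individual inputs can be sketched with the classic Laplace mechanism applied to a simple count query (the epsilon value and data below are illustrative, not from TensorFlow Privacy):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    """Differentially private count: true count plus Laplace(1/epsilon) noise.

    A count query has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise with scale 1/epsilon gives epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(42)
ages = [23, 37, 41, 29, 52, 61, 33, 45]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
print(round(noisy, 2))  # the true count is 4; the output is 4 plus noise
```

Smaller epsilon means more noise and stronger privacy; no single record can be confidently inferred from the released value.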

Usually, the basis for asking for a user's data has been to personalize their individual experience. For example, Google Mail uses the individual user's typing behavior to provide autosuggestions. But what about data or models that would help improve the experience not just for that individual but for a wider group of people? Would people be willing to share their trained model (not their data) to benefit others? There is an interesting business opportunity here: paying users for model parameters that come from training on the data on their local device, and using their local computing power to train models (for example, on their phone while it is relatively idle).
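
Sharing trained parameters rather than raw data is the heart of federated averaging. Here is a minimal sketch, assuming a toy one-parameter linear model and made-up device datasets (real systems like TensorFlow Federated are far more involved):

```python
def local_train(w, data, lr=0.1, steps=20):
    """A few local gradient steps for the model y = w * x under MSE loss."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(global_w, device_datasets, rounds=10):
    """FedAvg: each device trains locally; the server averages parameters.

    Only the parameter w travels to the server; the (x, y) pairs never
    leave the device.
    """
    for _ in range(rounds):
        local_ws = [local_train(global_w, d) for d in device_datasets]
        global_w = sum(local_ws) / len(local_ws)
    return global_w

# Each device's private data follows roughly y = 3x.
devices = [
    [(1.0, 3.1), (2.0, 6.0)],
    [(1.5, 4.4), (3.0, 9.2)],
    [(0.5, 1.4), (2.5, 7.6)],
]
w = federated_average(0.0, devices)
print(round(w, 2))  # close to 3.0, learned without pooling any raw data
```

The "paying users for model parameters" idea from the paragraph above would amount to compensating each device for the local_ws it contributes.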

Currently, organizations are struggling to productionize models for scalability and reliability. The people writing the models are not necessarily experts in how to deploy them with model safety, security, and performance in mind. Once machine learning models become an integral part of mainstream and critical applications, this will inevitably lead to attacks on models similar to the denial-of-service attacks mainstream apps currently face. We've already seen some low-tech examples of what this could look like: making a Tesla speed up instead of slowing down, switch lanes, stop abruptly, or turn on its wipers without the proper triggers. Imagine the impact such attacks could have on financial systems, healthcare equipment, and other domains that rely heavily on AI/ML.

Currently, adversarial attacks are mostly limited to academia, where they are used to better understand the implications of models. But in the not-too-distant future, attacks on models will be for profit, driven by competitors who want to show they are somehow better, or by malicious hackers who may hold you to ransom. For example, new cybersecurity tools today rely on AI/ML to identify threats like network intrusions and viruses. What if I am able to trigger fake threats? What would be the costs associated with distinguishing real alerts from fake ones?
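
The evasion scenario can be illustrated with a toy FGSM-style attack on a linear intrusion scorer (the weights, features, threshold, and epsilon below are all made up for illustration; real detectors and real attacks are far more complex):

```python
def score(weights, features):
    """Linear threat score: higher means 'more likely an intrusion'."""
    return sum(w * x for w, x in zip(weights, features))

def adversarial_perturbation(weights, features, epsilon):
    """FGSM-style attack on a linear model: nudge each feature by epsilon
    in the direction that lowers the score, hiding the threat."""
    return [x - epsilon * (1 if w > 0 else -1)
            for w, x in zip(weights, features)]

# Toy intrusion detector with illustrative learned weights.
weights = [0.8, -0.3, 0.5]
threshold = 0.5
malicious = [1.0, 0.2, 0.9]  # genuinely malicious traffic, flagged

original = score(weights, malicious)
evaded = score(weights, adversarial_perturbation(weights, malicious, epsilon=0.5))
print(original > threshold, evaded > threshold)  # True False
```

A small, targeted change to each input feature slips the same traffic under the threshold, which is exactly why the model-verification market discussed below is emerging.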

To counter such threats, organizations need to put more emphasis on model verification to ensure robustness. Some organizations are already using adversarial networks to test deep neural networks. Today, we hire external experts to audit network security, physical security, etc. Similarly, we will see the emergence of a new market for model testing and model security experts, who will test, certify, and maybe take on some liability of model failure.

Organizations aspiring to drive value through their AI investments need to revisit the implications for their data pipelines. The trends I've outlined above underscore the need for organizations to implement strong governance around their AI/ML solutions in production. It's too risky to assume your AI/ML models are robust, especially when they're left to the mercy of platform providers. The need of the hour, therefore, is in-house experts who understand why models work or don't work. And that's one trend that's here to stay.

Sudharsan Rangarajan is Vice President of Engineering at Publicis Sapient.

Original post:
3 important trends in AI/ML you might be missing - VentureBeat