Top 3 Price Prediction Bitcoin, Ethereum, Ripple: "Sell the rallies" key theme ahead? – FXStreet

The world's no. 1 digital coin, Bitcoin, is seen fading its tepid recovery from the 2.5-week low of $7,007 as we head towards the weekly close. Ethereum, the second most traded cryptocurrency, and Ripple both cling to minor recovery gains so far this Sunday, but further upside lacks momentum as sellers continue to lurk. The total market capitalization of the top 20 cryptocurrencies now stands at $195.25 billion, as cited by CoinMarketCap.

The top three coins look set to resume last week's downtrend into the fresh week ahead, with FXStreet's Confluence Detector tool helping to highlight key supports and resistances for better trading decisions.

As explained here, Bitcoin failed to sustain its recovery near the $7,200 mark, as stiff resistance is aligned there at the confluence of the previous high on the 4-hour chart and the 23.6% Fibonacci retracement (Fib) of the weekly price action.

However, if the bulls manage to take out that barrier, the next resistance awaits near the $7,265 region, the 23.6% Fib of the monthly price action. A break above it would expose the 10-day Simple Moving Average (DMA) at $7,332.
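
For readers unfamiliar with the indicator, Fibonacci retracement levels are just fixed fractions of a prior swing; measured off a decline, they act as resistances above the low, which is how the 23.6% weekly and monthly levels cap price here. A minimal Python sketch, using made-up swing points rather than FXStreet's actual inputs:

# Minimal sketch of how Fibonacci retracement levels are derived. The
# swing points below are illustrative placeholders, not FXStreet's
# actual weekly or monthly inputs.

def fib_retracements(swing_high, swing_low, downtrend=True):
    """Standard retracement levels of the range swing_high - swing_low.

    After a decline (downtrend=True) each level is a resistance sitting
    ratio * range above the low; after a rally, a support below the high."""
    ratios = (0.236, 0.382, 0.5, 0.618)
    move = swing_high - swing_low
    base, sign = (swing_low, 1) if downtrend else (swing_high, -1)
    return {f"{r:.1%}": round(base + sign * r * move, 1) for r in ratios}

print(fib_retracements(swing_high=7900.0, swing_low=7007.0))
# {'23.6%': 7217.7, '38.2%': 7348.1, '50.0%': 7453.5, '61.8%': 7558.9}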

Given that the bears have returned, a test of the 2.5-week low at $7,007 is back in sight. Note that the multi-week low also intersects with the Pivot Point 1 Week S1 and the lower 1-day Bollinger Band, making it a critical demand zone. Should this support be breached, the downside momentum is likely to accelerate towards $6,750, the Pivot Point 1 Week S2.
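
The pivot point labels follow the classic floor-trader formulas, which are straightforward to reproduce. A short sketch, with a hypothetical prior-week candle standing in for the real data:

# Classic floor-trader pivot points, the formula family behind labels
# like "Pivot Point 1 Week S1/S2". Inputs are the prior period's high,
# low and close; the candle below is a placeholder, not the real week.

def pivot_points(high, low, close):
    p = (high + low + close) / 3       # central pivot
    return {
        "P": p,
        "R1": 2 * p - low,             # first resistance
        "S1": 2 * p - high,            # first support
        "R2": p + (high - low),        # second resistance
        "S2": p - (high - low),        # second support
    }

print(pivot_points(high=7980.0, low=7070.0, close=7200.0))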

Ethereum has pared its recovery gains, as a pack of resistance levels just ahead of the 144 handle restricts every upside attempt. The resistance zone is a confluence of the 38.2% Fib 1W, the previous high on the 4-hour chart and the Pivot Point 1D R1.

A sustained break above that zone would intensify the recovery momentum towards the next resistance aligned near 147.50, where the 23.6% Fib 1M and the 61.8% Fib 1W coincide.

To the downside, the earlier support around 143, the intersection of the 38.2% Fib 1D and the previous low on the 15-minute candles, has already been breached, opening the door for further declines towards the 140 handle, the previous week's low.

Ripple is seen consolidating around the 0.2170 level, as the immediate upside remains capped near the 0.2180 region (38.2% Fib 1D / 5-HMA).

On a break above that level, the coin is likely to test the day's high at 0.2197, beyond which the 0.2220-0.2225 supply zone will grab buyers' attention. That zone is the key confluence of the 200-HMA, the 38.2% Fib 1W and the 100 4-hour SMA.

On the flip side, the next support is seen directly at 0.2157, the previous week's low. Sellers are likely to aim for the minor support of the Pivot Point 1W S1 at 0.2135 if the bearish momentum picks up pace.

See all the cryptocurrency technical levels.


Bitcoin Dips Below $7,000, Is a Sharp Correction to the $6,000s Unavoidable? – U.Today

On major crypto exchanges like BitMEX, the bitcoin price briefly dipped below $7,000 for the first time since November 28.

Right before the daily close on December 14, the bitcoin price hit $6,994 on BitMEX and dropped to as low as $7,009 on Binance.

Prior to the drop, when the bitcoin price was hovering at around $7,200, technical analysts anticipated a rebound to key resistance levels.

The drop of the bitcoin price below the $7,000 mark has left the dominant cryptocurrency vulnerable to a deeper pullback in the short term.

Earlier this week, cryptocurrency trader Josh Rager said that while bitcoin is likely to stagnate in the near future, there are three scenarios in which the downside movement could play out.

Rather than pushing up one last time before moving down to the $6,000s, the bitcoin price essentially dropped straight down to the highly tested support level.

The low volume in the crypto exchange market, as well as the double test of the mid-$6,000 support level, indicates that a large move down is likely imminent.

Throughout the week, John Bollinger, the legendary creator of the Bollinger Bands, said that the technical indicator suggests a big move in the crypto market is coming.

"Most cryptocurrencies are at or near Bollinger Band Squeeze levels. Time to pay attention," he said.
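
The bands Bollinger refers to are easy to compute. A minimal pandas sketch, assuming the conventional defaults of a 20-period window and a 2-standard-deviation width; the "squeeze" is read off the bandwidth:

import pandas as pd

def bollinger(close: pd.Series, window: int = 20, k: float = 2.0) -> pd.DataFrame:
    """Middle band = rolling SMA; upper/lower = +/- k rolling std devs."""
    mid = close.rolling(window).mean()
    std = close.rolling(window).std()
    upper, lower = mid + k * std, mid - k * std
    bandwidth = (upper - lower) / mid   # the squeeze gauge: small = tight
    return pd.DataFrame({"mid": mid, "upper": upper,
                         "lower": lower, "bandwidth": bandwidth})

# One way to flag a squeeze: bandwidth at (or near) a long-run low, e.g.
# bands = bollinger(prices)
# squeeze = bands["bandwidth"] <= bands["bandwidth"].rolling(120).min()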

Whether that move is a relief rally to the upside to test stacked shorts on cryptocurrency exchanges or a continuous move down to lower level supports was uncertain.

If the bitcoin price settles below $7,000, the big move that has been anticipated by many traders and technical indicators is highly likely to be a sharp pullback.

DonAlt, for instance, said that he expects to see bitcoin in the $6,200 to $6,400 range in the short term, a range that has seen plenty of trading activity on the larger time frames.


Bitcoin Cash ABC, EOS and Ethereum Daily Tech Analysis 15/12/19 – Yahoo Finance

Bitcoin Cash ABC

Bitcoin Cash ABC slid by 2.22% on Saturday. Reversing a 1.88% gain from Friday, Bitcoin Cash ABC ended the day at $205.92.

A bullish start to the day saw Bitcoin Cash ABC rally to an early morning intraday high $212.75 before hitting reverse.

Falling short of the first major resistance level at $213.94, Bitcoin Cash ABC slid to a late intraday low $205.09.

The sell-off saw Bitcoin Cash ABC fall through the first major support level at $207.01.

At the time of writing, Bitcoin Cash ABC was down by 0.40% to $205.09, easing back from Saturday's close of $205.92.

Bitcoin Cash ABC left the major support and resistance levels untested early on.

A move through to $208 levels would support a run at the first major resistance level at $210.75.

Support from the broader market would be needed, however, for Bitcoin Cash ABC to break back through to $210 levels.

Barring a broad-based crypto rally, the first major resistance level would likely pin Bitcoin Cash ABC back on the day.

Failure to move through to $208 levels could see Bitcoin Cash ABC struggle through the day.

A fall back through to $204 levels would bring the first major support level at $203.09 into play.

Barring a crypto meltdown, however, Bitcoin Cash ABC should steer clear of sub-$200 levels. The second major support level at $200.26 should limit any downside.

Major Support Level: $203.09

Major Resistance Level: $210.75

23.6% FIB Retracement Level: $269

38% FIB Retracement Level: $316

62% FIB Retracement Level: $393

EOS

EOS slid by 2.04% on Saturday. Reversing a 1.4% gain from Friday, EOS ended the day at $2.5783.

A bullish start to the day saw EOS rise to an early morning intraday high $2.6389 before hitting reverse.

Falling short of the first major resistance level at $2.6628, EOS slid to a late afternoon intraday low $2.5570.

The reversal saw EOS fall through the first major support level at $2.5908 before finding support.

In spite of a partial recovery late on, however, EOS failed to break back through the first major support level.

At the time of writing, EOS was down by 0.43% to $2.5672. A bearish start to the day saw EOS slide from an early morning high $2.5791 to an early morning low $2.5464.

In spite of the early moves, however, EOS left the major support and resistance levels untested.


EOS would need to move through to $2.59 levels to support a run at the first major resistance level at $2.6256.

The broader market would need to provide support, however, for EOS to break back through to $2.60 levels.

Barring a broad-based crypto rebound, resistance at $2.60 would likely pin EOS back on the day.

Failure to move through to $2.59 levels could see EOS take another slide on the day.

A fall back through the early morning low $2.5464 would bring the first major support level at $2.5437 into play.

Barring a crypto meltdown, however, EOS should steer well clear of the second major support level at $2.5094.

Major Support Level: $2.5437

Major Resistance Level: $2.6256

23.6% FIB Retracement Level: $6.62

38% FIB Retracement Level: $9.76

62% FIB Retracement Level: $14.82

Ethereum

Ethereum fell by 2.05% on Saturday. Following a flat Friday, Ethereum ended the day at $141.73.

Tracking the broader market, Ethereum rose to an early morning intraday high $145.03 before hitting reverse.

Falling short of the first major resistance level at $145.63, Ethereum slid to a late afternoon intraday low $141.11.

Ethereum fell through the first major support level at $143.26 and the second major support level at $141.83.

A recovery to $142 levels was brief, with Ethereum falling back through the second major support level in the final hour.

At the time of writing, Ethereum was down by 0.54% to $140.97. A bearish start to the day saw Ethereum slide from an early morning high $141.90 to a low $139.80.

Steering clear of the major resistance levels, Ethereum fell through the first major support level at $140.22 before finding support.

Ethereum would need to move through to $142.60 levels to support a run at the first major resistance level at $144.14.

Support from the broader market will be needed, however, for Ethereum to break back through to $143 levels.

Barring a broad-based crypto rally on the day, the first major resistance level would likely pin Ethereum back from $145 levels.

Failure to move through to $142.60 levels could see Ethereum take another hit on the day.

A fall back through the first major support level at $140.22 would bring the second major support level at $138.70 into play.

Barring an extended sell-off, however, Ethereum should steer clear of sub-$139 levels on the day.

Major Support Level: $140.22

Major Resistance Level: $144.14

23.6% FIB Retracement Level: $257

38.2% FIB Retracement Level: $367

62% FIB Retracement Level: $543

Please let us know what you think in the comments below.

Thanks, Bob

This article was originally posted on FX Empire


Bitcoin: 5 Arrogant Myths That Just Won't Die (But Should!) – CCN.com

Bitcoin's creation story borders on the mythological: a life-changing technology created by the pseudonymous genius Satoshi Nakamoto, who then vanished from sight.

With that in mind, is it any surprise that much of the bitcoin hopium continues to be fuelled by well-meaning fantasy?

Here are five Bitcoin myths that just won't die…

Many people believe bitcoin will trigger a redistribution of wealth the likes of which the world has never seen. But maybe the reason the world has never seen such a thing is that there's no good reason for it to happen.

Bitcoiners would like to believe the reason much of the world still lives in poverty is a simple lack of the technical apparatus required to reverse it. The implication here is that we're just as noble and decent as we need to be, but don't have the technology to put our decency into action.

The Pareto Distribution describes the tendency for natural inequality among the organisms on our planet. Just 20% of the trees in a forest gobble up 80% of the available nutrients from the surrounding earth. Likewise, 20% of the pea-pods in a garden produce 80% of the peas. The most desirable 20% of the men sleep with 80% of the women.

Numerous studies have shown that a majority of the world's bitcoin is already concentrated in the hands of the few. This should not be a surprise.

The price of Bitcoin has become the gravitational center for the whole of what we call blockchain or cryptocurrency. Yet, if bitcoin really is to be the nature-correcting mechanism its proponents claim, its price relative to the U.S. dollar should be irrelevant.

The reason so much focus is given to the price of bitcoin is that, in truth, that's all anyone is really interested in. Bitcoin's biggest use case to date has been its potential to increase (or decrease) the dollar holdings of its owners.

Now, the myth that price matters has become a reality. Enough capital has been staked on bitcoin's future success that one false move could topple the entire enterprise. If bitcoin's price retraces much further, the privilege of producing it will fall into even fewer hands.

Which leads to the next point…

Most independent bitcoin miners were priced out of the game years ago. A single province in China is now home to 54% of the bitcoin mining hashpower. Only the largest mining firms were able to eat the losses incurred during bitcoin's 2018 decline without going under.

Bitcoin was supposed to correct for human nature, but already we find ourselves in the humble position of hoping our Chinese overlords are benevolent.

Even the people don't believe bitcoin is for the people. Look at how many column inches are given daily to the fabled institutions and their Wall Street billionaires, who give us endless hope in the form of optimistic prognostications.

The technology underlying bitcoin truly is new, exciting and fraught with possibility. Its decentralized and self-governing nature gives it a robustness that other large digital networks don't have.

Crypto personality Nic Carter wrote in "A most peaceful revolution" that crypto cannot be stopped:

"Cryptocurrency, despite the earnest protests of some of its lily-livered adherents, remains manifestly independent and ultimately hostile to the State. It cannot be regulated, captured, or rendered compliant."

Carter later qualified the comment, adding:

"cannot being a teleological statement, not a statement of possibility. of course, it's possible that it gets neutered. but if it does, it loses its essence, its (sic) raison d'être."

In other words: if it fails, then it wasn't bitcoin to begin with.

The same ideological thinking was present at the birth of the internet. Given how that experiment turned out, can we agree that the essence of the internet is lost? Did it exist to begin with?

Ultimately, one need only look within China's borders to find a way in which bitcoin can be neutered. The Chinese government is unlikely to hasten the demise of its own sovereign currency any time soon. To assume they won't interfere in bitcoin, or haven't already, would be folly.

New entrants to the cryptocurrency space often adopt the same helplessly saccharine tone about its potential to change the world.

Be careful what you wish for. The world is already changing thanks to the arrival of bitcoin, and it's not overly apparent that it's for the better.

Bitcoiners often proclaim that they don't want to take part in funding illegal wars by associating with the American dollar. Is that to say that funding human trafficking and drug-running is better?

No doubt most crime is still committed using cold, hard cash. But in the search for more operating funds, would you suggest that shady government entities haven't already used cryptocurrency to boost their slush funds? Why wouldn't they? Why couldn't they? Maybe we've been funding wars all along…

Privacy isn't just something we should want for ourselves; it's something we shouldn't want for those in power. Whether your favorite billionaire villains happen to be the Rothschilds or the Koch brothers, they can use cryptocurrency too, and they'll be better at it than you.

This article was edited by Sam Bourgi.

Last modified: December 13, 2019 18:44 UTC


Should Coinbase fail, it would be the best thing for Bitcoin's price: Trace Mayer – AMBCrypto

Trace Mayer, the advocate for the Proof of Keys [PoK] movement, appeared on Peter McCormack's recent podcast to talk about the importance of investors holding their private keys. Mayer noted that his first tweet about the PoK movement was on December 9, 2018, and that a few days later Coinbase informed its users it was testing its cold storage, for which it moved around 850,000 bitcoins. This was the first time Coinbase had made an announcement like that, and Mayer and McCormack, unclear about the intention behind the step, speculated it was their proof of reserves.

With over 10 exchanges hacked in 2019 alone, Mayer noted that none of the exchanges is too big to fail. Mayer said:

"Like if Coinbase failed, that would probably be one of the best things that could possibly happen for the Bitcoin price."

Mayer claimed that hacks like QuadrigaCX or Mt. Gox helped Bitcoin's price. To illustrate, the PoK advocate took the example of Coinbase. Mayer explained that if Coinbase had 20 million customers, each assumed to own 0.1 of a bitcoin, that would amount to 2 million bitcoins. In a hypothetical scenario where Coinbase shuts shop and its customers lose their 2 million bitcoins, it would lead to a scarcity of an already scarce commodity. Thus, going by the logic of supply and demand, such a loss could in turn drive up the price of Bitcoin for the remaining holders.
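
The arithmetic behind Mayer's hypothetical is trivial to check. A toy calculation, with the ~18 million circulating supply being an approximation for late 2019 rather than a figure from the podcast:

# Toy restatement of Mayer's back-of-the-envelope argument. The customer
# count and average holding are the podcast's hypothetical; the ~18M
# circulating supply is an approximation for late 2019, not his figure.
customers = 20_000_000
avg_holding_btc = 0.1
circulating_supply = 18_000_000

lost = customers * avg_holding_btc                  # 2,000,000 BTC
print(f"coins effectively destroyed: {lost:,.0f}")
print(f"share of supply removed: {lost / circulating_supply:.1%}")  # ~11.1%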

Coinbase saved itself from one such incident in 2019, when it was targeted by malware capable of taking over someone's machine. However, the exchange acted expeditiously and discovered that it was part of a sophisticated, highly targeted, thought-out attack that used spear-phishing/social engineering tactics and two Firefox 0-day vulnerabilities. The security team was able to detect and block the attack before any harm was done.

With the above example, Mayer underlined the vulnerabilities an exchange is exposed to and how PoK could bring financial discipline.


D-Wave Announces Promotion of Dr. Alan Baratz to CEO – GlobeNewswire

BURNABY, British Columbia, Dec. 09, 2019 (GLOBE NEWSWIRE) -- D-Wave Systems Inc., the leader in quantum computing systems, software, and services, today announced that Dr. Alan Baratz will assume the role of chief executive officer (CEO), effective January 1, 2020. Baratz joined D-Wave in 2017 and currently serves as the chief product officer and executive vice president of research and development for D-Wave. He takes over from the retiring CEO, Vern Brownell.

Baratz's promotion to CEO follows the launch of Leap, D-Wave's quantum cloud service, in October 2018, and comes in advance of the mid-2020 launch of the company's next-generation quantum system, Advantage.

Baratz has driven the development, delivery, and support of all of D-Wave's products, technologies, and applications in recent years. He has over 25 years of experience in product development and bringing new products to market at leading technology companies and software startups. As the first president of JavaSoft at Sun Microsystems, Baratz oversaw the growth and adoption of the Java platform from its infancy to a robust platform supporting mission-critical applications in nearly 80 percent of Fortune 1000 companies. He has also held executive positions at Symphony, Avaya, Cisco, and IBM. He served as CEO and president of Versata, Zaplet, and NeoPath Networks, and as a managing director at Warburg Pincus LLC. Baratz holds a doctorate in computer science from the Massachusetts Institute of Technology.

"I joined D-Wave to bring quantum computing technology to the enterprise. Now more than ever, I am convinced that making practical quantum computing available to forward-thinking businesses and emerging quantum developers through the cloud is central to jumpstarting the broad development of in-production quantum applications," said Baratz, chief product officer and head of research and development. "As I assume the CEO role, I'll focus on expanding the early beachheads for quantum computing that exist in manufacturing, mobility, new materials creation, and financial services into real value for our customers. I am honored to take over the leadership of the company and work together with the D-Wave team as we begin to deliver real business results with our quantum computers."

The company also announced that CEO Vern Brownell has decided to retire at the end of the year in order to spend more time at his home in Boston with his family. Baratz will become CEO at that time. During Brownell's tenure, D-Wave developed four generations of commercial quantum computers, raised over $170 million in venture funding, and secured its first customers, including Lockheed Martin, Google and NASA, and Los Alamos National Laboratory. Brownell will continue to serve as an advisor to the board.

"There are very few moments in your life when you have the opportunity to build an entirely new market. My 10 years at D-Wave have been rich with breakthroughs, like selling the first commercial quantum computer. I am humbled to have been a part of building the quantum ecosystem," said Brownell, retiring D-Wave CEO. "Alan has shown tremendous leadership in our technology and product development efforts, and I am working with him to transition leadership of the entire business. This is an exciting time for quantum computing and an exciting time for D-Wave. I can't imagine a better leader than Alan at the helm for the next phase of bringing practical quantum computing to enterprises around the world."

"With cloud access and the development of more than 200 early applications, quantum computing is experiencing explosive growth. We are excited to recognize Alan's work in bringing Leap to market and building the next-generation Advantage system. And as D-Wave expands their Quantum-as-a-Service offerings, Alan's expertise with growing developer communities and delivering SaaS solutions to enterprises will be critical for D-Wave's success in the market," said Paul Lee, D-Wave board chair. "I want to thank Vern for his 10 years of contributions to D-Wave. He was central in our ability to be the first to commercialize quantum computers and has made important contributions not only to D-Wave, but also in building the quantum ecosystem."

About D-Wave Systems Inc.

D-Wave is the leader in the development and delivery of quantum computing systems, software, and services and is the world's first commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing for the world. We do this by delivering customer value with practical quantum applications for problems as diverse as logistics, artificial intelligence, materials sciences, drug discovery, cybersecurity, fault detection, and financial modeling. D-Wave's systems are being used by some of the world's most advanced organizations, including Volkswagen, DENSO, Lockheed Martin, USRA, USC, Los Alamos National Laboratory, and Oak Ridge National Laboratory. With headquarters near Vancouver, Canada, D-Wave's US operations are based in Palo Alto, CA and Bellevue, WA. D-Wave has a blue-chip investor base including PSP Investments, Goldman Sachs, BDC Capital, DFJ, In-Q-Tel, PenderFund Capital, 180 Degree Capital Corp., and Kensington Capital Partners Limited. For more information, visit: http://www.dwavesys.com.

Contact
D-Wave Systems Inc.
dwave@launchsquad.com


There's No Such Thing As The Machine Learning Platform – Forbes

In the past few years, you might have noticed the increasing pace at which vendors are rolling out platforms that serve the AI ecosystem, namely addressing data science and machine learning (ML) needs. The Data Science Platform and Machine Learning Platform are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others that manage AI projects and initiatives. If you're a major technology vendor and you don't have some sort of big play in the AI space, then you risk rapidly becoming irrelevant. But what exactly are these platforms, and why is there such an intense market share grab going on?

The core of this insight is the realization that ML and data science projects are nothing like typical application or hardware development projects. Whereas in the past hardware and software development focused on the functionality of systems or applications, data science and ML projects are really about managing data, continuously evolving the learning gleaned from data, and evolving data models through constant iteration. Typical development processes and platforms simply don't work from a data-centric perspective.

It should be no surprise then that technology vendors of all sizes are focused on developing platforms that data scientists and ML project managers will depend on to develop, run, operate, and manage their ongoing data models for the enterprise. To these vendors, the ML platform of the future is like the operating system, cloud environment, or mobile development platform of the past and present. If you can dominate market share for data science/ML platforms, you will reap rewards for decades to come. As a result, everyone with a dog in this fight is battling to own a piece of this market.

However, what does a machine learning platform look like? How is it the same or different from a data science platform? What are the core requirements for ML platforms, and how do they differ from more general data science platforms? Who are the users of these platforms, and what do they really want? Let's dive deeper.

What is the Data Science Platform?

Data scientists are tasked with wrangling useful information from a sea of data and translating business and operational informational needs into the language of data and math. Data scientists need to be masters of statistics, probability, mathematics, and algorithms that help to glean useful insights from huge piles of information. A data scientist creates a data hypothesis, runs tests and analysis on the data, and then translates the results so that someone else in the organization can easily view and understand them. So it follows that a pure data science platform would meet the needs of helping craft data models, determining the best fit of information to a hypothesis, testing that hypothesis, facilitating collaboration amongst teams of data scientists, and helping to manage and evolve the data model as information continues to change.

Furthermore, data scientists don't focus their work in code-centric Integrated Development Environments (IDEs), but rather in notebooks. First popularized by academically oriented, math-centric platforms like Mathematica and Matlab, and now prominent in the Python, R, and SAS communities, notebooks are used to document data research and simplify reproducibility of results by allowing the notebook to run on different source data. The best notebooks are shared, collaborative environments where groups of data scientists can work together and iterate models over constantly evolving data sets. While notebooks don't make great environments for developing code, they make great environments to collaborate, explore, and visualize data. Indeed, the best notebooks are used by data scientists to quickly explore large data sets, assuming sufficient access to clean data.

However, data scientists can't perform their jobs effectively without access to large volumes of clean data. Extracting, cleaning, and moving data is not really the role of a data scientist, but rather that of a data engineer. Data engineers are challenged with taking data from a wide range of systems, in structured and unstructured formats, data which is usually not clean, with missing fields, mismatched data types, and other data-related issues. In this way, a data engineer is an engineer who designs, builds, and arranges data. Good data science platforms also enable data scientists to easily leverage compute power as their needs grow. Instead of copying data sets to a local computer to work on them, platforms allow data scientists to easily access compute power and data sets with minimal hassle. A data science platform is challenged with providing these data engineering capabilities as well. As such, a practical data science platform will have elements of data science capabilities and necessary data engineering functionality.

What is the Machine Learning Platform?

We just spent several paragraphs talking about data science platforms without once mentioning AI or ML. Of course, the overlap is the use of data science techniques and machine learning algorithms applied to large sets of data for the development of machine learning models. The tools that data scientists use on a daily basis have significant overlap with the tools used by ML-focused scientists and engineers. However, these tools aren't the same, because the needs of ML scientists and engineers are not the same as those of more general data scientists and engineers.

Rather than just focusing on notebooks and the ecosystem to manage and work collaboratively with others on those notebooks, those tasked with managing ML projects need access to a range of ML-specific algorithms, libraries, and infrastructure to train those algorithms over large and evolving datasets. An ideal ML platform helps ML engineers, data scientists, and engineers discover which machine learning approaches work best, how to tune hyperparameters, deploy compute-intensive ML training across on-premise or cloud-based CPU, GPU, and/or TPU clusters, and provides an ecosystem for managing and monitoring both unsupervised and supervised modes of training.

Clearly a collaborative, interactive, visual system for developing and managing ML models in a data science platform is necessary, but it's not sufficient for an ML platform. As hinted above, one of the more challenging parts of making ML systems work is the setting and tuning of hyperparameters. The whole concept of a machine learning model is that it requires various parameters to be learned from the data. Basically, what machine learning actually learns are the parameters of the data, fitting new data to that learned model. Hyperparameters are configurable values that are set prior to training an ML model and that can't be learned from data. These hyperparameters control factors such as model complexity, speed of learning, and more. Different ML algorithms require different hyperparameters, and some don't need any at all. ML platforms help with the discovery, setting, and management of hyperparameters, among other things, including algorithm selection and comparison, that non-ML-specific data science platforms don't provide.
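
To make the parameter/hyperparameter distinction concrete, here is a small scikit-learn sketch. Grid search is only one of several tuning strategies a platform might automate, and the grid values below are arbitrary illustrations:

# Parameters vs. hyperparameters in miniature: the SVM learns its
# support vectors and coefficients from data in fit(), while C and
# gamma must be chosen beforehand. Grid search is one simple tuning
# strategy an ML platform might automate.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    SVC(),                                 # parameters learned during fit
    param_grid={"C": [0.1, 1, 10],         # hyperparameter: regularization
                "gamma": [0.01, 0.1, 1]},  # hyperparameter: kernel width
    cv=5,                                  # 5-fold cross-validation per combo
)
search.fit(X, y)
print(search.best_params_, search.best_score_)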

The different needs of big data, ML engineering, model management, and operationalization

At the end of the day, ML project managers simply want tools to make their jobs more efficient and effective. But not all ML projects are the same. Some are focused on conversational systems, while others are focused on recognition or predictive analytics. Yet others are focused on reinforcement learning or autonomous systems. Furthermore, these models can be deployed (or operationalized) in various different ways. Some models might reside in the cloud or on on-premise servers, while others are deployed to edge devices or run in offline batch modes. These differences in ML application, deployment, and needs between data scientists, engineers, and ML developers make the concept of a single ML platform not particularly feasible. It would be a jack of all trades and master of none.

As such, we see four different platforms emerging: one focused on the needs of data scientists and model builders, another focused on big data management and data engineering, yet another focused on model scaffolding and building systems to interact with models, and a fourth focused on managing the model lifecycle (ML Ops). The winners will focus on building out capabilities for each of these parts.

The Four Environments of AI (Source: Cognilytica)

The winners in the data science platform race will be the ones that simplify ML model creation, training, and iteration. They will make it quick and easy for companies to move from "dumb" unintelligent systems to ones that leverage the power of ML to solve problems that previously could not be addressed by machines. Data science platforms that don't enable ML capabilities will be relegated to non-ML data science tasks. Likewise, those big data platforms that inherently enable data engineering capabilities will be winners. Similarly, application development tools will need to treat machine learning models as first-class participants in their lifecycle just like any other form of technology asset. Finally, the space of ML operations (ML Ops) is just now emerging and will no doubt be big news in the next few years.

When a vendor tells you they have an AI or ML platform, the right response is to ask: "Which one?" As you can see, there isn't just one ML platform, but rather different ones that serve very different needs. Make sure you don't get caught up in the marketing hype of some of these vendors, and compare what they say they have with what they actually have.


Israelis develop ‘self-healing’ cars powered by machine learning and AI – The Jerusalem Post

Even before autonomous vehicles become a regular sight on our streets, modern cars are quickly coming to resemble sophisticated computers on wheels.

Increasingly connected vehicles come with as many as 150 million lines of code, far exceeding the 145,000 lines of code required to land Apollo 11 on the Moon in 1969. Self-driving cars could require up to one billion lines of code.

For manufacturers, passengers and repair shops alike, vehicles running on software rather than just machinery represent an unprecedented world of highly complex mobility. Checking the engine, tires and brakes to find a fault will certainly no longer suffice.

Seeking to build trust in the new generation of automotive innovation, Tel Aviv-based start-up Aurora Labs has developed software for what it calls the "self-healing car": a proactive and remote system to detect and fix potential vehicle malfunctions, and to update and validate in-car software without any downtime.

(From left) Aurora Labs co-founder & CEO Zohar Fox; co-founder & COO Ori Lederman; and EVP Marketing Roger Ordman (Credit: Aurora Labs)

"The automotive industry is facing its biggest revolution to date," Aurora Labs co-founder and chief operating officer Ori Lederman told The Jerusalem Post. "The most critical aspect of all that sophistication and software coming into the car is whether you can trust it, even before you hand over complete autonomy to the car. It poses a lot of challenges to car-makers."

New challenges, Lederman added, include whether software problems can be detected after selling the vehicle, whether problems can be solved safely and securely, and whether defects can be fixed without interrupting car use. In 2018, some eight million vehicles were recalled in the United States due to software-based defects alone.

"The human body can detect when something is not quite right before you pass out," said executive vice president of marketing Roger Ordman. "The auto-immune system indicates something is wrong and what can be done to fix it: raise your temperature or white blood count. Sometimes the body can do a self-fix, and sometimes that's not enough and needs an external intervention.

"Our technology has the same kind of approach: detecting if something has started to go wrong before it causes a catastrophic failure, indicating exactly where that problem is, doing something to fix it, and keeping it running smoothly."

The company's Line-Of-Code Behavior technology, powered by machine learning and artificial intelligence, creates a deep understanding of what software is installed on over 100 vehicle Engine Control Units (ECUs), and of the relationships between them. In addition to detecting software faults, the technology can enable remote, over-the-air software updates without any downtime.

Similar to the silent updates automatically implemented by smartphone applications, Ordman added, car manufacturers will be able to update and continuously improve the software running on connected vehicles. Of course, manufacturers will be required to meet stringent regulations, developed by bodies including the UNECE, concerning cybersecurity and over-the-air updates.

"When we joined forces and started developing the idea, we knew our technology was applicable to any connected, smart device or Internet of Things device," said Lederman. "The first vertical we wanted to start with is the one that needs us the most, and the biggest market.

"The need for detecting, managing, recovering and being transparent about software is by far the largest need in the automotive industry as they move from mechanical parts to virtual systems run by lines of code."

Rather than requiring mass recalls, Aurora Labs' self-healing software will be able to apply short-term fixes to ensure continued functionality and predictability, and subsequently implement comprehensive upgrades to the vehicle's systems.

The company, which has raised $11.5 million in fund-raising rounds since it was founded in 2016 by Lederman and CEO Zohar Fox, is currently working to implement its technology with some of the world's leading automotive industry players, including major car-makers in Germany, the United States, Korea and Japan.

The fast-growing start-up also has offices in Michigan and the North Macedonian capital of Skopje, and owns a subsidiary near Munich.

"Customers ought to start being aware of how sophisticated their cars are," said Lederman. "When they buy a new car, they should want to ask the dealership whether it has the ability to detect, fix and recover, so they don't need to go to the dealership. It's something they would want to have."

Just as the safety performance of cars in Europe is ranked according to the five-star NCAP standard, Ordman believes there should be an additional star for software safety and security.

"There should be as many self-healing systems in place as possible to enable that, when inevitably something does go wrong, there are systems in place to detect and fix them and maintain uptime," said Ordman.

"Does the software running in the vehicle have the right cybersecurity in place? Does it have the right recovery technologies in place? Can it continuously and safely improve over time?

"With these functionalities, you're not just dealing with the five stars of the physical but adding another star for the software safety and security. It is about giving the trust to the consumer: I'm getting a car that will safeguard me and my family as I move forward."


"The challenge in Deep Learning is to sustain the current pace of innovation," explains Ivan Vasilev, machine learning engineer – Packt Hub

If we talk about recent breakthroughs in the software community, machine learning and deep learning are major contenders: the usage, adoption, and experimentation of deep learning have increased exponentially. Especially in the areas of computer vision, speech, and natural language processing and understanding, deep learning has made unprecedented progress. GANs, variational autoencoders and deep reinforcement learning are also creating impressive AI results.

To know more about the progress of deep learning, we interviewed Ivan Vasilev, a machine learning engineer and researcher based in Bulgaria. Ivan is also the author of the book Advanced Deep Learning with Python. In this book, he teaches advanced deep learning topics like attention mechanism, meta-learning, graph neural networks, memory augmented neural networks, and more using the Python ecosystem. In this interview, he shares his experiences working on this book, compares TensorFlow and PyTorch, as well as talks about computer vision, NLP, and GANs.

Computer vision and natural language processing are two popular areas where a number of developments are ongoing. In his book, Advanced Deep Learning with Python, Ivan delves deep into these two broad application areas. "One of the reasons I emphasized computer vision and NLP," he clarifies, "is that these fields have a broad range of real-world commercial applications, which makes them interesting for a large number of people."

The other reason for focusing on computer vision, he says, is the natural (or human-driven, if you wish) progress of deep learning. One of the first modern breakthroughs came in 2012, when a solution based on a convolutional network won that year's ImageNet competition by a large margin over all previous algorithms. Thanks in part to this impressive result, interest in the field was renewed, bringing many other advances, including solutions to complex tasks like object detection and new generative models like generative adversarial networks. In parallel, the NLP domain saw its own wave of innovation with things like word vector embeddings and the attention mechanism.

There are two popular machine learning frameworks that are currently on par: TensorFlow and PyTorch (both had new releases in the past month, TensorFlow 2.0 and PyTorch 1.3). There is an ongoing debate that pitches TensorFlow and PyTorch as rival technologies and communities. Ivan does not think there is a clear winner between the two libraries, which is why he has included both in the book.

He explains, "On the one hand, it seems that the API of PyTorch is more streamlined and the library is more popular with the academic community. On the other hand, TensorFlow seems to have better cloud support and enterprise features. In any case, developers will only benefit from the competition. For example, PyTorch has demonstrated the importance of eager execution, and TensorFlow 2.0 now has much better support for eager execution, to the point that it is enabled by default. In the past, TensorFlow had internal competing APIs, whereas now Keras is promoted as its main high-level API. On the other hand, PyTorch 1.3 has introduced experimental support for iOS and Android devices and quantization (computation operations with reduced precision for increased efficiency)."
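
The eager-execution point is easy to see in code. A minimal sketch, assuming TensorFlow 2.x, where eager mode is on by default:

# Eager mode (the TF 2.x default): operations execute immediately and
# return concrete values, much like PyTorch, instead of building a
# graph to run later inside a session as TF 1.x required.
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x).numpy())   # runs right away, no session needed

# Graph-style performance can still be recovered selectively:
@tf.function
def square(m):
    return tf.matmul(m, m)

print(square(x).numpy())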

Ivan discusses his venture into the field of financial machine learning, being the author of an ML-oriented, event-based algorithmic trading library. However, financial machine learning (and stock price prediction in particular) is usually not the focus of mainstream deep learning research. One reason, Ivan states, is that the field isn't as appealing as, say, computer vision or NLP. At first glance, it might even appear gimmicky to predict stock prices.

He adds, "Another reason is that quality training data isn't freely available and can be quite expensive to obtain. Even if you have such data, pre-processing it in an ML-friendly way is not a straightforward process, because the noise-to-signal ratio is a lot higher compared to images or text. Additionally, the data itself could have huge volume."

However, he counters, using ML in finance could have benefits besides the obvious one (getting rich by trading stocks): "The participation of ML algorithms in the stock trading process can make the markets more efficient. This efficiency will make it harder for market imbalances to stay unnoticed for long periods of time. Such imbalances will be corrected early, thus preventing painful market corrections, which could otherwise lead to economic recessions."

Ivan has also given special emphasis to generative adversarial networks in his book. Although extremely useful, GANs have recently been used to generate high-dimensional fake data that looks very convincing. Many researchers and developers have raised concerns about the negative repercussions of using GANs and wondered whether it is even possible to prevent and counter their misuse and abuse.

Ivan acknowledges that GANs may have unintended outcomes, but that shouldn't be the sole reason to discard them. He says, "Besides great entertainment value, GANs have some very useful applications and could help us better understand the inner workings of neural networks. But as you mentioned, they can be used for nefarious purposes as well. Still, we shouldn't discard GANs (or any algorithm with a similar purpose) because of this, if only because the bad actors won't discard them. I think the solution to this problem lies beyond the realm of deep learning. We should strive to educate the public on the possible adverse effects of these algorithms, but also on their benefits. In this way we can raise awareness of machine learning and spark an honest debate about its role in our society."

Awareness and ethics go in parallel. Ethics is one of the most important topics to emerge in machine learning and artificial intelligence over the last year. Ivan agrees that ethics and algorithmic bias in machine learning are of extreme importance. He says, "We can view the potential harmful effects of machine learning as either intentional or unintentional. For example, the bad actors I mentioned when we discussed GANs fall into the intentional category. We can limit their influence by striving to keep the cutting edge of ML research publicly available, thus denying them any unfair advantage of potentially better algorithms. Fortunately, this is largely the case now and hopefully will remain that way in the future."

"I don't think algorithmic bias is necessarily intentional," he says. "Instead, I believe that it is the result of the underlying injustices in our society, which creep into ML through either skewed training datasets or the unconscious bias of the researchers. Although the bias might not be intentional, we still have a responsibility to put in a conscious effort to eliminate it."

"The field of ML exploded (in a good sense) a few years ago," says Ivan, "thanks to a combination of algorithmic and computer hardware advances. Since then, researchers have introduced new, smarter, and more elegant deep learning algorithms. But history has shown that AI can generate such great hype that even the impressive achievements of the last few years could fall short of the expectations of the general public."

"So, in a broader sense, the challenge in front of ML is to sustain the current pace of innovation. In particular, current deep learning algorithms fall short in some key intelligence areas where humans excel. For example, neural networks have a hard time learning multiple unrelated tasks. They also tend to perform better when working with unstructured data (like images), compared to structured data (like graphs).

"Another issue is that neural networks sometimes struggle to remember long-distance dependencies in sequential data. Solving these problems might require new fundamental breakthroughs, and it's hard to give an estimate of such one-time events. But even at the current level, ML can fundamentally change our society (hopefully for the better). For instance, in the next 5 to 10 years, we could see the widespread introduction of fully autonomous vehicles, which have the potential to transform our lives."

This is just a snapshot of some of the important focus areas in the deep learning ecosystem. You can check out more of Ivan's work in his book Advanced Deep Learning with Python. In this book you will investigate and train CNN models with GPU-accelerated libraries like TensorFlow and PyTorch. You will also apply deep neural networks to state-of-the-art domains like computer vision problems, NLP, GANs, and more.

Ivan Vasilev started working on the first open source Java Deep Learning library with GPU support in 2013. The library was acquired by a German company, where he continued its development. He has also worked as a machine learning engineer and researcher in the area of medical image classification and segmentation with deep neural networks. Since 2017 he has focused on financial machine learning. He is working on a Python-based platform which provides the infrastructure to rapidly experiment with different ML algorithms for algorithmic trading. You can find him on LinkedIn and GitHub.



The Afghanistan papers: The criminality and disaster of a war based upon lies – World Socialist Web Site

10 December 2019

The publication Monday by the Washington Post of interviews with senior US officials and military commanders on the nearly two-decades-old US war in Afghanistan has provided a damning indictment of both the criminality and abject failure of an imperialist intervention conducted on the basis of lies.

The Post obtained the raw interviews after a three-year Freedom of Information Act court battle. While initially they were not secret, the Obama administration moved to classify the documents after the newspaper sought to obtain them.

The interviews were conducted between 2014 and 2018 in a "Lessons Learned" project initiated by the office of the Special Inspector General for Afghanistan Reconstruction (SIGAR). The project was designed to review the failures of the Afghanistan intervention with the aim of preventing their repetition the next time US imperialism seeks to carry out an illegal invasion and occupation of an oppressed country.

SIGAR's director, John Sopko, freely admitted to the Post that the interviews provide irrefutable evidence that "the American people have constantly been lied to" about the war in Afghanistan.

What emerges from the interviews, conducted with more than 400 US military officers, special forces operatives, officials from the US Agency for International Development (USAID) and senior advisers to both US commanders in Afghanistan and the White House, is an overriding sense of failure tinged with bitterness and cynicism. Those who participated had no expectation that their words would be made public.

Douglas Lute, a retired Army lieutenant general who served as the Afghanistan war czar under the administrations of both George W. Bush and Barack Obama, told his government interviewers in 2015: "If the American people knew the magnitude of this dysfunction... 2,400 [American] lives lost. Who will say this war was in vain?"

Stephen Hadley, the White House national security adviser under Bush, was even more explicit in his admission of US imperialism's debacle in Afghanistan, and elsewhere. He told his SIGAR interviewers that Washington had no post-stabilization model that works, adding that this had been proven not only in Afghanistan, but in Iraq as well: "Every time we have one of these things, it is a pickup game. I don't have any confidence that if we did it again, we would do any better."

Ryan Crocker, who served as Washington's senior man in Kabul under both Bush and Obama, told SIGAR: "Our biggest single project, sadly and inadvertently, of course, may have been the development of mass corruption. Once it gets to the level I saw, when I was out there, it's somewhere between unbelievably hard and outright impossible to fix it."

This corruption was fed by vast US government expenditures on Afghanistan's supposed reconstruction: $133 billion, more than Washington spent, adjusted for inflation, on the entire Marshall Plan for the reconstruction of Western Europe after the Second World War. As the interviews make clear, this money went largely into the pockets of corrupt Afghan politicians and contractors and funded projects that were neither needed nor wanted by the Afghan people.

The US National Endowment for Democracy's former senior program officer for Afghanistan told his interviewers that Afghans with whom he had worked were in favor of a socialist or communist approach because that's how they remembered things the last time the system worked, i.e., before the 1980s CIA-backed Islamist insurgency that toppled a Soviet-backed government and unleashed a protracted civil war that claimed more than a million lives. He also blamed the failure of US reconstruction efforts on a dogmatic adherence to free-market principles.

An Army colonel who advised three top US commanders in Afghanistan told the interviewers that, by 2006, the US-backed puppet government in Kabul had "self-organized into a kleptocracy."

US military personnel engaged in what has supposedly been a core mission of training Afghan security forces to be able to fight on their own to defend the corrupt US-backed regime in Kabul were scathing in their assessments.

A special forces officer told interviewers that the Afghan police whom his troops had trained were "awful," the bottom of the barrel in a country that is already at the bottom of the barrel, estimating that one third of the recruits were drug addicts or Taliban. Another US adviser said that the Afghans he worked with reeked of jet fuel because they were constantly smuggling it off the base to sell on the black market.

Faced with the continuing failure of its attempts to quell the insurgency in Afghanistan and create a viable US-backed regime and army, US officials lied. Every president and his top military commanders, from Bush to Obama to Trump, insisted that progress was being made and the US was winning the war, or, as Trump put it during his lightning Thanksgiving trip in and out of Afghanistan, was "victorious on the battlefield."

The liars in the White House and the Pentagon demanded supporting lies from those on the ground in Afghanistan. "Surveys, for instance, were totally unreliable, but reinforced that everything we were doing was right and we became a self-licking ice cream cone," an Army counterinsurgency adviser to the Afghanistan commanders told SIGAR.

A National Security Council official explained that every reversal was spun into a sign of progress: "For example, attacks are getting worse? That's because there are more targets for them to fire at, so more attacks are a false indicator of instability. Then, three months later, attacks are still getting worse? It's because the Taliban are getting desperate, so it's actually an indicator that we're winning." The purpose of these lies was to justify the continued deployment of US troops and the continued carnage in Afghanistan.

Today, the carnage is only escalating. According to the United Nations, last year 3,804 Afghan civilians were killed in the war, the highest number since the UN began counting casualties over a decade ago. US airstrikes have also been rising to an all-time high, killing 579 civilians in the first 10 months of this year, a third more than in 2018.

The lies exposed by the SIGAR interviews have been echoed by a pliant corporate media that has paid scant attention to the longest war in US history. The most extensive exposure of US war crimes in Afghanistan came in 2010, based on some 91,000 secret documents provided by the courageous US Army whistleblower Chelsea Manning to WikiLeaks. Julian Assange, the founder of WikiLeaks, is now being held in Britain's maximum-security Belmarsh Prison facing extradition to the United States on Espionage Act charges that carry a penalty of life imprisonment, or worse, for the crime of exposing these war crimes. Manning is herself imprisoned in a US federal detention center in Virginia for refusing to testify against Assange.

On October 9, 2001, two days after Washington launched its now 18-year-long war on Afghanistan and amid a furor of war propaganda from the US government and the corporate media, the World Socialist Web Site posted a statement titled "Why we oppose the war in Afghanistan." It exposed the lie that this was a war for justice and the security of the American people against terrorism, and insisted that the present action by the United States was an imperialist war in which Washington aimed to establish a new political framework within which it would exert hegemonic control over not only Afghanistan, but the broader region of Central Asia, home to the second largest proven reserves of petroleum and natural gas in the world.

The WSWS stated at the time: "The United States stands at a turning point. The government admits it has embarked on a war of indefinite scale and duration. What is taking place is the militarization of American society under conditions of a deepening social crisis.

"The war will profoundly affect the conditions of the American and international working class. Imperialism threatens mankind at the beginning of the twenty-first century with a repetition, on a more horrific scale, of the tragedies of the twentieth. More than ever, imperialism and its depredations raise the necessity for the international unity of the working class and the struggle for socialism."

These warnings have been borne out entirely by the criminal and tragic events of the last 18 years, even as the Washington Post now finds itself compelled to admit the bankruptcy of the entire sordid intervention in Afghanistan that it previously supported.

The US debacle in Afghanistan is only the antechamber of a far more dangerous eruption of US militarism, as Washington shifts its global strategy from the "war on terrorism" to preparation for war against its great power rivals, in the first instance nuclear-armed China and Russia.

Opposition to war and the defense of democratic rights, posed most sharply in the fight for the freedom of Julian Assange and Chelsea Manning, must be guided by a global strategy that consciously links this fight to the growing eruption of social struggles of the international working class against capitalist exploitation and political oppression.

Bill Van Auken

