Bitcoin Price Analysis: Following Decent $300 Surge, Is Bitcoin Ready To Conquer $10,000 Again, Or Just A Temp Correction? – CryptoPotato

Following the huge price dump last Wednesday, Bitcoin traded in a tight range between $9,550 and $9,750 until a few hours ago, when the primary cryptocurrency fired up its engines toward a critical resistance level.

As of this writing, Bitcoin is testing the $9,900-$10,000 resistance. As mentioned in our previous analysis, the $9,900 horizontal resistance is also the Golden Fib retracement level (61.8%, which lies at $9,922). As can be seen on the following daily chart, this resistance is also a retest of the mid-term ascending trend-line (marked yellow).
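
For readers who want to sanity-check such levels, here is a minimal sketch of how Fibonacci retracement levels are derived from a swing low and swing high. The $9,300 and $10,500 inputs are illustrative placeholders drawn from the levels discussed in this analysis, not the analyst's exact anchor points.

```python
# Minimal sketch: Fibonacci retracement levels for a bounce after a decline.
# The swing high/low inputs are illustrative; the analyst's anchors may differ.

FIB_RATIOS = (0.236, 0.382, 0.5, 0.618, 0.786)

def retracement_levels(swing_high: float, swing_low: float) -> dict:
    """Price levels at which a bounce retraces a fraction of the prior drop."""
    move = swing_high - swing_low
    return {ratio: swing_low + ratio * move for ratio in FIB_RATIOS}

if __name__ == "__main__":
    for ratio, level in retracement_levels(10_500, 9_300).items():
        print(f"{ratio:.1%} retracement: ${level:,.0f}")
```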

While Bitcoin is in the middle of another weekend, we need to keep in mind a possible CME Futures gap waiting at $9,830. Those gaps tend to get filled quickly.

Total Market Cap: $286 billion

Bitcoin Market Cap: $179.6 billion

BTC Dominance Index: 62.7%

*Data by CoinGecko

Support/Resistance levels: As mentioned above, the first level of resistance is the Golden Fib level (61.8%) at $9,922, before the $10,000 benchmark. Higher above lies the past week's high of $10,200-$10,300, followed by $10,500, the current 2020 high.

From below, the first significant level of support lies at $9,750. Further below lies $9,550, followed by $9,400, before Wednesday's weekly low at $9,300.

The RSI Indicator: After a huge drop to the 50 RSI level, the indicator found the needed support and has shown bullishness since.

On the 4-hour chart (the lower time-frame), we can see a little bit of bullish price divergence starting to develop, which could be fuel for the next move up.

In addition, the Stochastic RSI oscillator has made a crossover in oversold territory and is now about to enter the neutral zone, which could be another short-term bullish sign.

Trading volume: Since Wednesday, the daily volume candles have been declining, and yesterday carried only a minor amount of volume. This might be a sign that the next Bitcoin move is coming up. As a reminder, trading volume tends to be lower during weekends.


Disclaimer: Opinions found on CryptoPotato are those of the writers quoted. They do not represent the opinions of CryptoPotato on whether to buy, sell, or hold any investments. You are advised to conduct your own research before making any investment decisions. Use provided information at your own risk. See Disclaimer for more information.

Cryptocurrency charts by TradingView.


Bitcoin (BTC) Funds Are Placing More Shorts, Will This Fuel A New Rally? – U.Today

Data from researchers at Skew shows funds have been increasing short exposure throughout the past several weeks as the bitcoin price surged past $9,000 to hit a yearly high at $10,500.

"Leveraged funds increasing short exposure week after week. Cash and carry strategies or outright shorts?" said Skew.

Short-term movements in the cryptocurrency market are typically swayed by short and long contract liquidations.

During a short squeeze, sellers who expect the bitcoin price to go down begin to panic-close or adjust their positions when the bitcoin price starts to go up.

While liquidations on platforms like Bitfinex and CME are significantly lower than on exchanges like BitMEX and Binance Futures, which offer up to 125x leverage, an abrupt bitcoin spike still causes short sellers to cover their positions.

When that happens, as seen with short squeezes in the equities market like with Tesla, it adds powerful short-term momentum to an asset.

It remains unclear whether the bitcoin short exposure on CME represents hedge positions by investors who also expect the bitcoin price to go up and hold a net long position.

For instance, many whales, or individuals with large amounts of bitcoin, like to place hedge shorts in the event the price of BTC corrects significantly in the near term, while keeping net long positions in place.

Considering that CME caters to accredited and institutional investors rather than retail traders, it is highly likely that the majority of the short exposure is simply hedge positions against the market.

Even then, if the bitcoin price starts to show signs of a bullish market continuation by reclaiming the $10,500 yearly high, it could convert short positions to market buy orders in a squeeze, adding buying demand in the market.

The bitcoin price was at risk of a steep pullback as it dropped to as low as $9,350 on February 20. However, the dip was quickly bought up, prompting analysts to categorize it as a liquidity fill.

Heavy support at $9,550 has been described as a key level that could prevent BTC from seeing a sharp correction in the upcoming weeks, if defended until the weekly close.


"Holding a similar EMA that held the market up during 2019. Another hidden bull div printing after $9,500 S/R flip. Holding 50 RSI on the daily. Possible we test the $9,100-$9,200 50 EMA on the daily, but anything that holds and closes above $9,670 is still bullish for me," said highly regarded trader Jacob Canfield.

With less than 15 hours left to the weekly close, analysts are generally anticipating the support to hold. Above $9,550, key resistance levels exist at $10,300 and $10,900, two levels that rejected bitcoin many times throughout the past two years.


Bears Continue To Gain Momentum In Bitcoin - Are We About To Collapse? – Coingape

Bitcoin has been falling ever since meeting resistance at the $10,400 level this past week. During the week, it found support at around $9,600; however, it was unable to overcome the resistance at $10,190, causing it to drop once again.

Things are now looking troublesome for Bitcoin after it failed to make any move higher over the past few days. Despite all the latest price falls, Bitcoin remains up by a total of 11% over the past 30 days of trading.

Bitcoin Price Analysis

BTC/USD Daily Chart - Short Term

Taking a look at the daily chart above, we can see that Bitcoin has found support at the .382 Fib Retracement, priced at $9,569. During the week, it made a rebound and broke above the $10,000 level again; however, it was unable to break above $10,190 (previous 1.414 Fib Extension), which caused it to reverse and roll over.

The cryptocurrency remains bullish, however, it is very close to becoming neutral. If it drops beneath the $9,000 level, we can consider the market as neutral. It would need to drop beneath $8,200 before we could consider it to be in danger of turning bearish.

Toward the downside, if the sellers break beneath $9,569, the next level of support lies at $9,311 (.5 Fib Retracement). Beneath this, support lies at $9,159 (downside 1.272 Fib Extension), $9,053 (.618 Fib Retracement), and $9,000.

On the other hand, if the buyers rebound here and push higher, resistance lies at $9,815 and $10,000. Above this, additional resistance lies at $10,190 (1.414 Fib Extension), $10,474 (1.618 Fib Extension) and $10,500.

The RSI dipped beneath the 50 level and remained there for its longest period during 2020. If the RSI is unable to climb back above 50 soon, we can expect the momentum to shift and the bears to regain control.

Support: $9,569, $9,311, $9,280, $9,200, $9,169, $9,053, $9,000.

Resistance: $9,637, $9,615, $9,815, $10,000, $10,190, $10,360, $10,474.



Chinese Government-Backed Institute Releases New Ranking of 37 Crypto Projects – Bitcoin News

China's Center for Information and Industry Development has published its latest crypto project ranking, the first this year. A total of 37 crypto projects, two more than in the previous ranking, were evaluated and ranked overall this month, as well as in three separate categories.


The Center for Information and Industry Development (CCID), under China's Ministry of Industry and Information Technology, released its first crypto project ranking for the year on Friday. Prior to this, the last one was published in December, with 35 crypto projects ranked. This month, two more were added, bringing the total of ranked projects to 37. In addition to the overall ranking, the center evaluated the crypto projects based on their basic technology, applicability, and creativity. The ranking is updated every two months, and this month's is the 16th update.

EOS remains top of the overall ranking, followed by Tron and then Ethereum. In December, Tron was in third place, with Ethereum in second. This month, Bitcoin fell from 9th place to 11th, while Bitcoin Cash dropped from 27th to 34th.

Meanwhile, Nuls dropped from 4th place to 10th, Bitshares from 8th to 24th, Waves from 12th to 22nd, Zilliqa from 13th to 25th, and Tezos from 26th to 33rd. Some projects improved, such as Ripple, which rose from 18th place to 13th, and Cosmos, from 24th to 14th.

Two additions to the list of projects ranked this month are IOST and GXS. The former describes itself as "an ultra-fast, decentralized blockchain network based on the next-generation consensus algorithm Proof of Believability (PoB)." The latter, also called Gxchain, is "a fundamental blockchain for the global data economy, designed to build a trusted data internet of value," according to its website. IOST debuted at number six in the overall ranking. Gxchain was previously ranked but was removed in the October update. It is now back at number seven in the overall ranking.

In terms of the three sub-rankings, EOS scored the highest in the basic technology category, followed by Tron, IOST, GXS, and Steem. For the applicability category, Ethereum tops the ranking, followed by Tron and Neo. For the creativity category, BTC scored much higher than the other projects; second place is occupied by Ethereum, then Lisk and EOS.

The rankings are compiled by the CCID (Qingdao) Blockchain Research Institute, an entity established by the CCID. The evaluation work is carried out in collaboration with multiple organizations, such as the CCID think tank and the China Software Evaluation Center. "The result of this assessment will allow the CCID group to provide better technical consulting services for government agencies, business enterprises, research institutes, and technology developers," the center previously explained. The CCID provides professional services to the government, including research, consulting, evaluation, certification, and research and development, its website details.

In January, news.Bitcoin.com reported that the CCID released a report stating that there were more than 33,000 registered blockchain companies in December.


Disclaimer: This article is for informational purposes only. It is not an offer or solicitation of an offer to buy or sell, or a recommendation, endorsement, or sponsorship of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

Images courtesy of Shutterstock and the CCID.


A student of Austrian Economics, Kevin found Bitcoin in 2011 and has been an evangelist ever since. His interests lie in Bitcoin security, open-source systems, network effects and the intersection between economics and cryptography.


"Tron Is Better Than Ethereum!" A Bitcoin Game – Cryptonews

Vitalik Buterin, Co-founder of Ethereum, and Justin Sun, Founder of Tron. Source: Twitter

Tron (TRX) is a superior platform to Ethereum (ETH), claim multiple Bitcoin (BTC) proponents.

For several weeks now, certain Bitcoiners have been trolling the Ethereum camp, claiming that Justin Sun and his Tron are coming to take its throne as the platform of choice for blockchain-based apps and development. This might not be surprising, as Ethereans are intensifying their efforts to push the "ether is money" narrative, thus increasing competition with Bitcoin.

However, the provocations raise a serious question: just how well does Tron compare to Ethereum, objectively? Well, according to a number of industry observers, Tron is a better platform in technical terms, although it isn't as decentralized and doesn't have as healthy a community of developers.

For the sake of context, the whole Tron vs Ethereum debate was kicked off by a viral tweet from Opendime and ColdCard founder Rodolfo Novak (aka @NVK), who posted a picture of himself with Tron founder Justin Sun and Blockstream CEO Adam Back.

Bitcoin supporters were quick to jump on the bandwagon following this tweet. For example, here's Blockstream CSO Samson Mow claiming Tron all-round trounces Ethereum as a platform:

Needless to say, many such tweets come from people who, like Blockstream's Samson Mow and Adam Back, may have an interest in promoting Tron over Ethereum.


Still, some independent industry observers do agree that, at least in technical terms, Tron is more advanced than Ethereum. Glen Goodman, a cryptoasset analyst and the author of The Crypto Trader, points out that it can currently operate at greater scale than Ethereum.

"As things stand, I'd have to say I prefer Tron's blockchain to Ethereum's, due to its far greater transaction capacity," he tells Cryptonews.com. "It can handle far more users and far more activity. But all of that is about to change when Ethereum 2.0 launches, with sharding and a new proof-of-stake model which should allow far more transactions."

Guardian Circle CEO and co-founder Mark Jeffrey also agrees that Tron is slightly ahead of Ethereum technologically, although, in his opinion, this is mostly a function of its development being more centralized around founder Justin Sun.

"They seem to be more or less equivalent to me," he tells Cryptonews.com. "However, the technical infighting in the ETH community is a significant technical hurdle, especially for ETH's famous scalability issues, and Tron has a strongman Justin Sun defining what is and what is not Tron cannon. In this respect, Tron is stronger technically."

These observations are backed up by data. According to Coin Metrics, Tron has consistently handled more transactions than Ethereum (and Bitcoin) over the past year.

However, Ethereum's Vitalik Buterin argued previously that the purpose of a consensus algorithm is to keep a blockchain safe, not to make it fast. According to him, when a blockchain project claims it can do 3,500 transactions per second due to a different algorithm, "what we really mean is 'We are a centralized pile of trash because we only have seven nodes running the entire thing.'"

Moreover, in 2018, researchers accused Tron of plagiarizing its code from Ethereum, among other projects, which Justin Sun denies.

Ethereum is the more decentralized blockchain. As Glen Goodman explains, "Ethereum massively outguns Tron when it comes to node numbers, which suggests Ethereum is winning the decentralization battle."

Goodman admits that both cryptocurrencies have witnessed a decline in node numbers over the past year, but Ethereum is still way ahead, with about eight times as many nodes.

He adds, "For both blockchains, there are worries over collusion between different nodes, as most are concentrated in the U.S., China and Europe. Collusion tends to happen more easily when nodes are close to each other geographically, but Ethereum is notable for increasing its node spread in recent months, particularly to Germany, Singapore and Japan."

Mark Jeffrey concurs, with his analysis focusing mostly on the role Ethereum founder Vitalik Buterin plays in development.

"Vitalik refuses to take up the strongman mantle, saying he is 'one voice among many'," he says. "While this sentiment is admirable, it is also less useful to making technical advances quickly."

Lastly, there's the question of community, both in terms of developers and the wider community of supporters. Here, Ethereum wins again.

"Ethereum recently launched its 'One Million Developers" initiative'," explains Glen Goodman. "They claim to already have at least 200,000 developers working in the Ethereum ecosystem and they're aiming for the big million. Tron is thought to have far fewer developers but it's got a huge number of active users, nearly a million, eclipsed only by you guessed it Ethereum, which has nearly one and a half million."

That said, Mark Jeffrey argues that Tron has a much healthier economic and dapp (decentralized app) situation.

"There are basically only six ways to make money right now in the crypto space: gambling, exchanges, and DeFi (decentralized finance) and then referrals to gambling sites, exchanges and DeFi. Tron has the healthiest gambling environment by far. This means secondary offerings like wallets can survive by providing referrals."

Jeffrey adds that Tron itself has been very proactive in spending money on its ecosystem. "It will throw money into better wallets, more exchanges and better dapps quite aggressively. Ethereum has done none of these things." (However, in May 2019, the Ethereum Foundation promised to spend USD 30 million on key projects across the ecosystem over the next year.)

It's this kind of market strategy that leads Jeffrey to predict that Tron may ultimately triumph over Ethereum. "In the end, the platform with the healthiest raw economics will win. Tron is the clear leader in that department right now."

This may be true, but predicting what crypto will look like in six months is tricky enough as it is, so predicting which fairly well-matched altcoin platform will 'beat' the other in the next few years is probably impossible. Still, competition is healthy, so let's hope Tron and Ethereum continue to push each other in the near and distant future. And Bitcoiners will always find someone to troll.



CoinGeek London: When Bitcoin SV came of age – CoinGeek

"The whole Internet can work this way," said Twetch CEO Josh Petty in his presentation at the CoinGeek London conference. It was a typically bullish sentiment from the two days in which dozens of speakers demonstrated their confidence in the momentum building around Bitcoin SV (BSV).

Superficially, that momentum was felt in the more than doubling of the number of attendees since the last conference in Seoul six months ago. Even more superficially, it was seen in the extraordinary width and clarity of the screen at the back of the stage, designed to be viewed by creatures with at least three eyes.

More importantly, it was noticeable in the way BSV technology and businesses were discussed on stage. Petty announced new features for Twetch, taking the social media app to a slicker, more user-friendly form: "Everything you touch and feel is going to be a microtransaction," he said, with no more swipe.

Familiar faces from previous conferences spoke with new certainty about what they were doing, and had new achievements to report and announcements to make. Jack Liu of the RelayX wallet provided a moment of drama when he unveiled the new look of his app, which is essentially a blank screen, the idea being that your camera opens to scan a QR code. More broadly, users will access Relay through other apps, making the integration of money functions almost invisible for users.

Newcomers, such as Thomas J. Lee of Fundstrat, endorsed and elaborated themes previously only heard from those inside the Bitcoin SV tent. With detailed financial graphs, he predicted a parabolic moment when institutions get serious about crypto, similar to the effect on Tesla's share price when Wall Street started paying attention to its potential.

Lee's colleague David Grider summarised Fundstrat's recent report on BSV, highlighting BSV's transaction growth and the potential of its nascent businesses. He singled out the coming Maxthon browser, the Baemail email service, and True Reviews as examples of the more than 400 projects building on BSV, with more in prospect using the increased functionality provided by the Genesis fork.

The first day ended with a rousing speech by Dr. Craig Wright, which provided a laser-focused summary of his original intentions for Bitcoin as Satoshi Nakamoto and his present-day assessment of the prospects for BSV from microtransactions.

On Friday, there was more. Jeff Chen, the founder and CEO of Maxthon, talked about his BSV browser. With his long track record of successful Internet browsers, this is no pipe dream, but a solid business proposition in development.

If you thought BSV innovation was limited to the world as seen through a computer screen, Stephan Nilsson and Ken Hill took us out into the real world. Hill described EHR Data, a new business that plans to revolutionise health information, putting patients in charge. And Nilsson, of UNISOT, demonstrated his app to track an item through a complex supply chain in this case, a haddock.

Finally, at the end of the second day, the veteran economist and technology commentator George Gilder, another newcomer to BSV gatherings, put Satoshi's ideas into perspective. He was confident that BSV solves the two-fold scandal in the world economy, namely Internet security and the excesses of global currency trading.

"We're now engaging in forging a new system of the world," he said. "It's a system to replace the failed economic model of Google. In an information age, economies can change as fast as minds. We're moving to a world in which security comes first, everything is correctly valued and nothing is free."

Gilder gave an account of how he had been persuaded that Dr. Craig Wright is Satoshi. Sitting next to him in the final session of the day, he said, to applause, "I think you can safely celebrate Craig." It was a fitting tribute to the man who had already changed the lives of everyone at CoinGeek London, all of whom are convinced that the best is yet to come.



Forget Bitcoin, buy-to-let, and gold. I'm investing in a Stocks and Shares ISA to gain financial freedom – Yahoo Finance UK

Everybody dreams of financial freedom, but how do you achieve it? My answer is to invest in a diversified spread of stocks and shares, tax-efficiently through a Stocks and Shares ISA.

I think this is a better way of building your long-term wealth than investing in other asset classes, such as Bitcoin, buy-to-let property or gold, all of which have done well at certain points, but may now be past their best.

There is no doubt about it, Bitcoin catches the eye. At the start of last year, it was trading at around $3,500. Lately, it has been bobbing around the $10,000 mark, which means that if you had bought 12 months ago, you would have nearly tripled your money.

The big problem with Bitcoin is its massive volatility. Its price can rise or fall by hundreds or even thousands of dollars in a matter of days. That makes it too volatile to rely upon for what is arguably your most important financial task, building your long-term retirement wealth.

There is nothing wrong with having a bit of gold in your portfolio, to offset any losses if stock markets fall. The price is up around 33% over the last five years, so you could enjoy some capital growth, too.

I wouldn't put too much into gold, though. The precious metal doesn't pay any income, which makes you wholly dependent on price movements to make a profit. If coronavirus worries recede and confidence recovers, gold could fall back, and sharply.

I was a big fan of buy-to-let property until former Chancellor George Osborne unleashed his multi-pronged tax attack in 2015. The 3% stamp duty surcharge and phasing-out of mortgage interest tax relief will eat into your profits, while you still have all the work of buying and managing a property, and finding and replacing tenants.

I keep the vast majority of my retirement pot in the stock market, because I believe this will generate the best returns over the longer run. History suggests equities can deliver an average annual return of around 7% a year, from share price growth and reinvested dividend income.

Now may be a good opportunity to invest, as current uncertainties have knocked the FTSE 100, throwing up plenty of bargain stocks.

You could start by investing in a spread of UK blue chips, for example spirits giant Diageo, housebuilder Taylor Wimpey, or Lloyds Banking Group, which currently yields 6.2%.

You could supplement this with an exchange-traded fund (ETF) tracking the FTSE All-Share, sold by managers such as iShares and Vanguard. Always invest within your Stocks and Shares ISA allowance, which allows you to put away anything up to £20,000 this financial year and take all your returns free of tax, for life. This combination of tax-free capital growth and passive income is a better way to achieve financial independence than Bitcoin, gold, and buy-to-let, in my view.


Harvey Jones has no position in any of the shares mentioned. The Motley Fool UK has recommended Diageo and Lloyds Banking Group. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors.

© Motley Fool UK 2020


What is machine learning? Everything you need to know | ZDNet

Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people.

From driving cars to translating speech, machine learning is driving an explosion in the capabilities of artificial intelligence -- helping software make sense of the messy and unpredictable real world.

But what exactly is machine learning and what is making the current boom in machine learning possible?

At a very high level, machine learning is the process of teaching a computer system how to make accurate predictions when fed data.

Those predictions could be answering whether a piece of fruit in a photo is a banana or an apple, spotting people crossing the road in front of a self-driving car, whether the use of the word "book" in a sentence relates to a paperback or a hotel reservation, whether an email is spam, or recognizing speech accurately enough to generate captions for a YouTube video.

The key difference from traditional computer software is that a human developer hasn't written code that instructs the system how to tell the difference between the banana and the apple.

Instead a machine-learning model has been taught how to reliably discriminate between the fruits by being trained on a large amount of data, in this instance likely a huge number of images labelled as containing a banana or an apple.

Data, and lots of it, is the key to making machine learning possible.

Machine learning may have enjoyed enormous success of late, but it is just one method for achieving artificial intelligence.

At the birth of the field of AI in the 1950s, AI was defined as any machine capable of performing a task that would typically require human intelligence.

AI systems will generally demonstrate at least some of the following traits: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

Alongside machine learning, there are various other approaches used to build AI systems, including evolutionary computation, where algorithms undergo random mutations and combinations between generations in an attempt to "evolve" optimal solutions, and expert systems, where computers are programmed with rules that allow them to mimic the behavior of a human expert in a specific domain, for example an autopilot system flying a plane.

Machine learning is generally split into two main categories: supervised and unsupervised learning.

This approach basically teaches machines by example.

During training for supervised learning, systems are exposed to large amounts of labelled data, for example images of handwritten figures annotated to indicate which number they correspond to. Given sufficient examples, a supervised-learning system would learn to recognize the clusters of pixels and shapes associated with each number and eventually be able to recognize handwritten numbers, able to reliably distinguish between the numbers 9 and 4 or 6 and 8.
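
To make that concrete, here is a minimal supervised-learning sketch using scikit-learn's bundled handwritten-digits dataset. The model choice (logistic regression) and the 70/30 split are illustrative assumptions, not the method behind any particular system described here.

```python
# A minimal supervised-learning sketch: learning to classify handwritten
# digits from labelled examples. Model and settings are illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)        # 8x8 digit images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)   # hold some data back for evaluation

model = LogisticRegression(max_iter=5000)  # learns from the labelled examples
model.fit(X_train, y_train)

print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```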

However, training these systems typically requires huge amounts of labelled data, with some systems needing to be exposed to millions of examples to master a task.

As a result, the datasets used to train these systems can be vast, with Google's Open Images Dataset having about nine million images, its labeled video repository YouTube-8M linking to seven million labeled videos and ImageNet, one of the early databases of this kind, having more than 14 million categorized images. The size of training datasets continues to grow, with Facebook recently announcing it had compiled 3.5 billion images publicly available on Instagram, using hashtags attached to each image as labels. Using one billion of these photos to train an image-recognition system yielded record levels of accuracy -- of 85.4 percent -- on ImageNet's benchmark.

The laborious process of labeling the datasets used in training is often carried out using crowdworking services, such as Amazon Mechanical Turk, which provides access to a large pool of low-cost labor spread across the globe. For instance, ImageNet was put together over two years by nearly 50,000 people, mainly recruited through Amazon Mechanical Turk. However, Facebook's approach of using publicly available data to train systems could provide an alternative way of training systems using billion-strong datasets without the overhead of manual labeling.

In contrast, unsupervised learning tasks algorithms with identifying patterns in data, trying to spot similarities that split that data into categories.

An example might be Airbnb clustering together houses available to rent by neighborhood, or Google News grouping together stories on similar topics each day.

The algorithm isn't designed to single out specific types of data; it simply looks for data that can be grouped by its similarities, or for anomalies that stand out.
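
A short clustering sketch shows the idea: given unlabelled points, k-means groups them purely by proximity. The synthetic two-blob data and the choice of k-means here are illustrative assumptions.

```python
# A minimal unsupervised-learning sketch: grouping unlabelled points into
# clusters by similarity alone. The synthetic data is illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of 2-D points, with no labels attached
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)   # the algorithm discovers the two groups itself
```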

The importance of huge sets of labelled data for training machine-learning systems may diminish over time, due to the rise of semi-supervised learning.

As the name suggests, the approach mixes supervised and unsupervised learning. The technique relies upon using a small amount of labelled data and a large amount of unlabelled data to train systems. The labelled data is used to partially train a machine-learning model, and then that partially trained model is used to label the unlabelled data, a process called pseudo-labelling. The model is then trained on the resulting mix of the labelled and pseudo-labelled data.
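
The pseudo-labelling loop described above can be sketched in a few lines. This is a simplified illustration that reuses scikit-learn's digits dataset and pretends only 200 examples are labelled; real systems typically add confidence thresholds and repeat the cycle.

```python
# A minimal semi-supervised sketch: train on a small labelled set, pseudo-label
# the unlabelled pool with the model's own predictions, then retrain on both.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
labelled = np.arange(len(X)) < 200           # pretend only 200 examples have labels

model = LogisticRegression(max_iter=5000)
model.fit(X[labelled], y[labelled])          # step 1: partial training on labelled data

pseudo = model.predict(X[~labelled])         # step 2: pseudo-label the unlabelled data

X_mix = np.vstack([X[labelled], X[~labelled]])
y_mix = np.concatenate([y[labelled], pseudo])
model.fit(X_mix, y_mix)                      # step 3: retrain on the combined set
```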

The viability of semi-supervised learning has been boosted recently by Generative Adversarial Networks (GANs), machine-learning systems that can use labelled data to generate completely new data, for example creating new images of Pokemon from existing images, which in turn can be used to help train a machine-learning model.

Were semi-supervised learning to become as effective as supervised learning, then access to huge amounts of computing power may end up being more important for successfully training machine-learning systems than access to large, labelled datasets.

A way to understand reinforcement learning is to think about how someone might learn to play an old-school computer game for the first time, when they aren't familiar with the rules or how to control the game. While they may be a complete novice, eventually, by looking at the relationship between the buttons they press, what happens on screen and their in-game score, their performance will get better and better.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has beaten humans in a wide range of vintage video games. The system is fed pixels from each game and determines various information about the state of the game, such as the distance between objects on screen. It then considers how the state of the game and the actions it performs in game relate to the score it achieves.

Over the course of many cycles of playing the game, the system eventually builds a model of which actions will maximize the score in which circumstance: for instance, in the video game Breakout, where the paddle should be moved in order to intercept the ball.
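
At the heart of systems like the Deep Q-network is a simple update rule: nudge the estimated value of a state-action pair toward the observed reward plus the best value attainable from the next state. Here is a minimal tabular Q-learning sketch of that rule; DeepMind's systems replace the table with a deep neural network, and the toy "game", action set and parameters below are invented for illustration.

```python
# A minimal tabular Q-learning sketch. The toy game rewards moving "right".
import random
from collections import defaultdict

random.seed(0)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration
ACTIONS = ["left", "right"]
q_table = defaultdict(float)              # maps (state, action) -> expected score

def choose_action(state):
    if random.random() < EPSILON:         # occasionally explore at random
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    # Nudge the estimate toward reward plus the best value of the next state
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])

state = 0
for _ in range(1000):
    action = choose_action(state)
    reward = 1.0 if action == "right" else 0.0
    update(state, action, reward, (state + 1) % 3)
    state = (state + 1) % 3

print(q_table[(0, "right")] > q_table[(0, "left")])   # should print True
```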

Everything begins with training a machine-learning model, a mathematical function capable of repeatedly modifying how it operates until it can make accurate predictions when given fresh data.

Before training begins, you first have to choose which data to gather and decide which features of the data are important.

A hugely simplified example of what data features are is given in this explainer by Google, where a machine-learning model is trained to recognize the difference between beer and wine based on two features: the drinks' color and their alcohol by volume (ABV).

Each drink is labelled as a beer or a wine, and then the relevant data is collected, using a spectrometer to measure their color and a hydrometer to measure their alcohol content.

An important point to note is that the data has to be balanced, in this instance to have a roughly equal number of examples of beer and wine.

The gathered data is then split, into a larger proportion for training, say about 70 percent, and a smaller proportion for evaluation, say the remaining 30 percent. This evaluation data allows the trained model to be tested to see how well it is likely to perform on real-world data.

Before training gets underway there will generally also be a data-preparation step, during which processes such as deduplication, normalization and error correction will be carried out.
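
As an illustration of those preparation steps, here is a small sketch of deduplication and normalization on a toy beer/wine table; the column names and values are made up for the example.

```python
# A minimal data-preparation sketch: deduplication and normalization.
# The toy beer/wine table is invented for illustration.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "color": [520, 510, 650, 650, 630],    # e.g. a reading from a spectrometer
    "abv":   [4.5, 5.0, 13.0, 13.0, 12.5], # alcohol content from a hydrometer
    "label": ["beer", "beer", "wine", "wine", "wine"],
})

df = df.drop_duplicates()                  # remove the repeated row
features = StandardScaler().fit_transform(df[["color", "abv"]])
# features now have zero mean and unit variance, ready for training
print(features)
```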

The next step will be choosing an appropriate machine-learning model from the wide variety available. Each has strengths and weaknesses depending on the type of data; for example, some are suited to handling images, some to text, and some to purely numerical data.

Basically, the training process involves the machine-learning model automatically tweaking how it functions until it can make accurate predictions from data, in the Google example, correctly labeling a drink as beer or wine when the model is given a drink's color and ABV.

A good way to explain the training process is to consider an example using a simple machine-learning model, known as linear regression with gradient descent. In the following example, the model is used to estimate how many ice creams will be sold based on the outside temperature.

Imagine taking past data showing ice cream sales and outside temperature, and plotting that data against each other on a scatter graph -- basically creating a scattering of discrete points.

To predict how many ice creams will be sold in future based on the outdoor temperature, you can draw a line that passes through the middle of all these points, similar to the illustration below.

Once this is done, ice cream sales can be predicted at any temperature by finding the point at which the line passes through a particular temperature and reading off the corresponding sales at that point.

Bringing it back to training a machine-learning model, in this instance training a linear regression model would involve adjusting the vertical position and slope of the line until it lies in the middle of all of the points on the scatter graph.

At each step of the training process, the vertical distance of each of these points from the line is measured. If a change in slope or position of the line results in the distance to these points increasing, then the slope or position of the line is changed in the opposite direction, and a new measurement is taken.

In this way, via many tiny adjustments to the slope and the position of the line, the line will keep moving until it eventually settles in a position which is a good fit for the distribution of all these points, as seen in the video below. Once this training process is complete, the line can be used to make accurate predictions for how temperature will affect ice cream sales, and the machine-learning model can be said to have been trained.
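
That whole procedure fits in a few lines. This sketch mirrors the ice-cream example with made-up data: the slope and intercept of the line are repeatedly nudged along the gradient of the mean squared error until the line fits the points.

```python
# A minimal sketch of linear regression trained by gradient descent.
# The temperature/sales figures are invented for illustration.
import numpy as np

temps = np.array([15.0, 20.0, 25.0, 30.0, 35.0])   # outside temperature
sales = np.array([18.0, 35.0, 48.0, 64.0, 80.0])   # ice creams sold

slope, intercept = 0.0, 0.0
learning_rate = 0.001

for _ in range(100_000):
    predictions = slope * temps + intercept
    error = predictions - sales
    # Gradients of the mean squared error with respect to slope and intercept
    slope -= learning_rate * 2 * np.mean(error * temps)
    intercept -= learning_rate * 2 * np.mean(error)

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```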

While training for more complex machine-learning models such as neural networks differs in several respects, it is similar in that it also uses a "gradient descent" approach, where the value of "weights" that modify input data are repeatedly tweaked until the output values produced by the model are as close as possible to what is desired.

Once training of the model is complete, the model is evaluated using the remaining data that wasn't used during training, helping to gauge its real-world performance.

To further improve performance, training parameters can be tuned. An example might be altering the extent to which the "weights" are altered at each step in the training process.

A very important group of algorithms for both supervised and unsupervised machine learning are neural networks. These underlie much of machine learning, and while simple models like linear regression can be used to make predictions based on a small number of data features, as in the Google example with beer and wine, neural networks are useful when dealing with large sets of data with many features.

Neural networks, whose structure is loosely inspired by that of the brain, are interconnected layers of algorithms, called neurons, which feed data into each other, with the output of the preceding layer being the input of the subsequent layer.

Each layer can be thought of as recognizing different features of the overall data. For instance, consider the example of using machine learning to recognize handwritten numbers between 0 and 9. The first layer in the neural network might measure the color of the individual pixels in the image, the second layer could spot shapes, such as lines and curves, the next layer might look for larger components of the written number -- for example, the rounded loop at the base of the number 6. This carries on all the way through to the final layer, which will output the probability that a given handwritten figure is a number between 0 and 9.


The network learns how to recognize each component of the numbers during the training process, by gradually tweaking the importance of data as it flows between the layers of the network. This is possible due to each link between layers having an attached weight, whose value can be increased or decreased to alter that link's significance. At the end of each training cycle the system will examine whether the neural network's final output is getting closer or further away from what is desired -- for instance, is the network getting better or worse at identifying a handwritten number 6. To close the gap between the actual output and desired output, the system will then work backwards through the neural network, altering the weights attached to all of these links between layers, as well as an associated value called bias. This process is called back-propagation.

Eventually this process will settle on values for these weights and biases that will allow the network to reliably perform a given task, such as recognizing handwritten numbers, and the network can be said to have "learned" how to carry out a specific task.
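
A toy network makes the forward pass and back-propagation concrete. The sketch below trains a tiny two-layer network on the XOR function; the layer sizes, sigmoid activations and learning rate are illustrative choices, not a recipe.

```python
# A minimal neural network trained with back-propagation, learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-input, 8-hidden-unit, 1-output network
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: data flows through the layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back and adjust weights and biases
    d_out = (out - y) * out * (1 - out)    # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient pushed back to the hidden layer
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```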

An illustration of the structure of a neural network and how training works.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently devising a more efficient design for an effective type of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The AI technique of evolutionary algorithms is even being used to optimize neural networks, thanks to a process called neuroevolution. The approach was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

While machine learning is not a new technique, interest in the field has exploded in recent years.

This resurgence comes on the back of a series of breakthroughs, with deep learning setting new records for accuracy in areas such as speech and language recognition, and computer vision.

What's made these successes possible are primarily two factors: one being the vast quantities of images, speech, video and text that are accessible to researchers looking to train machine-learning systems.

But even more important is the availability of vast amounts of parallel-processing power, courtesy of modern graphics processing units (GPUs), which can be linked together into clusters to form machine-learning powerhouses.

Today anyone with an internet connection can use these clusters to train machine-learning models, via cloud services provided by firms like Amazon, Google and Microsoft.

As the use of machine-learning has taken off, so companies are now creating specialized hardware tailored to running and training machine-learning models. An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train models for Google DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photo, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end GPUs, and the recently announced third-generation TPUs able to accelerate training and inference even further.

As hardware becomes increasingly specialized and machine-learning software frameworks are refined, it's becoming increasingly common for ML tasks to be carried out on consumer-grade phones and computers, rather than in cloud datacenters. In the summer of 2018, Google took a step towards offering the same quality of automated translation on phones that are offline as is available online, by rolling out local neural machine translation for 59 languages to the Google Translate app for iOS and Android.

Perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, a feat that wasn't expected until 2026. Go is an ancient Chinese game whose complexity bamboozled computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational standpoint. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training the deep-learning networks needed can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

DeepMind continue to break new ground in the field of machine learning. In July 2018, DeepMind reported that its AI agents had taught themselves how to play the 1999 multiplayer 3D first-person shooter Quake III Arena, well enough to beat teams of human players. These agents learned how to play the game using no more information than the human players, with their only input being the pixels on the screen as they tried out random actions in game, and feedback on their performance during each game.

More recently DeepMind demonstrated an AI agent capable of superhuman performance across multiple classic Atari games, an improvement over earlier approaches where each AI agent could only perform well at a single game. DeepMind researchers say these general capabilities will be important if AI research is to tackle more complex real-world domains.

Machine learning systems are used all around us, and are a cornerstone of the modern internet.

Machine-learning systems are used to recommend which product you might want to buy next on Amazon, or which video you may want to watch on Netflix.

Every Google search uses multiple machine-learning systems, to understand the language in your query through to personalizing your results, so fishing enthusiasts searching for "bass" aren't inundated with results about guitars. Similarly Gmail's spam and phishing-recognition systems use machine-learning trained models to keep your inbox clear of rogue messages.

One of the most obvious demonstrations of the power of machine learning are virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

Each relies heavily on machine learning to support their voice recognition and ability to understand natural language, as well as needing an immense corpus to draw upon to answer queries.

But beyond these very visible manifestations of machine learning, systems are starting to find a use in just about every industry. These applications include: computer vision for driverless cars, drones and delivery robots; speech and language recognition and synthesis for chatbots and service robots; facial recognition for surveillance in countries like China; helping radiologists to pick out tumors in X-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs in healthcare; allowing for predictive maintenance on infrastructure by analyzing IoT sensor data; underpinning the computer vision that makes the cashierless Amazon Go supermarket possible; offering reasonably accurate transcription and translation of speech for business meetings -- the list goes on and on.

Deep learning could eventually pave the way for robots that can learn directly from humans, with researchers from Nvidia recently creating a deep-learning system designed to teach a robot how to carry out a task, simply by observing that job being performed by a human.

As you'd expect, the choice and breadth of data used to train systems will influence the tasks they are suited to.

For example, in 2016 Rachael Tatman, a National Science Foundation Graduate Research Fellow in the Linguistics Department at the University of Washington, found that Google's speech-recognition system performed better for male voices than female ones when auto-captioning a sample of YouTube videos, a result she ascribed to 'unbalanced training sets' with a preponderance of male speakers.

As machine-learning systems move into new areas, such as aiding medical diagnosis, the possibility of systems being skewed towards offering a better service or fairer treatment to particular groups of people will likely become more of a concern.

A heavily recommended course for beginners to teach themselves the fundamentals of machine learning is this free Stanford University and Coursera lecture series by AI expert and Google Brain founder Andrew Ng.

Another highly-rated free online course, praised for both the breadth of its coverage and the quality of its teaching, is this EdX and Columbia University introduction to machine learning, although students do mention it requires a solid knowledge of math up to university level.

Technologies designed to allow developers to teach themselves about machine learning are increasingly common, from AWS' deep-learning enabled camera DeepLens to Google's Raspberry Pi-powered AIY kits.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to the hardware needed to train and run machine-learning models, with Google letting Cloud Platform users test out its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

This cloud-based infrastructure includes the data stores needed to hold the vast amounts of training data, services to prepare that data for analysis, and visualization tools to display the results clearly.

Newer services even streamline the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise, similar to Microsoft's Azure Machine Learning Studio. In a similar vein, Amazon recently unveiled new AWS offerings designed to accelerate the process of training up machine-learning models.

For data scientists, Google's Cloud ML Engine is a managed machine-learning service that allows users to train, deploy and export custom machine-learning models based either on Google's open-sourced TensorFlow ML framework or the open neural network framework Keras, and which can now be used with the Python library scikit-learn and XGBoost.

Database admins without a background in data science can use Google's BigQueryML, a beta service that allows admins to call trained machine-learning models using SQL commands, allowing predictions to be made in-database, which is simpler than exporting data to a separate machine-learning and analytics environment.

For firms that don't want to build their own machine-learning models, the cloud platforms also offer AI-powered, on-demand services -- such as voice, vision, and language recognition. Microsoft Azure stands out for the breadth of on-demand services on offer, closely followed by Google Cloud Platform and then AWS.

Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella.

Early in 2018, Google expanded its machine-learning driven services to the world of advertising, releasing a suite of tools for making more effective ads, both digital and physical.

While Apple doesn't enjoy the same reputation for cutting edge speech recognition, natural language processing and computer vision as Google and Amazon, it is investing in improving its AI services, recently putting Google's former chief in charge of machine learning and AI strategy across the company, including the development of its assistant Siri and its on-demand machine learning service Core ML.

In September 2018, NVIDIA launched a combined hardware and software platform designed to be installed in datacenters that can accelerate the rate at which trained machine-learning models can carry out voice, video and image recognition, as well as other ML-related services.

The NVIDIA TensorRT Hyperscale Inference Platform uses NVIDIA Tesla T4 GPUs, which deliver up to 40x the performance of CPUs when using machine-learning models to make inferences from data, and the TensorRT software platform, which is designed to optimize the performance of trained neural networks.

There are a wide variety of software frameworks for getting started with training and running machine-learning models, typically for the programming languages Python, R, C++, Java and MATLAB.

Famous examples include Google's TensorFlow, the open-source library Keras, the Python library scikit-learn, the deep-learning framework Caffe and the machine-learning library Torch.


This AI Researcher Thinks We Have It All Wrong – Forbes

Dr. Luis Perez-Breva

Luis Perez-Breva is an MIT professor and the faculty director of innovation teams at the MIT School of Engineering. He is also an entrepreneur and part of The Martin Trust Center for MIT Entrepreneurship. Luis works on how we can use technology to make our lives better, and on how we can get new technology out into the world. On a recent AI Today podcast, Professor Perez-Breva got us to think deeply about our understanding of both artificial intelligence and machine learning.

Are we too focused on data?

Anyone who has been following artificial intelligence and machine learning knows the vital centrality of data. Without data, we can't train machine learning models. And without machine learning models, we don't have a way for systems to learn from experience. Surely, data needs to be the center of our attention to make AI systems a reality.

However, Dr. Perez-Breva thinks that we are overly focused on data, and perhaps that extensive focus is causing the goals of machine learning and AI to go astray. According to Luis, so much focus is put into obtaining data that we judge how good a machine learning system is by how much data was collected, how large the neural network is, and how much training data was used. When you collect a lot of data, you are using that data to build systems that are primarily driven by statistics. Luis says that we latch onto statistics when we feed AI so much data, and that we ascribe intelligence to systems when, in reality, all we have done is create large probabilistic systems that, by virtue of large data sets, exhibit things we ascribe to intelligence. He says that when our systems aren't learning as we want, the primary gut reaction is to give these AI systems more data, so that we don't have to think as much about the hard parts of generalization and intelligence.

Many would argue that there are some areas where you do need data to help teach AI. Computers are better able to learn image recognition and similar tasks by having more data: the more data, the better the networks, and the more accurate the results. On the podcast, Luis asked whether deep learning is powerful enough that this works on its own, or whether image recognition only works now because the data sets have grown big enough. Basically: is it the algorithm, or just the sheer quantity of data, that is making this work?
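One way to probe that question empirically is a learning curve: hold the algorithm fixed and vary the amount of training data. If accuracy keeps climbing as examples are added, the data is doing much of the work. A minimal sketch using scikit-learn's bundled digits dataset:

```python
# Learning curve: same algorithm, increasing amounts of training data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} training examples -> {score:.3f} cross-validated accuracy")
```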

Rather, what Luis argues is that if we can find a better way to structure the system as a whole, then the AI system should be able to reason through problems even with very limited data. Luis compares using machine learning in every application to the retail world. He talks about how physical stores see the success of online stores and try to copy it. One of the ways they are doing this is by using apps to navigate stores. Luis mentioned that he visited a Target where he had to use his phone to navigate the store, which was harder than simply looking at signs. Having a human to ask questions of is both faster and part of the experience of being in a brick-and-mortar retail location. Luis says he would much rather interact with a human at one of these locations than with a computer.

Is the problem deep learning?

He compares this to machine learning by saying that machine learning has a very narrow application. If you try to apply machine learning to every aspect of AI, you will end up with issues like the ones he encountered at Target. This is, essentially, treating neural networks as a hammer and every AI problem as a nail. No one technology or solution works for every application. Perhaps deep learning only works because of vast quantities of data? Maybe there's a better algorithm that can generalize better, transfer knowledge learned in one domain to another, and use smaller amounts of data to produce much higher-quality insights.

People have recently tried to automate many of the jobs that people do. Throughout history, Luis says, technology has killed businesses when it tries to replace humans; technology and businesses succeed when they expand on what humans can do. Attempting to replace humans is a difficult task and one that is going to lead companies down the road to failure. As humans, he points out, we crave human interaction. Even the generation that is constantly on its technology craves human interaction.

Luis also makes the point that many people mistakenly confuse automation and AI. Automation is using a computer to carry out specific tasks; it is not the creation of intelligence. This distinction has been drawn many times, and it is sketched in code below. Indeed, it's the fear of automation, and of fictional superintelligence, that has many people worried about AI. Dr. Perez-Breva notes that many ascribe human characteristics to machines, but this should not be the case with AI systems.
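In automation, a human writes the rule and the machine merely executes it; in machine learning, the rule is induced from examples. A toy contrast, with invented loan-style numbers and labels:

```python
# Toy contrast between automation (a fixed, hand-written rule) and
# machine learning (a rule induced from labelled examples).
from sklearn.tree import DecisionTreeClassifier

def automated_approval(income, debt):
    # Automation: a human wrote this rule; the computer just executes it.
    return income > 50_000 and debt < 10_000

# Machine learning: the rule is learned from past decisions instead.
X = [[60_000, 5_000], [30_000, 20_000], [80_000, 2_000], [25_000, 15_000]]
y = [1, 0, 1, 0]  # invented past approval decisions
model = DecisionTreeClassifier().fit(X, y)

print(automated_approval(60_000, 5_000), model.predict([[60_000, 5_000]])[0])
```

Both print an approval decision, but only the second involves anything learned rather than hand-written; neither involves intelligence in the sense Luis means.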

Rather, he sees AI systems as more akin to a new species with a different mode of intelligence than humans. In his opinion, researchers are very far from creating an AI similar to what you will find in books and movies. He blames movies for giving people the impression of robots (AI) killing people and being dangerous technologies. While there are good robots in movies, there are few of them, and they get pushed aside by bad robots. He points out that we need to move away from pushing these images of bad robots. Our focus needs to be on how artificial intelligence can help humans grow, and it would be beneficial if the movie-making industry could help with this. As such, AI should be thought of as a new intelligent species we're trying to create, not something meant to replace us.

A positive AI future

Despite the negative images and talk, Luis is sure that artificial intelligence is here to stay, at least for a while. So many companies have made large investments in AI that it would be difficult for them to simply stop using it or halt its development.

As a final question in the interview, Luis was asked where he sees the artificial intelligence industry going. Prefacing his answer with the observation that, per the earlier discussion, people are investing in machine learning and not true artificial intelligence, Luis said that he is happy with the investment businesses are making in what they call AI. He believes these investments will keep development of the technology going for a minimum of four years.

Once we can stop comparing humans to artificial intelligence, Luis believes we will see great advancements in what AI can do. He believes AI has the power to work alongside humans to unlock knowledge and tasks that we weren't previously able to reach. And he doesn't believe the point when this happens is far away; we are getting closer to it every day.

Many of Luis's ideas run contrary to the popular beliefs of people interested in the world of artificial intelligence. At the same time, he presents them in a very logical manner, and they are thought-provoking. Only time will tell whether he is right and where his ideas lead.

Go here to see the original:
This AI Researcher Thinks We Have It All Wrong - Forbes

Removing the robot factor from AI – Gigabit Magazine – Technology News, Magazine and Website

AI and machine learning have something of an image problem.

They've never been so widely discussed as topics, nor, arguably, has their potential been so widely debated. This is, to some extent, part of the problem. Artificial intelligence can, still, be anything and achieve anything. But until its results are put into practice for people, it remains a misunderstood concept, especially to the layperson.

While well-established industry thought leaders are rightly championing the fact that AI has the potential to be transformative and capable of a wide range of solutions, the lack of context for most people is fuelling fears that it is simply going to replace people's roles and take over tasks wholesale. It also ignores the fact that AI applications have been quietly assisting people's jobs, in a light-touch manner, for some time now, and people are still in those roles.

Many people imagine AI to be something it is not. Given the technology is still in a fast-development phase, some think it is helpful to consider it a type of "plug and play", black-box technology, believing this helps people put it into the context of how it will work and what it will deliver for businesses. In our opinion, this limits a true understanding of its potential and what it could be delivering for companies day in, day out.

The hyperbole is also not helping. The statements "we use AI" and "our product is AI-driven" have already become well-worn by enthusiastic salespeople and marketeers. While there's a great sales case to be made with that exciting assertion, it's rarely the truth of the situation. What is really meant by the current use of "artificial intelligence"? Arguably, AI is not yet a thing in its own right, i.e. the capability of machines to do the things which people do instinctively and which machines instinctively do not. Instead of being excited by the phrase "we do AI!", people should see it as a red flag to dig deeper into the technology and the AI capability in question.

Machine learning, similarly, doesn't benefit from sci-fi associations or big sales-patter bravado. In its simplest form, while machine learning sounds like a defined and independent process, it is actually a technique for delivering AI functions. It's maths, essentially, applied alongside data, processing power and technology to deliver an AI capability. Machine learning models don't execute actions or do anything themselves unless people put them to use. They are still human tools, to be deployed by someone to undertake a specific action.
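That "it's maths" point is easy to demonstrate: fitting a simple model is just linear algebra applied to data. Ordinary least squares regression, for instance, reduces to solving the normal equation w = (XᵀX)⁻¹Xᵀy in a few lines of NumPy (the numbers here are made up for illustration):

```python
# "It's maths, essentially": ordinary least squares via the normal equation.
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # bias column + one feature
y = np.array([2.1, 3.9, 6.2])                        # invented observations
w = np.linalg.solve(X.T @ X, X.T @ y)                # solves (X^T X) w = X^T y
print(w)  # the "learned" intercept and slope
```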

The tools and models are only as good as the human knowledge and skills programming them. People, especially in the legal sectors autologyx works with, are smart, adaptable and vastly knowledgeable. They can quickly shift from one case to another, and have their own methods and processes for approaching problem-solving in the workplace. Where AI is coming in to lift the load is on lengthy, detailed and highly repetitive tasks such as contract renewals. Humans understandably get bored when reviewing vast volumes of highly repetitive contracts to change just a few clauses and update each document. A machine learning solution does not get bored, and performs consistently with a high degree of accuracy, freeing those legal teams to work on more interesting, varied or complicated casework.
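As a rough illustration of the kind of tool being described (not autologyx's actual product), a toy text classifier can triage repetitive contract clauses; the clauses and labels below are invented:

```python
# Hedged sketch: a toy clause classifier of the sort that could triage
# contract-renewal paperwork. Training data here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "This agreement shall renew automatically for successive one-year terms.",
    "Either party may terminate this agreement with thirty days written notice.",
    "This agreement renews annually unless cancelled in writing.",
    "The parties agree to resolve disputes through binding arbitration.",
]
labels = ["renewal", "termination", "renewal", "dispute"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(clauses, labels)
print(clf.predict(["The contract will automatically renew each year."]))
```

A real deployment would need far more labelled examples and human review of the output, which is precisely the point: the model routes the repetitive work, and people handle the judgment calls.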

Together, AI, machine learning and automation are the arms and armour businesses across a range of sectors need to acquire to adapt and continue to compete in the future. The future of the legal industry, for instance, is still a human one, where knowledge of people will continue to be an asset. AI in that sector is more focused on codifying and leveraging that intelligence; while the machine and AI models learn and grow from people, those people will continue to grow and expand their knowledge within the sector too. Today, AI and ML technologies are only as good as the people programming them.

As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, put it: "AI is neither good nor evil. It's a tool. A technology for us to use. How we choose to apply it is entirely up to us."

By Ben Stoneham, founder and CEO, autologyx

Continue reading here:
Removing the robot factor from AI - Gigabit Magazine - Technology News, Magazine and Website