Cryptocurrency Market Research Study including Growth Factors, Types and Application by regions from 2020 to 2026 – The Daily Philadelphian

The Cryptocurrency Market Research Report covers the present scenario and the growth prospects of the cryptocurrency industry for 2020-2026. The report covers the market landscape, its growth prospects over the coming years, and the leading companies active in this market. It has been prepared based on an in-depth market analysis with inputs from industry experts. To calculate the market size, the report considers the revenue generated from cryptocurrency sales globally.

The Cryptocurrency market research study considers the present scenario of the cryptocurrency industry and its market dynamics for the period 2020-2026. The report covers both the demand and supply aspects of the market and provides market sizing, share, forecast estimation and approach, a COVID-19 aftermath analyst view, strategic analysis, revenue opportunities, industry trends, competition outlook, insights and growth-relevancy mapping, growth drivers, and vendor analysis.

Cryptocurrency Market reports under the Cryptocurrency industry are supported by various macro- and microeconomic factors impacting the industry. We browse through historical data and provide an overview of the emerging markets and the next big opportunities for investors within the niche market. After the COVID-19 pandemic, increasing demand from emerging countries provides a good business opportunity for companies to invest in the coming years. Our reports are updated as industry regulatory policies change and offer insight depending on clients' requirements.

The global Cryptocurrency market has been subjected to several regulatory compliances and crucial coding terminology over the years. Adherence to regulatory standards remains crucial for vendors.

Request a sample copy of the report to get more information about the market at https://in4research.com/sample-request/37

The study profiles and examines leading companies and other prominent companies operating in the Cryptocurrency industry.

List of key players profiled in the report:

Cryptocurrency Market segmentation as per below:

Based on Product Types:

Applications covered in this report:

COVID-19 Impact on Cryptocurrency Industry

The outbreak of COVID-19 has brought along a global recession, which has impacted several industries. Along with this impact, the pandemic has also generated a few new business opportunities for the Cryptocurrency market. The overall competitive landscape and market dynamics of Cryptocurrency have been disrupted due to the pandemic. All these disruptions and impacts have been analyzed quantifiably in this report, backed by market trends, events, and revenue-shift analysis. The COVID impact analysis also covers strategic adjustments for Tier 1, 2, and 3 players in the Cryptocurrency market.

The competitive environment in the Cryptocurrency market is intensifying. The market currently witnesses the presence of several major as well as other prominent vendors contributing toward the market growth. However, the market is also observing an influx of new local vendors.

Learn more about the COVID-19 impact analysis and post-COVID opportunities at https://in4research.com/impactC19-request/37

Vendors can consider targeting key regions such as APAC, North America, and Europe to gather maximum customer attention. Countries in the APAC region such as China, India, and Japan are expected to display significant growth prospects in the future due to high economic growth forecasts and large populations leading to high consumption of goods and products.

Regional Overview & Analysis of Cryptocurrency Market:

The changing regulatory compliance scenario and the growing purchasing power among consumers are likely to promise well for the North America market. New product development and technological advancements remain key for competitors to capitalize upon in the Cryptocurrency industry across the globe.

Ask for more details or request custom reports from our industry experts at https://in4research.com/customization/37

Key Market Insights:

Go here to read the rest:
Cryptocurrency Market Research Study including Growth Factors, Types and Application by regions from 2020 to 2026 - The Daily Philadelphian

OECD shines light on the future of cryptocurrency taxation – Forkast News

Trying to come to grips with the nascent technology of cryptocurrency, with all its concomitant risks and potential, governments around the world have taken very different approaches toward cryptocurrency taxation and penalties for crypto tax evasion. It is not easy to strike the right balance. Yet it is also imperative to develop cryptocurrency taxation systems that are fair, continue to encourage innovation, close the loopholes on tax cheats and offer companies as well as investors clarity so that they can carry out financial planning and make informed investment and business decisions.

"Clear and consistent regulation is needed for dealing with virtual assets, because in order to work with them companies need to understand the framework and the rulebook they are working under," Douglas Borthwick, chief marketing officer of digital assets trading platform INX, told Forkast.News. Without a clear-cut tax policy in place, Borthwick added, investors can hardly be expected to have faith in the value of both their own assets and those underlying the industry at large.

Recognizing that there are many issues, gaps and unanswered questions in the emerging field of cryptocurrency taxation, the Organisation for Economic Co-operation and Development (OECD) has published "Taxing Virtual Currencies: An Overview of Tax Treatments and Emerging Tax Policy Issues" in advance of its 2020 Global Blockchain Policy Forum taking place this week.

Today, the OECD blockchain forum will offer a special Deep Dive panel discussion titled "Crypto-tax: Ensure a robust and transparent tax policy framework." The panel's speakers will include a vice president of Coinbase, a U.S. Department of Treasury senior counsel and other tax law and policy experts.

The OECD's cryptocurrency tax report, which was presented to G20 finance ministers and central bank governors last month, analyzes how 50 jurisdictions treat crypto-assets. The report also surveys emerging issues such as the rise of DeFi (decentralized finance) and central bank-backed digital currencies (CBDCs), which have been rapidly gaining traction around the world this year, with China, France, Australia, Cambodia and many other countries now racing to develop their own.

One of the key findings of the OECD crypto tax report is the importance of a coherent policy toward cryptocurrency as well as the implications of crypto tax evasion, issues that until now have largely been neglected in favor of crypto's macroeconomic and anti-money-laundering considerations.

The very nature of cryptocurrency is also what complicates its taxation. "Crypto in its purest form is decentralized and anonymous," Borthwick said. While these qualities define its potential, they also engender possibilities for exploitation. The challenge facing tax authorities across the board, Borthwick said, is to see where they can use crypto's best attributes while limiting its worst.

The most common way that countries have attempted to achieve this is by taxing all income from mining and cryptocurrency exchanges as capital gains, according to the OECD report. Few countries make a distinction between business and personal activity. Virtual currencies, according to the OECD, also form part of a taxpayer's assets and are taxable under wealth and inheritance taxes.

See related post: How IRS treats crypto staking: tax issues every crypto investor should know

In its recommendations to policymakers, the OECD report emphasizes the need to provide clear guidance and legislative frameworks for the tax treatment of crypto-assets and virtual currencies, and to allow frequent updates as necessary to keep up with such a fast-moving, innovative field. Accordingly, appropriate guidance is urged with regard to other blockchain innovations for which existing tax treatments may not be appropriate.

The OECD report also highlights the need to improve compliance and suggests simplifying rules of valuation as one way to do so.

Overall, the OECD recommends that the direction of cryptocurrency tax policy should correspond with that of other policies in related sectors, such as environmental impact. The report points out that cryptocurrency mining can be very energy-intensive. Tax policy should also align with the worldwide shift toward electronic payment systems as a replacement for cash, a trend that has accelerated during the Covid-19 pandemic.

"Considerable scope remains to improve guidance on tax treatments, particularly in the emerging areas of stablecoins, proof-of-stake consensus mechanisms, and decentralized finance," Grace Perez-Navarro, deputy director of the OECD Centre for Tax Policy and Administration, told Forkast.News. The OECD's Taxing Virtual Currencies report considers these issues and stresses that clearer guidance would provide certainty for taxpayers and facilitate compliance.

More:
OECD shines light on the future of cryptocurrency taxation - Forkast News

The Korean War’s Forgotten Lessons on the Evil of Intervention – Consortium News

The secrecy and deceit surrounding U.S. war crimes has had catastrophic consequences in this century, writes James Bovard.

With her brother on her back a war-weary Korean girl passes a stalled M-26 tank in Haengju, Korea, June 9, 1951. (U.S. Army, Maj. R.V. Spencer)

By James Bovard, JimBovard.com

This year is the 70th anniversary of the start of the Korean War, a conflict from which Washington policymakers learned nothing. Almost 40,000 American soldiers died in that conflict, which should have permanently vaccinated the nation against the folly and evil of foreign intervention. Instead, the war was retroactively redefined. As President Barack Obama declared in 2013, "That war was no tie. Korea was a victory."


The war began with what President Harry Truman claimed was a surprise invasion on June 25, 1950, by the North Korean army across the dividing line with South Korea that was devised after World War II. But the U.S. government had ample warnings of the pending invasion. According to the late Justin Raimondo, founder of antiwar.com, the conflict actually started with a series of attacks by South Korean forces, aided by the U.S. military:

From 1945-1948, American forces aided [South Korean President Syngman] Rhee in a killing spree that claimed tens of thousands of victims: the counterinsurgency campaign took a high toll in Kwangju, and on the island of Cheju-do, where as many as 60,000 people were murdered by Rhee's U.S.-backed forces.

The North Korean army quickly routed both South Korean and U.S. forces. A complete debacle was averted after Gen. Douglas MacArthur masterminded a landing of U.S. troops at Inchon. After he routed the North Korean forces, MacArthur was determined to continue pushing northward regardless of the danger of provoking a much broader war.

Brigadier General Courtney Whitney (left), General of the Army Douglas MacArthur (seated) and Major General Edward Almond (right) observe the shelling of Inchon from the USS Mount McKinley. (U.S. Army, Nutter, Wikimedia Commons)

By the time the U.S. forces drove the North Korean army back across the border between the two Koreas, roughly 5,000 American troops had been killed. The Pentagon had plenty of warning that the Chinese would intervene if the U.S. Army pushed too close to the Chinese border. But the euphoria that erupted after Inchon blew away all common sense and drowned out the military voices who warned of a catastrophe. One U.S. Army colonel responded to a briefing on the Korea situation in Tokyo in 1950 by storming out and declaring, "They're living in a goddamn dream land."

The Chinese military attack resulted in the longest retreat in the history of America's armed forces, a debacle that was valorized by allusion in the 1986 Clint Eastwood movie, Heartbreak Ridge. By 1951, the Korean War had become intensely unpopular in the United States, more unpopular than the Vietnam War ever was. At least the war, which President Harry Truman insisted on mislabeling as a "police action," destroyed the presidency of the man who launched it. By the time a ceasefire was signed in mid-1953, almost 40,000 Americans had been killed in a conflict that ended with borders similar to those at the start of the war.

Disasters

Perhaps the biggest disaster of the Korean War was that intellectuals and foreign-policy experts succeeded in redefining the Korean conflict as an American victory. As Georgetown University professor Derek Leebaert noted in his book Magic and Mayhem, "What had been regarded as a bloody stalemate transformed itself in Washington's eyes; ten years later it had become an example of a successful limited war." Already by the mid-1950s, elite opinion began to surmise that it had been a victory. Leebaert explained, "Images of victory in Korea shaped the decision to escalate in 1964-65, helping to explain why America pursued a war of attrition."

Even worse, the notion that America has never lost a war remained part of the national myth, and the notion of having prevailed in Korea became a justification for going big in Vietnam. But as Leebaert noted, in Vietnam, "[the U.S. Army] had forgotten everything it had learned about counterinsurgency in Korea as well."

When the American media noted the 70th anniversary of the start of the war this past June, they paid little or no attention to the war's dark side. The media ignored perhaps the war's most important lesson: the U.S. government has almost unlimited sway to hide its own war crimes.

During the Korean War, Americans were deluged with official pronouncements that the U.S. military was taking all possible steps to protect innocent civilians. Because the evils of communism were self-evident, few questions arose about how the United States was thwarting Red aggression. When a U.S. Senate subcommittee appointed in 1953 by Sen. Joseph McCarthy investigated Korean War atrocities, the committee explicitly declared that war crimes were defined as those acts committed by enemy nations.

In 1999, 46 years after the cease fire in Korea, the Associated Press exposed a 1950 massacre of Korean refugees at No Gun Ri. U.S. troops drove Koreans out of their village and forced them to remain on a railroad embankment. Beginning on July 25, 1950, the refugees were strafed by U.S. planes and machine guns over the following three days. Hundreds of people, mostly women and children, were killed. The 1999 AP story was widely denounced by American politicians and some media outlets as a slander on American troops.

The Pentagon promised an exhaustive investigation. In January 2001, the Pentagon released a 300-page report purporting to prove that the No Gun Ri killings were merely an unfortunate tragedy caused by trigger-happy soldiers frightened by approaching refugees.

President Bill Clinton announced his regret that Korean civilians lost their lives at No Gun Ri. In an interview, he was asked why he used "regret" instead of "apology." He declared, "I believe that the people who looked into it could not conclude that there was a deliberate act, decided at a high-enough level in the military hierarchy, to acknowledge that, in effect, the Government had participated in something that was terrible." Clinton specified that there was no evidence of wrongdoing high enough in the chain of command in the Army to say that, in effect, the Government was responsible.

2008 photo showing a concrete abutment outside the No Gun Ri bridge, where investigators' white paint identifies bullet marks and embedded fragments from U.S. Army gunfire in the 1950 shooting of South Korean refugees. (Cjthanley, CC BY-SA 3.0, Wikimedia Commons)

But the atrocities against civilians had been common knowledge among U.S. troops 50 years earlier. As Charles Hanley, Sang-Hun Choe and Martha Mendoza noted in their 2001 book, The Bridge at No Gun Ri, the Pentagon in 1952 withdrew official endorsement from RKO's One Minute to Zero, a Korean War movie in which an Army colonel played by actor Robert Mitchum orders artillery fire on a column of refugees. The Pentagon fretted that this sequence could be utilized for anti-American propaganda and banned the film from being shown on U.S. military bases.

South Koreans fleeing south in mid-1950 after the North Korean army invaded. (U.S. Defense Department, Wikimedia Commons)

In 2005, Sahr Conway-Lanz, a Harvard University doctoral student, discovered a letter in the National Archives from the U.S. ambassador to Korea, John Muccio, sent to Assistant Secretary of State Dean Rusk on the day the No Gun Ri massacre commenced. Muccio summarized a new policy from a meeting between U.S. military and South Korean officials: "If refugees do appear from north of U.S. lines they will receive warning shots, and if they then persist in advancing they will be shot." The new policy was radioed to Army units around Korea on the morning the No Gun Ri massacre began. The U.S. military feared that North Korean troops might be hiding amidst the refugees. The Pentagon initially claimed that its investigators never saw Muccio's letter, but it was in the specific research file used for its report.

Slaughtering Civilians

Unidentified unit of 1st Cavalry Division withdraws southward, July 29, 1950, the day a division battalion pulled back from No Gun Ri after killing large numbers of trapped South Korean refugees there. (U.S. Army, Wikimedia Commons)

Conway-Lanz's 2006 book Collateral Damage: Americans, Noncombatant Immunity, and Atrocity after World War II quoted an official U.S. Navy history of the first six months of the Korean War stating that the policy of strafing civilians was "wholly defensible." An official Army history noted, "Eventually, it was decided to shoot anyone who moved at night." A report for the aircraft carrier USS Valley Forge justified attacking civilians because the Army insisted that groups of more than eight to ten people were to be considered troops, and were to be attacked.

In 2007, the Army recited its original denial: "No policy purporting to authorize soldiers to shoot refugees was ever promulgated to soldiers in the field." But the Associated Press exposed more dirt from the U.S. archives: more than a dozen documents in which high-ranking U.S. officers tell troops that refugees are "fair game," for example, and order them to "shoot all refugees coming across river," were found by the AP in the investigators' own archived files after the 2001 inquiry. None of those documents was disclosed in the Army's 300-page public report. A former Air Force pilot told investigators that his plane and three others strafed refugees at the same time as the No Gun Ri massacre; the official report claimed that all pilots interviewed knew nothing about such orders. Evidence also surfaced of massacres like No Gun Ri. On Sept. 1, 1950, the destroyer USS DeHaven, at the Army's insistence, fired on a seaside refugee encampment at Pohang, South Korea. Survivors say 100 to 200 people were killed.

In this July 1950 U.S. Army file photograph once classified top secret, South Korean soldiers walk among some of the thousands of South Korean political prisoners shot at Taejon, South Korea, early in the Korean War. (Major Abbott/U.S. Army, Wikimedia Commons)

Slaughtering civilians en masse became routine procedure after the Chinese army intervened in the Korean War in late 1950. MacArthur spoke of turning North Korean-held territory into a desert. The U.S. military eventually expanded its definition of a military target to any structure that could shelter enemy troops or supplies. Gen. Curtis LeMay summarized the achievements: "We burned down every town in North Korea and some in South Korea, too." A million civilians may have been killed during the war. A South Korean government Truth and Reconciliation Commission uncovered many previously unreported atrocities and concluded that American troops killed groups of South Korean civilians on 138 separate occasions during the Korean War, The New York Times reported.

Truth delayed is truth defused. The Pentagon strategy on Korean War atrocities succeeded because it left facts to the historians, not the policymakers. The truth about No Gun Ri finally slipped out 10 presidencies later. Even more damaging, the Rules of Engagement for killing Korean civilians were covered up for four more U.S. wars. If U.S. policy for slaying Korean refugees had been exposed during that war, it might have curtailed similar killings in Vietnam (many of which were not revealed until decades after the war).

Former congressman and decorated Korean War veteran Pete McCloskey (R-Calif.) warned, "The government will always lie about embarrassing matters."

The same shenanigans permeate other U.S. wars. The secrecy and deceit surrounding U.S. warring has had catastrophic consequences in this century. The Bush administration exploited the 9/11 attacks to justify attacking Iraq in 2003, and it was not until 2016 that the U.S. government revealed documents exposing the Saudi government's role in financing the 9/11 hijackers (15 of the 19 of whom were Saudi citizens). The Pentagon covered up the vast majority of U.S. killings of Iraqi civilians until Bradley Manning and WikiLeaks exposed them in 2010. There are very likely reams of evidence of duplicity and intentional slaughter of civilians in U.S. government files on its endlessly confused and contradictory Syrian intervention.

When politicians or generals appear itching to pull the United States into another foreign war, remember that truth is routinely the first casualty. It is naive to expect a government that recklessly slays masses of civilians to honestly investigate itself and announce its guilt to the world. Self-government is a mirage if Americans do not receive enough information to judge killings committed in their name.

James Bovard is a policy adviser to The Future of Freedom Foundation. He is a USA Today columnist and has written for The New York Times, The Wall Street Journal, The Washington Post, New Republic, Reader's Digest, Playboy, American Spectator, Investor's Business Daily, and many other publications. He is the author of Freedom Frauds: Hard Lessons in American Liberty (2017, published by FFF); Public Policy Hooligan (2012); Attention Deficit Democracy (2006); The Bush Betrayal (2004); Terrorism and Tyranny (2003); Feeling Your Pain (2000); Freedom in Chains (1999); Shakedown (1995); Lost Rights (1994); The Fair Trade Fraud (1991); and The Farm Fiasco (1989). He was the 1995 co-recipient of the Thomas Szasz Award for Civil Liberties work, awarded by the Center for Independent Thought, and the recipient of the 1996 Freedom Fund Award from the Firearms Civil Rights Defense Fund of the National Rifle Association. His book Lost Rights received the Mencken Award as Book of the Year from the Free Press Association. His Terrorism and Tyranny won Laissez Faire Books' Lysander Spooner award for the Best Book on Liberty in 2003. Read his blog. Send him email.

This article was originally published in the September 2020 edition of Future of Freedom and on the author's blog and is reprinted with his permission.

The views expressed are solely those of the author and may or may not reflect those of Consortium News.


Read this article:
The Korean War's Forgotten Lessons on the Evil of Intervention - Consortium News

The 12 Coolest Machine-Learning Startups Of 2020 – CRN

Learning Curve

Artificial intelligence has been a hot technology area in recent years and machine learning, a subset of AI, is one of the most important segments of the whole AI arena.

Machine learning is the development of intelligent algorithms and statistical models that improve software through experience without the need to explicitly code those improvements. A predictive analysis application, for example, can become more accurate over time through the use of machine learning.
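
As a concrete illustration of that point, here is a minimal sketch, using scikit-learn's bundled digits dataset (an assumption made for the example, not anything from the article), of a classifier whose test accuracy improves as it is trained on more examples:

```python
# Minimal sketch: a predictive model becomes more accurate as it sees more
# training data. Uses scikit-learn's small built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on progressively larger slices of the data; test accuracy climbs.
for n in (100, 400, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n} training examples -> accuracy {model.score(X_test, y_test):.3f}")
```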

But machine learning has its challenges. Developing machine-learning models and systems requires a confluence of data science, data engineering and development skills. Obtaining and managing the data needed to develop and train machine-learning models is a significant task. And implementing machine-learning technology within real-world production systems can be a major hurdle.

Here's a look at a dozen startup companies, some that have been around for a few years and some just getting off the ground, that are addressing the challenges associated with machine learning.

AI.Reverie

Top Executive: Daeil Kim, Co-Founder, CEO

Headquarters: New York

AI.Reverie develops AI and machine-learning technology for data generation, data labeling and data enhancement tasks for the advancement of computer vision. The company's simulation platform is used to help acquire, curate and annotate the large amounts of data needed to train computer vision algorithms and improve AI applications.

In October AI.Reverie was named a Gartner Cool Vendor in AI core technologies.

Anodot

Top Executive: David Drai, Co-Founder, CEO

Headquarters: Redwood City, Calif.

Anodot's Deep 360 autonomous business monitoring platform uses machine learning to continuously monitor business metrics, detect significant anomalies and help forecast business performance.

Anodot's algorithms have a contextual understanding of business metrics, providing real-time alerts that help users cut incident costs by as much as 80 percent.
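
As a generic illustration of anomaly detection on a business metric (a sketch only, not Anodot's proprietary algorithms; the metric and contamination rate are invented for the example), an off-the-shelf model can be fit to historical values and used to flag outliers:

```python
# Illustrative anomaly detection over a synthetic business metric.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
metric = rng.normal(loc=100.0, scale=5.0, size=500)  # e.g. hourly revenue
metric[480] = 160.0                                  # inject an incident

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(metric.reshape(-1, 1))  # -1 marks an anomaly

for i in np.where(labels == -1)[0]:
    print(f"alert: unusual value {metric[i]:.1f} at point {i}")
```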

Anodot has been granted patents for technology and algorithms in such areas as anomaly score, seasonality and correlation. Earlier this year the company raised $35 million in Series C funding, bringing its total funding to $62.5 million.

BigML

Top Executive: Francisco Martin, Co-Founder, CEO

Headquarters: Corvallis, Ore.

BigML offers a comprehensive, managed machine-learning platform for easily building and sharing datasets and data models, and making highly automated, data-driven decisions. The company's programmable, scalable machine-learning platform automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery and topic modeling tasks.

The BigML Preferred Partner Program supports referral partners and partners that sell BigML and oversee implementation projects. Partner A1 Digital, for example, has developed a retail application on the BigML platform that helps retailers predict sales cannibalization, when promotions or other marketing activity for one product can lead to reduced demand for other products.

StormForge

Top Executive: Matt Provo, Founder, CEO

Headquarters: Cambridge, Mass.

StormForge provides machine learning-based, cloud-native application testing and performance optimization software that helps organizations optimize application performance in Kubernetes.

StormForge was founded under the name Carbon Relay and developed its Red Sky Ops tools that DevOps teams use to manage a large variety of application configurations in Kubernetes, automatically tuning them for optimized performance no matter what IT environment they're operating in.

This week the company acquired German company Stormforger and its performance testing-as-a-platform technology. The company has rebranded as StormForge and renamed its integrated product the StormForge Platform, a comprehensive system for DevOps and IT professionals that can proactively and automatically test, analyze, configure, optimize and release containerized applications.

In February the company said that it had raised $63 million in a funding round from Insight Partners.

Comet.ML

Top Executive: Gideon Mendels, Co-Founder, CEO

Headquarters: New York

Comet.ML provides a cloud-hosted machine-learning platform for building reliable machine-learning models that help data scientists and AI teams track datasets, code changes, experimentation history and production models.

Launched in 2017, Comet.ML has raised $6.8 million in venture financing, including $4.5 million in April 2020.

Dataiku

Top Executive: Florian Douetteau, Co-Founder, CEO

Headquarters: New York

Dataiku's goal with its Dataiku DSS (Data Science Studio) platform is to move AI and machine-learning use beyond lab experiments into widespread use within data-driven businesses. Dataiku DSS is used by data analysts and data scientists for a range of machine-learning, data science and data analysis tasks.

In August Dataiku raised an impressive $100 million in a Series D round of funding, bringing its total financing to $247 million.

Dataiku's partner ecosystem includes analytics consultants, service partners, technology partners and VARs.

DotData

Top Executive: Ryohei Fujimaki, Founder, CEO

Headquarters: San Mateo, Calif.

DotData says its DotData Enterprise machine-learning and data science platform is capable of reducing AI and business intelligence development projects from months to days. The companys goal is to make data science processes simple enough that almost anyone, not just data scientists, can benefit from them.

The DotData platform is based on the company's AutoML 2.0 engine that performs full-cycle automation of machine-learning and data science tasks. In July the company debuted DotData Stream, a containerized AI/ML model that enables real-time predictive capabilities.

Eightfold.AI

Top Executive: Ashutosh Garg, Co-Founder, CEO

Headquarters: Mountain View, Calif.

Eightfold.AI develops the Talent Intelligence Platform, a human resource management system that utilizes AI deep learning and machine-learning technology for talent acquisition, management, development, experience and diversity. The Eightfold system, for example, uses AI and ML to better match candidate skills with job requirements and improves employee diversity by reducing unconscious bias.

In late October Eightfold.AI announced a $125 million Series round of financing, putting the startups value at more than $1 billion.

H2O.ai

Top Executive: Sri Ambati, Co-Founder, CEO

Headquarters: Mountain View, Calif.

H2O.ai wants to democratize the use of artificial intelligence for a wide range of users.

The company's H2O open-source AI and machine-learning platform, H2O Driverless AI automatic machine-learning software, H2O MLOps and other tools are used to deploy AI-based applications in financial services, insurance, health care, telecommunications, retail, pharmaceutical and digital marketing.

H2O.ai recently teamed up with data science platform developer KNIME to integrate Driverless AI for AutoML with KNIME Server for workflow management across the entire data science life cycle, from data access to optimization and deployment.

Iguazio

Top Executive: Asaf Somekh, Co-Founder, CEO

Headquarters: New York

The Iguazio Data Science Platform for real-time machine learning applications automates and accelerates machine-learning workflow pipelines, helping businesses develop, deploy and manage AI applications at scale that improve business outcomes, an approach the company calls MLOps.

In early 2020 Iguazio raised $24 million in new financing, bringing its total funding to $72 million.

OctoML

Top Executive: Luis Ceze, Co-Founder, CEO

Headquarters: Seattle

OctoML's Software-as-a-Service Octomizer makes it easier for businesses and organizations to put deep learning models into production more quickly on different CPU and GPU hardware, including at the edge and in the cloud.

OctoML was founded by the team that developed the Apache TVM machine-learning compiler stack project at the University of Washington's Paul G. Allen School of Computer Science & Engineering. OctoML's Octomizer is based on the TVM stack.

Tecton

Top Executive: Mike Del Balso, Co-Founder, CEO

Headquarters: San Francisco

Tecton just emerged from stealth in April 2020 with its data platform for machine learning that enables data scientists to turn raw data into production-ready machine-learning features. The startups technology is designed to help businesses and organizations harness and refine vast amounts of data into the predictive signals that feed machine-learning models.

The company's three founders, CEO Mike Del Balso, CTO Kevin Stumpf and Engineering Vice President Jeremy Hermann, previously worked together at Uber, where they developed the Michelangelo machine-learning platform that the ride-sharing company used to scale its operations to thousands of production models serving millions of transactions per second, according to Tecton.

The company started with $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia.

Read more from the original source:
The 12 Coolest Machine-Learning Startups Of 2020 - CRN

Commentary: Pathmind applies AI, machine learning to industrial operations – FreightWaves

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

In this installment of the AI in Supply Chain series (#AIinSupplyChain), we explore how Pathmind, an early-stage startup based in San Francisco, is helping companies apply simulation and reinforcement learning to industrial operations.

I asked Chris Nicholson, CEO and founder of Pathmind, "What is the problem that Pathmind solves for its customers? Who is the typical customer?"

Nicholson said: "The typical Pathmind customer is an industrial engineer working at a simulation consulting firm or on the simulation team of a large corporation with industrial operations to optimize. This ranges from manufacturing companies to the natural resources sector, such as mining and oil and gas. Our clients build simulations of physical systems for routing, job scheduling or price forecasting, and then search for strategies to get more efficient."

Pathmind's software is suited for manufacturing resource management, energy usage management optimization and logistics optimization.

As with every other startup that I have highlighted as a case in this #AIinSupplyChain series, I asked, "What is the secret sauce that makes Pathmind successful? What is unique about your approach? Deep learning seems to be all the rage these days. Does Pathmind use a form of deep learning? Reinforcement learning?"

Nicholson responded: "We automate tasks that our users find tedious or frustrating so that they can focus on what's interesting. For example, we set up and maintain a distributed computing cluster for training algorithms. We automatically select and tune the right reinforcement learning algorithms, so that our users can focus on building the right simulations and coaching their AI agents."

Echoing topics that we have discussed in earlier articles in this series, he continued: "Pathmind uses some of the latest deep reinforcement learning algorithms from OpenAI and DeepMind to find new optimization strategies for our users. Deep reinforcement learning has achieved breakthroughs in gaming, and it is beginning to show the same performance for industrial operations and supply chain."
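
For readers new to the technique, the sketch below shows reinforcement learning in its simplest tabular form: an agent learns, by trial and error, which action to take in each state to reach a goal. The toy "routing" problem, states, rewards and hyperparameters are all invented for illustration; Pathmind's actual stack uses deep RL algorithms, not this tabular Q-learning.

```python
# Toy tabular Q-learning: learn to reach state 4 from state 0 by moving
# left (-1) or right (+1). Purely illustrative of the RL idea.
import random

n_states, actions = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.choice(actions)                     # explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])  # exploit
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else -0.01  # small step cost
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy should point every non-terminal state to the right (+1).
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})
```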

On its website, Pathmind describes saving a large metals processor 10% of its expenditures on power. It also describes the use of its software to increase ore preparation by 19% at an open-pit mining site.

Given how difficult it is to obtain good quality data for AI and machine learning systems for industrial settings, I asked how Pathmind handles that problem.

"Simulations generate synthetic data, and lots of it," said Slin Lee, Pathmind's head of engineering. "The challenge is to build a simulation that reflects your underlying operations, but there are many tools to validate results."

"Once you pass the simulation stage, you can integrate your reinforcement learning policy into an ERP. Most companies have a lot of the data they need in those systems. And yes, there's always data cleansing to do," he added.

As the customer success examples Pathmind provides on its website suggest, mining companies are increasingly looking to adopt and implement new software to increase efficiencies in their internal operations. This is happening because the industry as a whole runs on very old technology, and deposits of ore are becoming increasingly difficult to access as existing mines reach maturity. Moreover, the growing trend toward the decarbonization of supply chains, and the regulations that will eventually follow to make decarbonization a requirement, provide an incentive for mining companies to seize the initiative in figuring out how to achieve that goal by implementing new technology.

The areas in which AI and machine learning are making the greatest inroads are: mineral exploration, using geological data to make the process of seeking new mineral deposits less prone to error and waste; predictive maintenance and safety, using data to preemptively repair expensive machinery before breakdowns occur; cyberphysical systems, creating digital models of the mining operation in order to quickly simulate various scenarios; and autonomous vehicles, using autonomous trucks and other autonomous vehicles and machinery to move resources within the area in which mining operations are taking place.

According to Statista, "The revenue of the top 40 global mining companies, which represent a vast majority of the whole industry, amounted to some 692 billion U.S. dollars in 2019. The net profit margin of the mining industry decreased from 25 percent in 2010 to nine percent in 2019."

The trend toward mining companies and other natural-resource-intensive industries adopting new technology is going to continue. So this is a topic we will continue to pay attention to in this column.

Conclusion

If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story at FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Dig deeper into the #AIinSupplyChain Series with FreightWaves:

Commentary: Optimal Dynamics the decision layer of logistics? (July 7)

Commentary: Combine optimization, machine learning and simulation to move freight (July 17)

Commentary: SmartHop brings AI to owner-operators and brokers (July 22)

Commentary: Optimizing a truck fleet using artificial intelligence (July 28)

Commentary: FleetOps tries to solve data fragmentation issues in trucking (Aug. 5)

Commentary: Bulgaria's Transmetrics uses augmented intelligence to help customers (Aug. 11)

Commentary: Applying AI to decision-making in shipping and commodities markets (Aug. 27)

Commentary: The enabling technologies for the factories of the future (Sept. 3)

Commentary: The enabling technologies for the networks of the future (Sept. 10)

Commentary: Understanding the data issues that slow adoption of industrial AI (Sept. 16)

Commentary: How AI and machine learning improve supply chain visibility, shipping insurance (Sept. 24)

Commentary: How AI, machine learning are streamlining workflows in freight forwarding, customs brokerage (Oct. 1)

Commentary: Can AI and machine learning improve the economy? (Oct. 8)

Commentary: Savitude and StyleSage leverage AI, machine learning in fashion retail (Oct. 15)

Commentary: How Japan's ABEJA helps large companies operationalize AI, machine learning (Oct. 26)

Author's disclosure: I am not an investor in any early-stage startups mentioned in this article, either personally or through REFASHIOND Ventures. I have no other financial relationship with any entities mentioned in this article.

View post:
Commentary: Pathmind applies AI, machine learning to industrial operations - FreightWaves

Before machine learning can become ubiquitous, here are four things we need to do now – SiliconANGLE News

It wasn't too long ago that concepts such as communicating with your friends in real time through text or accessing your bank account information all from a mobile device seemed outside the realm of possibility. Today, thanks in large part to the cloud, these actions are so commonplace, we hardly even think about these incredible processes.

Now, as we enter the golden age of machine learning, we can expect a similar boom of benefits that previously seemed impossible.

Machine learning is already helping companies make better and faster decisions. In healthcare, the use of predictive models created with machine learning is accelerating research and discovery of new drugs and treatment regimens. In other industries, it's helping remote villages of Southeast Africa gain access to financial services and matching individuals experiencing homelessness with housing.

In the short term, we're encouraged by the applications of machine learning already benefiting our world. But it has the potential to have an even greater impact on our society. In the future, machine learning will be intertwined and under the hood of almost every application, business process and end-user experience.

However, before this technology becomes so ubiquitous that it's almost boring, there are four key barriers to adoption we need to clear first:

The only way that machine learning will truly scale is if we as an industry make it easier for everyone, regardless of skill level or resources, to be able to incorporate this sophisticated technology into applications and business processes.

To achieve this, companies should take advantage of tools that have intelligence directly built into applications from which their entire organization can benefit. For example, Kabbage Inc., a data and technology company providing small business cash flow solutions, used artificial intelligence to adapt and help quickly process an unprecedented number of small business loans and unemployment claims caused by COVID-19 while preserving more than 945,000 jobs in America. By folding artificial intelligence into personalization, document processing, enterprise search, contact center intelligence, supply chain or fraud detection, all workers can benefit from machine learning in a frictionless way.

As processes go from manual to automatic, workers are free to innovate and invent, and companies are empowered to be proactive instead of reactive. And as this technology becomes more intuitive and accessible, it can be applied to nearly every problem imaginable from the toughest challenges in the information technology department to the biggest environmental issues in the world.

According to the World Economic Forum, the growth of AI could create 58 million net new jobs in the next few years. However, research suggests that there are currently only 300,000 AI engineers worldwide, and AI-related job postings are three times that of job searches with a widening divergence.

Given this significant gap, organizations need to recognize that they simply aren't going to be able to hire all the data scientists they need as they continue to implement machine learning into their work. Moreover, this pace of innovation will open doors and ultimately create jobs we can't even begin to imagine today.

That's why companies around the world such as Morningstar, Liberty Mutual and DBS Bank are finding innovative ways to encourage their employees to gain new machine learning skills with a fun, interactive, hands-on approach. It's critical that organizations not only direct their efforts toward training their existing workforce in machine learning skills, but also invest in training programs that develop these important skills in the workforce of tomorrow.

With anything new, often people are of two minds: Either an emerging technology is a panacea and global savior, or it is a destructive force with cataclysmic tendencies. The reality is, more often than not, a nuance somewhere in the middle. These disparate perspectives can be reconciled with information, transparency and trust.

As a first step, leaders in the industry need to help companies and communities learn about machine learning, how it works, where it can be applied and ways to use it responsibly, and understand what it is not.

Second, in order to gain faith in machine learning products, they need to be built by diverse groups of people across gender, race, age, national origin, sexual orientation, disability, culture and education. We will all benefit from individuals who bring varying backgrounds, ideas and points of view to inventing new machine learning products.

Third, machine learning services should be rigorously tested, measuring accuracy against third party benchmarks. Benchmarks should be established by academia, as well as governments, and be applied to any machine learning-based service, creating a rubric for reliable results, as well as contextualizing results for use cases.

Finally, as a society, we need to agree on what parameters should be put in place governing how and when machine learning can be used. With any new technology, there has to be a balance in protecting civil rights while also allowing for continued innovation and practical application of the technology.

Any organization working with machine learning technology should be engaging customers, researchers, academics and others to determine the benefits of its machine learning technology along with the potential risks. And they should be in active conversation with policymakers, supporting legislation, and creating their own guidelines for the responsible use of machine learning technology. Transparency, open dialogue and constant evaluation must always be prioritized to ensure that machine learning is applied appropriately and is continuously enhanced.

Through machine learning we've already accomplished so much, and yet it's still day one (and we haven't even had a cup of coffee yet!). If we're using machine learning to help endangered orangutans, just imagine how it could be used to help save and preserve our oceans and marine life. If we're using this technology to create digital snapshots of the planet's forests in real time, imagine how it could be used to predict and prevent forest fires. If machine learning can be used to help connect small-holding farmers to the people and resources they need to achieve their economic potential, imagine how it could help end world hunger.

To achieve this reality, we as an industry have a lot of work ahead of us. I'm incredibly optimistic that machine learning will help us solve some of the world's toughest challenges and create amazing end-user experiences we've never even dreamed of. Before we know it, machine learning will be as familiar as reaching for our phones.

Swami Sivasubramanian is vice president of Amazon AI, running AI and machine learning services for Amazon Web Services Inc. He wrote this article for SiliconANGLE.


Here is the original post:
Before machine learning can become ubiquitous, here are four things we need to do now - SiliconANGLE News

Artificial Intelligence and Machine Learning, 5G and IoT will be the Most Important Technologies in 2021, According to new IEEE Study – PRNewswire

PISCATAWAY, N.J., Nov. 19, 2020 /PRNewswire/ --IEEE, the world's largest technical professional organization dedicated to advancing technology for humanity, today released the results of a survey of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) in the U.S., U.K., China, India and Brazil regarding the most important technologies for 2021 overall, the impact of the COVID-19 pandemic on the speed of their technology adoption and the industries expected to be most impacted by technology in the year ahead.

2021 Most Important Technologies and Challenges

Which will be the most important technologies in 2021? Among total respondents, nearly one-third (32%) say AI and machine learning, followed by 5G (20%) and IoT (14%).

Manufacturing (19%), healthcare (18%), financial services (15%) and education (13%) are the industries that most believe will be impacted by technology in 2021, according to the CIOs and CTOs surveyed. At the same time, more than half (52%) of CIOs and CTOs see their biggest challenge in 2021 as dealing with aspects of COVID-19 recovery in relation to business operations. These challenges include a permanent hybrid remote and office work structure (22%), office and facilities reopenings and return (17%), and managing permanent remote working (13%). However, 11% said the agility to stop and start IT initiatives as this unpredictable environment continues will be their biggest challenge. Another 11% cited online security threats, including those related to remote workers, as the biggest challenge they see in 2021.

Technology Adoption, Acceleration and Disaster Preparedness due to COVID-19

CIOs and CTOs surveyed have sped up adopting some technologies due to the pandemic:

The adoption of IoT (42%), augmented and virtual reality (35%) and video conferencing (35%) technologies have also been accelerated due to the global pandemic.

Compared to a year ago, CIOs and CTOs overwhelmingly (92%) believe their company is better prepared to respond to a potentially catastrophic interruption such as a data breach or natural disaster. What's more, of those who say they are better prepared, 58% strongly agree that COVID-19 accelerated their preparedness.

When asked which technologies will have the greatest impact on global COVID-19 recovery, one in four (25%) of those surveyed said AI and machine learning.

Cybersecurity

The top two concerns for CIOs and CTOs when it comes to the cybersecurity of their organization are security issues related to the mobile workforce, including employees bringing their own devices to work (37%), and ensuring the Internet of Things (IoT) is secure (35%). This is not surprising, since the number of connected devices such as smartphones, tablets, sensors, robots and drones is increasing dramatically.

Slightly more than one-third (34%) of CIO and CTO respondents said they can track and manage 26-50% of devices connected to their business, while 20% of those surveyed said they could track and manage 51-75% of connected devices.

About the Survey"The IEEE 2020 Global Survey of CIOs and CTOs" surveyed 350 CIOs or CTOs in the U.S., China, U.K., India and Brazil from September 21 - October 9, 2020.

About IEEE

IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics.

SOURCE IEEE

https://www.ieee.org

Original post:
Artificial Intelligence and Machine Learning, 5G and IoT will be the Most Important Technologies in 2021, According to new IEEE Study - PRNewswire

DIY Camera Uses Machine Learning to Audibly Tell You What it Sees – PetaPixel

Adafruit Industries has created a machine learning camera built with the Raspberry Pi that can identify objects extremely quickly and audibly tell you what it sees. The group has listed all the necessary parts you need to build the device at home.

The camera is based on Adafruit's BrainCraft HAT add-on for the Raspberry Pi 4, and uses TensorFlow Lite object recognition software to be able to recognize what it is seeing. According to Adafruit's website, it's compatible with both the 8-megapixel Pi camera and the 12.3-megapixel interchangeable-lens version of the camera module.

While interesting on its own, DIY Photography makes a solid point by explaining a more practical use case for photographers:

You could connect a DSLR or mirrorless camera from its trigger port into the Pi's GPIO pins, or even use a USB connection with something like gPhoto, to have it shoot a photo or start recording video when it detects a specific thing enter the frame.

A camera that is capable of recognizing what it is looking at could be used to only take a photo when a specific object, animal, or even a person comes into the frame. That would mean it could have security system or wildlife monitoring applications. Whenever you might wish your camera knew what it was looking at, this kind of technology would make that a reality.
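
A rough sketch of that trigger idea might look like the following. This is not Adafruit's published code: the model file, label file, target class, confidence threshold and GPIO pin are placeholders, the camera is read through OpenCV for simplicity, and the output tensor layout shown matches common TensorFlow Lite SSD detection models but can differ for other models.

```python
# Sketch: run TensorFlow Lite object detection on camera frames and pulse a
# GPIO pin (wired to an external camera trigger) when the target label appears.
import time
import numpy as np
import cv2                                   # frames via OpenCV for simplicity
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

TRIGGER_PIN, TARGET_LABEL, THRESHOLD = 18, "bird", 0.6   # placeholders
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT)

interpreter = Interpreter("detect.tflite")   # placeholder model path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()       # SSD models: boxes, classes, scores, count
labels = [line.strip() for line in open("labels.txt")]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    resized = cv2.resize(frame, (w, h))[None, ...].astype(np.uint8)
    interpreter.set_tensor(inp["index"], resized)
    interpreter.invoke()
    classes = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    if any(labels[int(c)] == TARGET_LABEL and s > THRESHOLD
           for c, s in zip(classes, scores)):
        GPIO.output(TRIGGER_PIN, GPIO.HIGH)  # fire the external camera
        time.sleep(0.2)
        GPIO.output(TRIGGER_PIN, GPIO.LOW)
```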

You can find all the parts you will need to build your own version of this device on Adafruit's website here. They also have published an easy machine learning guide for the Raspberry Pi as well as a guide on running TensorFlow Lite.

(via DPReview and DIY Photography)

Continue reading here:
DIY Camera Uses Machine Learning to Audibly Tell You What it Sees - PetaPixel

The way we train AI is fundamentally flawed – MIT Technology Review

For example, they trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs were the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test, suggesting that they were equally accurate, their performance varied wildly in the stress test.

The stress test used ImageNet-C, a dataset of images from ImageNet that have been pixelated or had their brightness and contrast altered, and ObjectNet, a dataset of images of everyday objects in unusual poses, such as chairs on their backs, upside-down teapots, and T-shirts hanging from hooks. Some of the 50 models did well with pixelated images, some did well with the unusual poses; some did much better overall than others. But as far as the standard training process was concerned, they were all the same.

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

"We might need to rethink how we evaluate neural networks," says Rohrer. "It pokes some significant holes in the fundamental assumptions we've been making."

D'Amour agrees. "The biggest, immediate takeaway is that we need to be doing a lot more testing," he says. That won't be easy, however. The stress tests were tailored specifically to each task, using data taken from the real world or data that mimicked the real world. This is not always available.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.

One option is to design an additional stage to the training and testing process, in which many models are produced at once instead of just one. These competing models can then be tested again on specific real-world tasks to select the best one for the job.
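
A toy sketch of that "train many, stress-test, then select" stage is shown below. It substitutes small scikit-learn networks and synthetic noise for the deep models and curated stress sets used in the study, so the details are assumptions made purely for illustration:

```python
# Train several models that differ only in random initialization, then pick
# the one that holds up best on a crude "stress set" (noisy test data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
X_stress = X_test + rng.normal(scale=1.5, size=X_test.shape)  # shifted data

candidates = []
for seed in range(5):                    # same data, different random init
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    model.fit(X_train, y_train)
    candidates.append((model.score(X_stress, y_test), seed))

best_score, best_seed = max(candidates)
print(f"seed {best_seed} holds up best under stress: {best_score:.3f}")
```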

That's a lot of work. But for a company like Google, which builds and deploys big models, it could be worth it, says Yannic Kilcher, a machine-learning researcher at ETH Zurich. Google could offer 50 different versions of an NLP model and application developers could pick the one that worked best for them, he says.

D'Amour and his colleagues don't yet have a fix but are exploring ways to improve the training process. "We need to get better at specifying exactly what our requirements are for our models," he says. "Because often what ends up happening is that we discover these requirements only after the model has failed out in the world."

Getting a fix is vital if AI is to have as much impact outside the lab as it is having inside. "When AI underperforms in the real world it makes people less willing to want to use it," says co-author Katherine Heller, who works at Google on AI for healthcare: "We've lost a lot of trust when it comes to the killer applications, that's important trust that we want to regain."

See the original post here:
The way we train AI is fundamentally flawed - MIT Technology Review

Utilizing machine learning to uncover the right content at KMWorld Connect 2020 – KMWorld Magazine

At KMWorld Connect 2020, David Seuss, CEO, Northern Light; Sid Probstein, CTO, Keeeb; and Tom Barfield, chief solution architect, Keeeb, discussed Machine Learning & KM.

KMWorld Connect, November 16-19, and its co-located events cover future-focused strategies, technologies, and tools to help organizations transform for positive outcomes.

Machine learning can assist KM activities in many ways. Seuss discussed using a semantic analysis of keywords in social posts about a topic of interest to yield clear guidance as to which terms have actual business relevance and are therefore worth investing in.

"What are we hearing from our users?" Seuss asked. "The users hate the business research process."

Using AstraZeneca as an example, Seuss started with an analysis of the company's conference presentations. Looking at the topics, diabetes sank lower among AstraZeneca's areas of focus.

When looking at the company's Twitter account, themes included oncology, COVID-19, and environmental issues. Not one reference was made to diabetes, according to Seuss.

"Social media is where the energy of the company is first expressed," Seuss said.

An instant news analysis using text analytics tells us the same story: no mention of diabetes products, clinical trials, marketing, etc.

AI-based automated insight extraction from 250 AstraZeneca oncology conference presentations gives insight into R&D focus.

"Let the machine read the content and tell you what it thinks is important," Seuss said.

You can do that with a semantic graph of all the ideas in the conference presentations. Semantic graphs look for relationships between ideas and measure the number and strength of the relationships. Google search results are a real-world example of this in action.
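
A bare-bones illustration of the idea (not Northern Light's product; the presentations and terms here are invented) is a co-occurrence graph in which ideas appearing in the same presentation are linked and edge weights count how often they appear together:

```python
# Build a tiny semantic co-occurrence graph and rank ideas by how strongly
# connected they are.
from itertools import combinations
import networkx as nx

presentations = [
    {"oncology", "immunotherapy", "clinical trial"},
    {"oncology", "clinical trial", "biomarkers"},
    {"covid-19", "vaccine", "clinical trial"},
]

graph = nx.Graph()
for terms in presentations:
    for a, b in combinations(sorted(terms), 2):
        weight = graph.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        graph.add_edge(a, b, weight=weight)

# Weighted degree as a rough measure of which ideas matter most.
ranking = sorted(graph.degree(weight="weight"), key=lambda kv: kv[1], reverse=True)
print(ranking)
```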

"We are approaching the era when users will no longer search for information; they will expect the machine to analyze and then summarize for them what they need to know," Seuss said. "Machine-based techniques will change everything."

Probstein and Barfield addressed new approaches to integrating knowledge sharing into work. They looked at collaborative information curation, in which end users help identify the best content, allowing KM teams to focus on the most strategic knowledge challenges, as well as the pragmatic application of AI through text analytics to improve both curation and findability and to improve performance.

"The super silo is on the rise," Probstein said. It stores files, logs, and customer/sales data and can be highly variable. He looked at search results for how COVID-19 is having an impact on businesses.

"Not only are there many search engines, each one is different," Probstein said.

Probstein said Keeeb can help with this problem. The solution can search through a variety of data sources to find the right information.

"One search, a few seconds, one pane of glass," Probstein said. "Once you solve the search problem, now you can look through the documents."

Knowledge isn't always a whole document; it can be a few paragraphs or an image, which can then be captured and shared through Keeeb.

AI and machine learning can enable search to be integrated with existing tools or any system. "Companies should give end-users simple approaches to organize with content, augmented with AI, benefitting themselves and others," Barfield said.

Read more here:
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 - KMWorld Magazine