Artificial Intelligence Identifies Previously Unknown Features Associated with Cancer Recurrence – Imaging Technology News

December 27, 2019 Artificial intelligence (AI) technology developed by the RIKEN Center for Advanced Intelligence Project (AIP) in Japan has successfully found features in pathology images from human cancer patients, without annotation, that could be understood by human doctors. Further, the AI identified features relevant to cancer prognosis that had not previously been noted by pathologists, leading to higher accuracy in predicting prostate cancer recurrence than pathologist-based diagnosis. Combining the predictions made by the AI with predictions by human pathologists led to even greater accuracy.

According to Yoichiro Yamamoto, M.D., Ph.D., the first author of the study published in Nature Communications, "This technology could contribute to personalized medicine by making highly accurate prediction of cancer recurrence possible by acquiring new knowledge from images. It could also contribute to understanding how AI can be used safely in medicine by helping to resolve the issue of AI being seen as a 'black box.'"

The research group led by Yamamoto and Go Kimura, in collaboration with a number of university hospitals in Japan, adopted an approach called "unsupervised learning." As long as humans teach the AI, it is not possible to acquire knowledge beyond what is currently known. Rather than being "taught" medical knowledge, the AI was asked to learn using unsupervised deep neural networks, known as autoencoders, without being given any medical knowledge. The researchers developed a method for translating the features found by the AI, which were initially only numbers, into high-resolution images that can be understood by humans.
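The autoencoder idea described above can be sketched compactly: a network is trained, with no labels at all, to compress each image patch into a small code and then reconstruct the patch from that code, so the code is forced to capture the patch's salient features. A minimal single-hidden-layer sketch in NumPy (the data is synthetic and all dimensions are illustrative, not those used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for image patches: 500 samples, 256 "pixels",
# with low-rank structure for the autoencoder to discover
X = rng.normal(size=(500, 16)) @ rng.normal(size=(16, 256))

# One-hidden-layer autoencoder, 256 -> 8 -> 256, with tied weights
W = rng.normal(scale=0.01, size=(256, 8))

def forward(X, W):
    code = np.tanh(X @ W)   # encoder: compress each patch to 8 numbers
    recon = code @ W.T      # decoder: reconstruct the patch from the code
    return code, recon

lr, losses = 1e-5, []
for _ in range(200):
    code, recon = forward(X, W)
    err = recon - X
    losses.append(np.mean(err ** 2))        # reconstruction loss
    d_code = (err @ W) * (1 - code ** 2)    # backprop through the tanh encoder
    grad = X.T @ d_code + err.T @ code      # W appears in both encoder and decoder
    W -= lr * grad / len(X)

# The 8-dimensional codes are the unsupervised "features"; no labels were used.
```

The study's networks were of course far deeper and trained on billions of patches; the point of the sketch is only that reconstruction alone, with no diagnostic annotation, is what drives the feature learning.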

To perform this feat, the group acquired 13,188 whole-mount pathology slide images of the prostate from Nippon Medical School Hospital (NMSH). The amount of data was enormous, equivalent to approximately 86 billion image patches (sub-images divided for deep neural networks), and the computation was performed on AIP's powerful RAIDEN supercomputer.

The AI learned using pathology images without diagnostic annotation from 11 million image patches. Features found by the AI included cancer diagnostic criteria that have been used worldwide, such as the Gleason score, but also features involving the stroma (the connective tissue supporting an organ) in non-cancer areas that experts were not aware of. In order to evaluate these AI-found features, the research group verified the performance of recurrence prediction using the remaining cases from NMSH (internal validation). The group found that the features discovered by the AI were more accurate (AUC=0.820) than predictions based on the human-established cancer criteria developed by pathologists, the Gleason score (AUC=0.744). Furthermore, combining the AI-found features with the human-established criteria predicted recurrence more accurately than either method alone (AUC=0.842). The group confirmed the results using another dataset of 2,276 whole-mount pathology images (10 billion image patches) from St. Marianna University Hospital and Aichi Medical University Hospital (external validation).
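The AUC values above have a concrete reading: AUC is the probability that a randomly chosen recurrence case is scored higher than a randomly chosen non-recurrence case. The toy numbers below (invented for illustration, not the study's data) also show why combining predictors can help: when each predictor misranks a different case, averaging their scores can outrank both.

```python
def auc(scores, labels):
    """Probability that a positive case outscores a negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels  = [1, 1, 1, 0, 0, 0]               # 1 = recurrence, 0 = no recurrence
model_a = [0.9, 0.8, 0.7, 0.95, 0.2, 0.1]  # misranks the first negative case
model_b = [0.9, 0.8, 0.7, 0.2, 0.95, 0.1]  # misranks the second negative case

combined = [(a + b) / 2 for a, b in zip(model_a, model_b)]

# Each model alone gets 6 of 9 pairs right (AUC = 0.667); averaging dilutes
# each model's single bad score, so the combined AUC rises to 1.0.
```

This is only a schematic of the effect; the study combined AI-found features with the Gleason score in a fitted model rather than by naive score averaging.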

"I was very happy," said Yamamoto, "to discover that the AI was able to identify cancer on its own from unannotated pathology images. I was extremely surprised to see that AI found features that can be used to predict recurrence that pathologists had not identified."

He continued, "We have shown that AI can automatically acquire human-understandable knowledge from diagnostic annotation-free histopathology images. This 'newborn' knowledge could be useful for patients by allowing highly-accurate predictions of cancer recurrence. What is very nice is that we found that combining the AI's predictions with those of a pathologist increased the accuracy even further, showing that AI can be used hand-in-hand with doctors to improve medical care. In addition, the AI can be used as a tool to discover characteristics of diseases that have not been noted so far, and since it does not require human knowledge, it could be used in other fields outside medicine."

For more information: www.riken.jp/en/research/labs/aip/


Artificial Intelligence in Transportation Market 2020: New Innovative Solutions to Boost Global Growth with New Technology, Business Strategies,…

Global Artificial Intelligence in Transportation Market Research Report 2020-2029 is a vast research database spread across numerous pages of tables, charts, and figures, providing complete data on the Artificial Intelligence in Transportation market, including key components such as main players, size, SWOT analysis, business situation, and leading patterns in the market. The report contains various projections for revenue, production, CAGR, consumption, cost, and other substantial factors. Further, it identifies opportunities and restraints, and analyzes the technical barriers, other issues, and cost-effectiveness affecting the market during the forecast period from 2020 to 2029. It features historical and projected costs, an overview with growth analysis, and demand and supply data. Market trends by technology, product type, application, and process are also analyzed in the report.

The top players operating in the Artificial Intelligence in Transportation market are ZF Friedrichshafen AG, Robert Bosch GmbH, Continental AG, Valeo SA, Tesla Inc, NVIDIA Corporation, Intel Corporation, Microsoft Corporation, Alphabet Inc, and Qlik Technologies Inc.

To obtain all-inclusive information on the forecast analysis of the global Artificial Intelligence in Transportation market, request a free PDF brochure here: https://marketresearch.biz/report/artificial-intelligence-in-transportation-market/request-sample

Gathering information about the Artificial Intelligence in Transportation industry and its forecast to 2029 is the main objective of this report. Predicting the strong future growth of the market in all its geographical and product segments has been the guiding goal of our market analysis. The research gathers data about customers, marketing strategies, and competitors. The Artificial Intelligence in Transportation manufacturing industry is becoming increasingly dynamic and innovative, with more private players entering the industry.

Identifying The Basic Business Drivers, Challenges, And Tactics Adopted:

Market estimations are constructed for the key market segments between 2020 and 2029. Artificial Intelligence in Transportation report provides an extensive analysis of the current and emerging market trends and dynamics.

An overview of the different applications, business areas, and the latest trends observed in the Artificial Intelligence in Transportation industry has been covered by this study.

Key market players within the market are profiled in Artificial Intelligence in Transportation report and their strategies are analyzed, to provide the competitive outlook of the industry.

Various challenges overlooking the business and the numerous strategies employed by the industry players for successful marketing of the product have also been included.

Market Segmentation Based on offering, machine learning technology, application, process, and region:

Segmentation by Offering: Hardware, Software. Segmentation by Machine Learning Technology: Deep Learning, Computer Vision, Context Awareness, NLP. Segmentation by Application: Semi & Fully Autonomous, HMI, Platooning. Segmentation by Process: Data Mining, Image Recognition, Signal Recognition.

Furthermore, the Artificial Intelligence in Transportation industry report includes regional chapters by product/application, in which each region and its countries are categorized and briefly described, covering North America, Europe, South America, Asia Pacific, and the Middle East and Africa.

Inquire/Speak To Expert for Further Detailed Information About Artificial Intelligence in Transportation Report: https://marketresearch.biz/report/artificial-intelligence-in-transportation-market/#inquiry

Five Important Points the Report Offers:

Benchmarking: It includes functional benchmarking, process benchmarking, and competitive benchmarking

Market assessment: It involves market entry strategy, market feasibility analysis, and market forecasting or sizing

Corporate Intelligence: It contains custom intelligence, competitor intelligence, and market intelligence

Strategy Analysis: It includes analysis of indirect and direct sales channels, helps you to plan the right distribution strategy, and understand your customers

Technological Intelligence: It helps you to investigate future technology roadmaps, choose the right technologies, and determine feasible technology options

The years considered in this research to forecast the global Artificial Intelligence in Transportation market size are as follows:

Base Year: 2019 | Estimated Year: 2020 | Forecast Year: 2020 to 2029

TOC of Artificial Intelligence in Transportation Market Report Includes:

1. Industry Overview of Artificial Intelligence in Transportation

2. Industry Chain Analysis of Artificial Intelligence in Transportation

3. Manufacturing Technology of Artificial Intelligence in Transportation

4. Major Manufacturers Analysis of Artificial Intelligence in Transportation

5. Global Production, Revenue, and Price Analysis of Artificial Intelligence in Transportation by Regions, Manufacturers, Types and Applications

6. Global and Foremost Regions Capacity, Production, Revenue and Growth Rate of Artificial Intelligence in Transportation

7. Consumption Value, Consumption Volumes, Import, Export and Trade Price Study of Artificial Intelligence in Transportation by Regions

8. Gross Margin Examination of Artificial Intelligence in Transportation

9. Marketing Traders or Distributor Examination of Artificial Intelligence in Transportation

10. Global Impacts on Artificial Intelligence in Transportation Industry

11. Development Trend Analysis of Artificial Intelligence in Transportation

12. Contact information of Artificial Intelligence in Transportation

13. New Project Investment Feasibility Analysis of Artificial Intelligence in Transportation

14. Conclusion of the Global Artificial Intelligence in Transportation Industry 2020 Market Research Report


What makes us different from our competitors?

Compared to our competitors, our offerings include, but are not limited to, market research services on the latest industry trends, customized studies on any niche or specific requirement at a reasonable price, and a database larger than our competitors', delivering more applicable outcomes to meet your requirements.

Customization Service of the Report:

Marketresearch.biz provides customization of reports as per your needs. This report can be personalized to meet your requirements. Get in touch with our sales team, who will ensure you get a report that suits your needs.

About Us

MarketResearch.biz is a global market research and consulting service provider specializing in offering a wide range of business solutions to its clients, including market research reports, primary and secondary research, demand forecasting services, focus group analysis, and other services. We understand how important data is in today's competitive environment, and thus we have collaborated with the industry's leading research providers, who work continuously to meet the ever-growing demand for market research reports throughout the year.

Contact Us:

Mr. Benni Johnson

Prudour Pvt. Ltd.

420 Lexington Avenue, Suite 300 New York City, NY 10170, United States

Tel: + 1-347-826-1876

Email ID: [emailprotected]

Website: https://marketresearch.biz/

Thank you for reading this article; we also provide customized chapter-wise and region-wise report editions.

This content has been distributed via WiredRelease press release distribution service. For press release service enquiry, please reach us at [emailprotected].



Close to Home | Artificial Intelligence and us – Mindanao Times

A curious event occurred in 2018, when Facebook's AI bots started communicating among themselves in their own language, which Facebook's programming experts could not understand; the company shut down its bots, and there has been no news of them being reactivated.

This incident at Facebook is not isolated. The AlphaGo incident, in which a professional human player was beaten by a machine, alarmed many concerned observers. AlphaGo is a computer program that plays the Chinese board game Go. In March 2016, AlphaGo beat Go's best player, Lee Sedol, in a game whose possible combinations of moves are said to be as numerous as the stars in the universe.

A workshop called Spirit, Science, and Artificial Intelligence, facilitated by social scientist and activist Nicanor Perlas, discusses why these leaps in technology, and the seeming surrender of humans to them, must be met with our full consciousness and wakefulness. With technology's lure of convenience and perfection, we human beings are slowly giving up our inherent capacities.

While we are quick to believe that technology is neutral, we need to dig deeper to come face to face with its inner logic. I used to say, and these days I often hear many say, that technology is neither good nor bad, and that it is up to us to make it advantageous or otherwise. It certainly looks that way on the surface. But just beneath lies the inner logic of technology: looked at deeply, technology makes us give up our inherent capacities as human beings.

So then, it is best to ask ourselves: if we have been created in the Divine's image and likeness, what could be our inherent capacities? What could have been planted in us if we are to be the true and full human beings that the Divine intends us to be?

Technology tycoon Elon Musk has been very vocal about the threats that the proliferation of Artificial Intelligence (AI) poses to humanity. Recently, Tesla, Musk's company, released all its patents to help humanity survive. In first-world countries, it is said that teachers are being replaced by robots and computers. If you have seen Jimmy Fallon's interview with the robot named Sophia, it looks amusing at a glance, but on reflection one might see the danger of human workers being replaced by technological devices that seem to supersede us in intelligence.

The lure of technology in the name of AI (now gearing up to become Artificial Super Intelligence, or ASI) has three facets: super health, super intelligence, and immortality. These have been created by AI companies to counter human imperfections. People get sick, so they offer super health; people think slowly and have poor memory, so they offer super intelligence. Finally, people die, so they offer immortality. The proponents of these ideas think that consciousness resides in the brain, and so by creating artificial bodies that can accommodate that consciousness, immortality would be at hand. They call this Transhumanism.

This Transhumanist strain of AI is slowly leading the human being toward mass extinction, as it would mean no more reproduction as humans. Are we, as humanity, moving like the entranced children who were led by the Pied Piper of Hamelin to the abyss?

We will be taken in by this technology hype if we do not stop and discern these lures. But if we take time to contemplate these things, it will be revealed that all these lures become effective only once we deviate from the path of nature. Mother Nature has been cradling our humanity.

For our super health, we have the plants and the four elements to take care of us. For our super intelligence, our thoughts and capacity to create have always been inherent in us, provided that we commune with her. It is almost forgotten, but our ancestors showed super intelligence; there are documents that tell of our ancestors being capable of telepathy and teleportation. Yes, truth is stranger than fiction. And finally, for our concern about death: we only need to be assured that we no longer die. A Spiritual Being in the name of Christ long ago defeated death for us. But if we only look at it from a materialist perspective, we cannot have the faculties to understand it. Nonetheless, it still needs to be said.

While AI poses an abominable threat to the existence of humanity, there is no need to fear. The call is to face this task wide awake and conscious. AI has become a dragon to defeat because we have not been living up to who we truly are. Now this dragon challenges us to show our courage and together brave this trial for the future of our humanity. Many, myself included, believe that by going back to nature, the human being will make manifest once more that no one is stronger than us except God, simply because we are the summit of His creations, His own image and likeness.


China should step up regulation of artificial intelligence in finance, think tank says – msnNOW

A Chinese flag flutters in front of the Great Hall of the People in Beijing, China, May 27, 2019. (REUTERS/Jason Lee)

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.

"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate intelligent finance, meaning banking, securities and other financial products that employ technologies such as facial recognition and big-data analysis to improve sales and investment returns, has largely lagged behind the sector's development, a report from the China Finance 40 Forum showed.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance to be written in to the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risks and act quickly when problems arise.

(Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing)


China’s Cryptography Law: Embarking on a New Era of Cryptography Regulation [Brought to you by JunHe] – Legal Business Online

Fang Zhou and Yue (Brett) Zhang of JunHe review the old regulatory system of cryptography law and how the newly introduced PRC Cryptography Law brings a more flexible approach for cipher products to flourish in China

On Oct. 26, 2019, the Standing Committee of the National People's Congress passed the PRC Cryptography Law (the "Cryptography Law"). The Cryptography Law is the first cryptography statute at the national law level, replacing old regulations that had been in use for more than two decades. By introducing major changes to the existing regulatory regime for cipher products, the Cryptography Law marks a new era of cipher regulation in China, particularly in relation to the use of ciphers in commercial environments.

The Cryptography Law introduces a classification of ciphers based on the nature of the data or information they are intended to protect: core and common ciphers, which protect state secrets, and commercial ciphers, which protect information that does not constitute a state secret.

The Cryptography Law does not stipulate a general approval procedure for the R&D, manufacture, sale, use, or import of plain commercial cipher products. Instead, it plans to regulate commercial cipher products by formulating and implementing relevant national standards, which presumably should be technical in nature and may also incorporate overseas or internationally recognized standards with necessary localization. As an exception, the Cryptography Law permits the PRC government to establish import/export approvals for commercial cipher products that concern China's national security, public interests and welfare, or international commitments made by the PRC government.

The Cryptography Law makes it clear that a non-discriminatory principle should be observed in relation to foreign-invested companies and their activities involving commercial ciphers. It further requires that no PRC authority may force any foreign-invested company to disclose or transfer its commercial cipher by administrative measures.

Despite the general non-approval approach, the Cryptography Law provides for certain exceptional cases in which commercial cipher products may be subject to additional certification or assessment requirements.

LOOKING FORWARD

It remains to be seen how the PRC authorities will define these exceptional cases through future implementation of the Cryptography Law, particularly with respect to the scope of key terms such as "national security" or "public welfare." Giving these exceptional cases a broad scope would mean that more commercial cipher products need to obtain certification or pass review or assessment procedures supervised by the authorities. This may, to some extent, undermine the benefit of the non-approval principle set out by the Cryptography Law, because such procedures may be structured as a modified form of governmental approval.


Cryptographic Security Market To 2025 Emerging Niche Segments And Regional Markets – Market Research Sheets

With the technological evolution of computers, the amount of information transferred digitally has increased rapidly. There are many applications, such as data processing systems, electronic mail systems, and banking systems, in which the transferred information must pass through communication channels that can be monitored by an electronic eavesdropper. While the required degree of security may differ by application, information should generally pass directly from the sender to the intended receiver, without intermediate parties being able to decipher the transferred message and without any loss of information.

Furthermore, information stored in a computer's memory must be secured against threats. Cryptographic security is used to transfer messages between remote locations: to send information from one end to the other, a system should include at least one encoding device at one location and one decoding device at a second location. Encoding and decoding technologies are available to protect the authentication and privacy of communication devices.
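The encoder-at-one-end, decoder-at-the-other arrangement described above can be illustrated with a toy stream cipher: both endpoints derive the same keystream from a shared secret key and a per-message nonce; the sender's "encoding device" XORs the keystream into the message and the receiver's "decoding device" XORs it back out. This is purely illustrative; production systems use vetted constructions such as AES-GCM, never hand-rolled XOR schemes.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Both endpoints derive the same pseudorandom bytes from the shared key."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encode(key: bytes, nonce: bytes, message: bytes) -> bytes:
    ks = keystream(key, nonce, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

decode = encode  # XOR is its own inverse: the decoding device mirrors the encoder

key, nonce = b"shared-secret", b"message-001"
ciphertext = encode(key, nonce, b"balance transfer approved")
assert decode(key, nonce, ciphertext) == b"balance transfer approved"
```

An eavesdropper on the channel sees only the ciphertext; without the shared key, the keystream, and hence the message, cannot be recovered from it.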

Technological development and the growing need for remote-access security and wireless communication have driven cryptographic security applications that protect the integrity of data and networks. Ongoing advances in the internet, in technology generally, and in new computers that support remote computation have increased the requirement for network security for secure data transmission.

Planning To Lay Down Future Strategy? Request Sample https://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=35813

Some of the main challenges obstructing the growth of the cryptographic security market are low customer awareness of cryptographic security and a lack of expertise and skilled manpower. The market is nonetheless witnessing stable growth, driven by rising security threats and the inability of many organizations to recognize such attacks. The performance of a cryptographic system depends on the complexity of its encoding and decoding devices, and a core problem is the privacy of communication in a system where an eavesdropper can listen to every message transmitted on the communication channel.

The cryptographic security market can be segmented by hardware, services, organization size, application, and geographical region. By hardware, the market can be segmented into blades, servers, random number generators, and research & development platforms. Hardware is the core equipment of cryptographic security, enabling effective content transfer over a secure system, and most vendors are updating their hardware to stay competitive. By services, the market can be divided into consulting services, support and maintenance services, and integration and deployment services.

Request To Access Market Data Cryptographic Security Market

By organization size, the cryptographic security market can be segmented into large enterprises and small & medium-sized enterprises. By application, it can be segmented into IT & telecom, network security, government & defense, database security, consumer goods & retail, healthcare & life sciences, banking, financial services & insurance, and others. The market can also be studied by region: North America, Europe, Asia Pacific, Middle East & Africa, and South America. Increasing demand, the growth of technology, and advances in security systems are expected to support the growth of the cryptographic security market during the forecast period. The market has seen strong growth in the defense and banking industries in recent years.

Many players with broad solution portfolios are involved in the cryptographic security market. Some of the key players are Crypta Labs, IBM, HP, ID Quantique, MagiQ Technologies, NEC Corporation, Infineon, Mitsubishi, NuCrypt, qutools, Qasky, PQ Solutions, Qubitekk, QuintessenceLabs, and Toshiba, among others. Most of these providers are headquartered in North America, and most are upgrading their research and development activities to introduce innovative security methods in this field.

This post was originally published on Market Research Sheets


Give the gift of privacy settings this year – Inverse

Ever since your parents got smartphones, it was game over. Now when you go home for the holiday season, the roles have reversed, with parents constantly squinting at their bright phone screens and meticulously typing with one finger.

Older family members have become just as addicted to their smartphones as the younger generation. However, some of them may not be as aware of the privacy risks associated with these devices.

If you're not careful about privacy settings, smartphones can track your location, and phone applications can have access to your camera and microphone at all times, constantly collecting your data to target you with ads. So it was no coincidence that three different brands advertised snakeskin boots on your timeline after you wouldn't stop texting your friends to ask if you could pull them off.

Most of these features can be disabled through the phone's privacy settings, where you can also check what apps are allowed to access even when they are not in use.

While you are gathered with your family during this holiday break, give them the gift of privacy by following a few simple steps, as illustrated by Twitter user Matthew Green, who also runs a blog on cryptography.

Don't find my iPhone

Without GPS on our smartphones, we would probably all be going around in circles. However, as convenient as it is, GPS tracking is also one of the more invasive features of our devices. Not only does your phone track where you are at all times, but apps also have access to that information and often share it with third-party companies.

In order to disappear off the corporate grid, go to Settings > Privacy > Location Services. There you will find a list of apps that have requested access to your location. For ones that don't necessarily require that information, like Facebook, switch the setting to "Never"; for the ones that do, including delivery or taxi services, switch it to "While Using the App."

Privacy, please

On that Privacy tab, there are also settings related to your camera and microphone. When you take a picture, your location is embedded in the photo's metadata, which means that someone could find out where you are through that photo. You can disable apps' access to the camera through the Camera settings on the Privacy tab.

You'll find that plenty of apps have also requested access to your microphone, so disable those through the Microphone settings as well, because YouTube does not need to listen to you at any point.

The same goes for Bluetooth settings, which apps may use to track your phone as well.

Stop the ads

If you scroll all the way down to the bottom of the Privacy settings, the very last tab is Advertising. One thing most people are not aware of is that the iPhone gives you a discreet option to limit advertising targeted at you.

If you switch on "Limit Ad Tracking," your phone will do just that. That doesn't mean you won't receive ads on your phone anymore, but it will limit the number of ads targeted directly at you based on your collected information.


Blockchain This Week: Farmers Kids Win Blockchain Hackathon, Blockchain Fights Deforestation & More – Inc42 Media

Of the $8.5 Mn in global blockchain investments, India's share has been less than 0.2%

Zubi-IBCOL plans to help students apply cryptography and blockchain solutions to real-world problems across India

Nigeria's MIPAD to use blockchain technology, artificial intelligence and data science to identify and geo-tag planted trees

In recent times, blockchain technology has been gaining a lot of momentum across various industry verticals. Most certainly, the trend is shifting from the pilot stage to actual use cases. Nearly 50% of blockchain projects are driven by startups. For the ecosystem to thrive in the long run, it requires the support of all the stakeholders involved, including government, investors, innovators and entrepreneurs.

According to the NASSCOM-Avasant India Blockchain Report 2019, investments through venture capital firms (VCs) and initial coin offerings (ICOs) into India's blockchain ecosystem amount to less than 0.2% of the $8.5 Mn invested globally. The drop in investment coincides with the uncertain policy and regulatory norms in the country.

This cautious regulatory environment in India is hindering investment opportunities for both domestic and global investors in Indian startups. Surprisingly, several India-based investors are raising funds through VCs and ICOs in jurisdictions with more open regulatory environments, such as Malta, Singapore, the UK and Switzerland.

Moreover, the uncertainty and risk around blockchain in India have made it difficult for startups to get on the radar of global investors specifically looking to invest in blockchain startups developing innovative products or solutions.

Graph Of The Week: Size of the blockchain technology market (2018-2023)

Global blockchain technology revenue is expected to see massive growth in the coming years. Currently at $2.2 Bn, the market is expected to reach $23.3 Bn by 2023.

(Source: Statista 2019)

Here are the biggest blockchain-related headlines from across the world.

The blockchain community platform Zubi has partnered with the International Blockchain Olympiad (IBCOL) to enable its users to apply cryptography and blockchain solutions to real-world problems across India. Through this collaboration, Zubi's community of students and blockchain enthusiasts will leverage IBCOL's resources to work toward a decentralised future.

With this, Zubi and IBCOL have started a National Chapter in India (IN-BCOL), which will be responsible for selecting the top projects for the final round of IBCOL, to be held in Hong Kong. Both parties believe in educating and encouraging people to build a sustainable blockchain talent ecosystem. Most importantly, the duo aims to promote awareness of blockchain technology and its applications and to enhance employability by equipping participants with the necessary skills.

Team BlockchainMegaminds won the INR 5 lakh grand prize, along with an all-expenses-paid trip to Seattle, at India's largest artificial intelligence (AI), machine learning (ML) and blockchain hackathon, organised by Icertis in Pune from December 18 to December 22.

Interestingly, the winning team members all hail from agricultural backgrounds. The team used ML models to build an app that analyses crop distress and weather patterns. Additionally, they harnessed blockchain-enabled smart contracts for instant, automated claim settlements for those adversely affected by crop failures and natural disasters.

Team Boopalan was the runner-up with an INR 3 lakh cash prize, followed by Team Heuristic in third place with INR 2 lakh.

Nigeria's Most Influential People of African Descent (MIPAD), through its social-impact initiative My Roots in Africa Project, will plant more than 200 Mn trees by 2024, before the end of the UN International Decade for People of African Descent. The organisation has partnered with Decagon Institute to deploy artificial intelligence and data science to identify and geo-tag the planted trees using blockchain technology.

Through this initiative, people can request to have a tree named, planted or gifted in honour of themselves or anyone they love. The platform is said to bring transparency, letting users who have planted trees know each tree's exact location and view it via satellite imagery on Google Maps. This, in a way, helps prevent allocation of the same tree to more than one person and helps counter deforestation.

The Blockchain World Forum (BWF) will be held in Dubai from February 27-28, 2020. The event gives industry stakeholders the opportunity to explore the opportunities and challenges of the blockchain ecosystem, and will bring together leading technologists, entrepreneurs, regulators, investors and financial institutions in the emerging blockchain industry.

In addition to this, BWF will present the Blockchain Innovation Awards for the highest achievements by global blockchain companies and entrepreneurs.

Hedera, a public distributed ledger designed to improve transaction throughput for digital currencies, recently announced the launch of Hedera Boost, which helps startups plan, build and launch blockchain-based applications on Hedera.

The programme offers technical guidance, ecosystem tools, marketing and business-development support, and subsidised transaction fees. It lets developers design and test multiple iterations on Hedera Boost. Once a startup is ready to launch its blockchain application, Hedera claims it will fund the project with $1,000 worth of Hedera tokens to cover initial transaction costs.


Can machine learning take over the role of investors? – TechHQ

As we dive deeper into the Fourth Industrial Revolution, there is no disputing that technology serves as a catalyst for growth and innovation for many businesses across a range of functions and industries.

But one technology steadily gaining prominence across organizations is machine learning (ML).

In the simplest terms, ML is the science of getting computers to learn and act as humans do without being explicitly programmed. A form of artificial intelligence (AI), it entails feeding machines data, enabling a program to learn autonomously and enhance the accuracy of its analysis.
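The learn-from-examples principle behind that definition fits in a few lines. The sketch below is purely illustrative (not from the article): instead of hard-coding the rule y = 2x, the program infers the weight from example data by gradient descent.

```python
# Minimal illustration of "learning from data": the rule y = 2x is never
# written down; the weight w is inferred from (x, y) examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs drawn from y = 2x

w = 0.0    # initial guess
lr = 0.05  # learning rate
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill

print(round(w, 3))  # converges to 2.0, the rule implicit in the data
```

Each pass over the data improves the estimate, which is the "learn autonomously and enhance its accuracy" behaviour described above, reduced to its simplest form.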

The proliferation of technology means AI is now commonplace in our daily lives, with its presence in a panoply of things, such as driverless vehicles, facial recognition devices, and in the customer service industry.

Currently, asset managers are exploring the potential that AI/ML systems can bring to the finance industry; close to 60 percent of managers predict that ML will have a medium-to-large impact across businesses.

ML's ability to analyze large data sets and continuously improve through trial and error translates to faster, better-performing data analysis for financial firms.

For instance, according to the Harvard Business Review, ML can spot potentially outperforming equities by identifying new patterns in existing data sets, and can examine the collected responses of S&P 500 CEOs in quarterly earnings calls over the past 20 years.

Following this, ML can formulate a review of good and bad stocks, providing organizations with valuable insights to drive important business decisions. This data also paves the way for the system to assess the trustworthiness of forecasts from specific company leaders and to compare the performance of competitors in the industry.

Besides that, ML also has the capacity to analyze various forms of data, including sound and images. In the past, such formats were challenging for computers to analyze, but today's ML algorithms can process images faster and better than humans.

For example, analysts use GPS locations from mobile devices to map foot traffic at retail hubs, or use point-of-sale data to trace revenues during major holiday seasons. Data analysts can leverage these technological advancements to identify trends and new areas for investment.

It is evident that ML is full of potential, but it still has some big shoes to fill if it is to replace the role of an investor.

Nishant Kumar aptly explained this in Bloomberg: "Financial data is very noisy, markets are not stationary and powerful tools require deep understanding and talent that's hard to get. One quantitative analyst, or quant, estimates the failure rate in live tests at about 90 percent. Man AHL, a quant unit of Man Group, needed three years of work to gain enough confidence in a machine-learning strategy to devote client money to it. It later extended its use to four of its main money pools."

In other words, human talent and supervision are still essential to developing the right algorithms and exercising sound investment judgment. After all, the purpose of a machine is to automate repetitive tasks; in this context, ML may seek out correlations in data without understanding their underlying rationale.

One ML expert said his team spends days evaluating whether patterns found by ML are sensible, predictive, consistent, and additive. Even if a pattern meets all four criteria, it may not bear much significance in supporting profitable investment decisions.

The bottom line is that ML can streamline data analysis, but it cannot replace human judgment. Active equity managers should invest in ML systems to remain competitive in this "innovate or die" era, and financial firms that successfully recruit professionals with the right data skills and sharp investment judgment stand to be at the forefront of the digital economy.


Are We Overly Infatuated With Deep Learning? – Forbes


One factor often credited for the latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, together with the large volumes of big data and computing power that make deep learning a practical reality. Deep learning has been extremely popular and has shown real ability to solve many machine learning problems, but it is just one of many practical approaches to machine learning (ML). Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some downsides of the approach. So is people's enthusiasm for AI really enthusiasm for deep learning, and can deep learning deliver on its promises?

The Origins of Deep Learning

AI researchers have struggled to understand how the brain learns since the very beginning of the field of artificial intelligence. Since the brain is primarily a collection of interconnected neurons, it comes as no surprise that AI researchers sought to recreate its structure through artificial neurons connected in artificial neural networks. Back in 1943, Walter Pitts and Warren McCulloch built the first thresholded logic unit, an attempt to mimic the way biological neurons work. The Pitts and McCulloch model was just a proof of concept, but Frank Rosenblatt picked up the idea in 1957 with the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: recognizing written numbers and letters, and even distinguishing male from female faces. That was over 60 years ago!

Rosenblatt was so enthusiastic about the Perceptron's promise that in 1959 he remarked that the Perceptron was "the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the Perceptron was to small changes in images, and how easily it could be fooled; maybe the Perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert took the whole idea apart in their book Perceptrons, claiming that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably non-linear functions. It was easy to train a neural network like a perceptron to put data into classifications, such as male/female or types of numbers: for these simple networks, you can graph a bunch of data, draw a line, and say things on one side of the line are in one category and things on the other side are in another, thereby classifying them. But there is a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These are nonlinear functions, which Minsky and Papert proved perceptrons incapable of solving.
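The linear-separability limit Minsky and Papert identified is easy to demonstrate. The sketch below is hypothetical code, not from the article: a Rosenblatt-style perceptron masters AND, where one line separates the classes, but can never get XOR right, because no separating line exists.

```python
def train_perceptron(samples, epochs=20):
    # Rosenblatt's rule: nudge the weights toward misclassified examples.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable

w, b = train_perceptron(AND)
print([predict(w, b, x) for x, _ in AND])  # [0, 0, 0, 1]: AND learned
w, b = train_perceptron(XOR)
print([predict(w, b, x) for x, _ in XOR])  # can never equal [0, 1, 1, 0]
```

However long you train, the XOR predictions cannot all be correct, since any weight vector defines a single straight decision boundary.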

During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period IBM's purpose-built Deep Blue computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI, or GOFAI) rather than new-fangled deep learning approaches. Yet even this approach didn't go far, as some said the system wasn't really intelligent at all.

Yet the neural network story doesn't end there. In 1986, AI researcher Geoffrey Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors," detailing how many hidden layers of neurons can get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and as a group they can learn nonlinear functions (a result related to the universal approximation theorem). The approach works by backpropagating errors from higher layers of the network to lower ones ("backprop"), expediting training. If you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
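A minimal sketch of the idea, assuming NumPy (the layer width, learning rate, and iteration count are arbitrary illustrative choices, not from the paper): one hidden layer trained by backpropagating the output error is enough to learn XOR, the canonical nonlinear function a single perceptron cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error down to the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print((out > 0.5).astype(int).ravel().tolist())
```

With enough iterations the predictions settle on [0, 1, 1, 0]: the hidden layer carves the plane with several lines at once, which is exactly what the single-layer perceptron could not do.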

Yet, as things often go in AI, progress stalled when these early neural networks couldn't scale. Surprisingly little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea is to train a simple two-layer network's parameters in an unsupervised way, then stack new layers on top of it, training only each new layer's parameters in turn. Repeat for dozens, hundreds, even thousands of layers, and eventually you get a deep network with many layers that can learn and represent something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.

In 2010, Stanford researcher Fei-Fei Li released ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as "animal" or "vehicle" down to very granular levels, such as "husky" or "trimaran." ImageNet was paired with an annual competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), to see which computer vision system had the lowest classification and recognition error rates. In 2012, Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the error rate of previous winning entries. What made their approach win was moving from ordinary computers with CPUs to specialized graphics processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout, which reduces overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and the rectified linear unit (ReLU) activation to speed training. After the success of their entry, it seems everyone took notice, and deep learning was off to the races.
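Both tricks are simple enough to sketch in a few lines of NumPy (an illustrative reimplementation, not AlexNet's actual code):

```python
import numpy as np

def relu(z):
    # Rectified linear unit: cheap to compute, and its gradient (0 or 1)
    # does not vanish for positive inputs, which speeds up training.
    return np.maximum(0, z)

def dropout(activations, rate, rng):
    # Randomly silence a fraction of units during training so the network
    # cannot over-rely on any single feature, reducing overfitting; the
    # survivors are rescaled to keep the expected activation unchanged.
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z).tolist())  # [0.0, 0.0, 0.0, 1.5, 3.0]
rng = np.random.default_rng(0)
print(dropout(relu(z), rate=0.5, rng=rng).tolist())
```

At inference time dropout is switched off; only training uses the random mask.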

Deep Learnings Shortcomings

The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks: the more layers, the greater the learning power, but to train those layers you need data that is already well labeled. And since deep neural networks are primarily a mass of calculations that must all be done at the same time, you need a lot of raw numerical computing power. Imagine tuning a million knobs simultaneously to find the optimal combination as millions of pieces of data are fed into the system. This is why neural networks were not practical in the 1950s but are today: we finally have lots of data and lots of computing power to handle it.

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and this automatically tuned deep neural network approach work well. However, these advantages come with a number of disadvantages.

The most notable of these disadvantages is that since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability remains a significant drawback for many applications.

The second disadvantage is that deep learning networks are really great at classifying and clustering information, but not as good at other decision-making or learning scenarios. Not every learning situation involves putting something in a category or grouping information into a cluster; sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.

As mentioned earlier, deep learning is also very data- and resource-hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned; for deep learning networks, there can be hundreds of millions. Training requires a significant amount of data to adjust these parameters: a speech recognition neural net, for example, often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set can hinder development of a deep neural net for a problem domain. And even if you have the data, generating the model takes a significant amount of time and processing power.
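The parameter counts follow directly from the layer dimensions: a fully connected layer with n_in inputs and n_out outputs has n_in × n_out weights plus n_out biases. The layer sizes below are invented purely for illustration:

```python
def count_parameters(layer_sizes):
    # Sum weights and biases over consecutive fully connected layers.
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

# A made-up network: 4000 inputs, three hidden layers of 2048, 30 outputs.
print(count_parameters([4000, 2048, 2048, 2048, 30]))  # 16648222
```

Even this modest, hypothetical network has roughly 16.6 million tunable parameters, which is why the data and compute appetite grows so quickly with depth.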

Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, it will only recognize those cats; it can't generalize to other animals or be used to identify non-cats. While this is not a problem of deep learning alone, it can be particularly troublesome when combined with the overfitting problem mentioned above. Deep learning networks can be so tightly fitted to the training data that even small perturbations in the images can lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as guns, or polar bears as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.

Machine Learning is not (just) Deep Learning

Enterprises looking to use cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but a collection of different approaches of various types applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path well-suited to particular situations, while others are very complex, using lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone all of AI. Even Geoffrey Hinton, the "Einstein" of deep learning, is starting to rethink core elements of the approach and its limitations.

The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage each approach in practice. Since enterprise AI, especially these more advanced cognitive approaches, is still gaining adoption, best practices for employing cognitive technologies successfully are still maturing.
