Five reasons why your business should adopt open source software – Insider.co.uk

Open source software has changed the computing landscape forever. In just over 25 years, with little fanfare and even less promotion, it's been installed on more devices than its proprietary cousins.

It's the backbone of the internet and runs enterprise mission-critical services for most of the world's largest organisations. It's generally seen as more secure, more agile, faster to drive value from, of higher quality and considerably less expensive to deploy, scale and maintain than its competitors, the standard proprietary software companies.

Open source software is developed by some of the smartest and highest paid software engineers globally and used by the most ambitious and technologically advanced corporations in the world.

If you still don't believe open source is the future, here are five solid business reasons why your organisation should consider it.

Quality and security

All software has bugs, some functional (the software doesn't do what it's supposed to) and some security-based (systems are hacked and information stolen). Security through secrecy has been the tradition of proprietary software, ensuring customers can't access the source code. For mission-critical applications like aircraft control systems, there may only be a few hundred people in the world who understand how the software is built and can spot flaws.

However, secrecy hasn't stopped corporate hijacking, zero-day vulnerabilities, massive data thefts and blackmail by encryption. By making code visible to everyone, open source software like Linux, Android, WordPress and our own SuiteCRM is viewed daily by hundreds of thousands of software engineers. Flaws are spotted and fixed quickly while improvements, extensions and additional features are rapidly added.

Cost

All software is an investment with associated costs for implementation, training and on-going support. Licence fees for proprietary software are a substantial upfront and on-going cost, with a host of additional restrictions and associated fees. Price is often a barrier to scaling it further. In contrast, open source has no licence fees, no restrictions and can mean savings of between tens of thousands and several million pounds for large businesses.

Stability and control

The history of computing is peppered with hostile acquisitions, motivated by a desire to shut down competitors and force customer migration. Open source is the disruptor which can't be acquired or shut down. It's in the public domain and will continue to evolve and improve while there's a community of developers working on it. You can't be forced to upgrade either. If you're happy with the software you're using, nobody can make you change.

Support

There's a substantive difference between support from open source vendors and proprietary ones. For the former, it's an important income stream. In order to maintain customer loyalty, support services need to be of the highest calibre and highly responsive to customer needs. For the latter, support is often an afterthought as the customer is already locked in.

Freedom of choice

Open source ultimately provides greater freedom. Companies can download it and host the software on their own servers, or keep it in a public, private, or hybrid cloud. It can be accessed as software-as-a-service (SaaS), kept in its current format or tailored by companies themselves, the vendor or third parties. It's the ultimate freedom.

Dale Murray is CEO at Stirling-based open source software developer SalesAgility

Link:
Five reasons why your business should adopt open source software - Insider.co.uk

What Are The Biggest Open Source Software Companies In The World? – Analytics India Magazine

A large number of multi-billion-dollar open source companies are functioning in the space of analytics and real-time business intelligence.

At first glance, the idea of creating a business model around open source may seem counterintuitive. Yet more and more startups are moving towards the open-source business model due to the freedom and collaborative effort it provides. Plus, startups can derive much more value from providing extra services around the software product.

In this article, we take a look at the most prominent companies which focused on open source as the basis of their growth strategy and became unicorns. The trend is clear: a large number of open source unicorns are functioning in the space of analytics and real-time business intelligence.

Red Hat Valuation: $30 Billion

Red Hat is the biggest company which deals in open source software for businesses. The company was founded in 1993 and is based in Raleigh, North Carolina in the US. Red Hat is widely known for its enterprise operating system Red Hat Enterprise Linux. The company works on a business model based on open-source software, development within a community, professional quality assurance, and subscription-based customer support. Red Hat makes money on subscriptions for customer services, training, and integration services that help enterprises in utilising their open-source software products.

Red Hat makes, maintains, and contributes to multiple free software projects, which shows its open-source spirit. It has bought many companies with proprietary software codebases and released the software under open source licenses. As of March 2016, Red Hat is the second-largest corporate contributor to the Linux kernel version 4.14 after Intel. At the end of 2018, IBM announced its intent to acquire the company for $34 billion, IBM's largest acquisition to date.

MuleSoft Valuation: $6.5 Billion

MuleSoft is an open-source company based in San Francisco which provides an integration platform to help businesses connect data, applications and devices across on-premises and cloud computing environments. Its Anypoint Platform integration products were built to integrate software as a service (SaaS), on-premises software, legacy systems, and more.

MuleSoft's Anypoint Platform includes multiple components like Anypoint Design Center for API developers to design and build APIs; Anypoint Exchange, a library for API providers to share APIs, templates, and assets; and finally Anypoint Management Center, a centralised web interface to analyse, manage, and monitor APIs and integrations. On May 2, 2018, Salesforce acquired MuleSoft for $6.5 billion in a cash and stock deal.

Databricks Valuation: $6 Billion

Databricks provides a unified data analytics platform, powered by Apache Spark, to unify data science, engineering and business. It is a single cloud platform for large-scale data engineering and collaborative data science workloads. Databricks supports Python, Scala, R, Java and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch and scikit-learn.

Elastic Valuation: $5 Billion

Elastic NV is a search company which makes self-managed and SaaS products for use cases including search, logging, security, and analytics. Elastic NV manages the free, open source Elastic Stack built around Elasticsearch, a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. There are also paid offerings: Elastic Cloud (a family of SaaS solutions including the Elasticsearch Service) and Elastic Cloud Enterprise (ECE).

Elastic's open-source search technology is used by eBay, Wikipedia, Yelp, Uber, Lyft, Tinder, and Netflix. Elastic is also implemented in use cases such as application search, site search, enterprise search, logging, infrastructure monitoring, application performance management (APM), security analytics (also used to augment SIEM applications), and business analytics. The Elasticsearch meetup community totals more than 100,000 members.

Elasticsearch is built alongside a data collection and log-parsing engine known as Logstash, an analytics and visualisation platform known as Kibana, and Beats, a collection of lightweight data shippers; together they are used as an integrated solution known as the Elastic Stack.
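To make the HTTP interface and schema-free JSON documents described above concrete, here is a minimal sketch using Python's requests library against a hypothetical local Elasticsearch node; the index name and document contents are invented for illustration and nothing here depends on Elastic's paid offerings.

```python
import requests

ES = "http://localhost:9200"  # assumed local Elasticsearch node

# Index a schema-free JSON document; no mapping needs to be defined up front.
doc = {"title": "Open source search", "tags": ["elasticsearch", "lucene"]}
print(requests.put(f"{ES}/articles/_doc/1", json=doc).json())

# Full-text search over the same index via the same HTTP API.
query = {"query": {"match": {"title": "search"}}}
hits = requests.get(f"{ES}/articles/_search", json=query).json()
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```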

Confluent Valuation: $2.5 Billion

Confluent is an American big data company focused on open-source Apache Kafka, a real-time messaging technology. The company provides stream analytics, which gives users immediate access to significant business intelligence insights through real-time data analytics. Kafka began at LinkedIn in 2010 to handle all the data flowing through a company and to do it in near real-time. Its streaming data technology processes massive amounts of data in real-time, which is valuable in the data-intensive environments of many companies.

The founders open-sourced the technology in 2011. Today, Kafka is mostly used as a central repository of streams, where logs are stored in Kafka for an intermediate period in a data cluster for further processing and analysis before the data is routed elsewhere. While the base open-source component remains available for free download, it doesn't include the additional tooling the company has built to make it easier for enterprises to use Kafka. Recent additions include a managed cloud version of the product and a marketplace, Confluent Hub, for sharing extensions to the platform.
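As a rough sketch of what "a central repository of streams" looks like to an application, here is a minimal producer and consumer using the open-source kafka-python client; the broker address and topic name are assumptions for illustration and have nothing to do with Confluent's commercial tooling.

```python
from kafka import KafkaProducer, KafkaConsumer
import json

BROKER = "localhost:9092"   # assumed local Kafka broker
TOPIC = "page-views"        # hypothetical topic name

# Producer: applications append events to a topic, which acts as a durable log.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"user": "u123", "page": "/pricing"})
producer.flush()

# Consumer: downstream systems read the same log independently, at their own pace.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.offset, message.value)
    break  # stop after one message in this sketch
```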

HashiCorp Valuation: $2 Billion

Founded in 2012, HashiCorp is a software company based in San Francisco, California with a freemium open source business model. HashiCorp provides solutions which help developers, operators and security personnel provision, secure, run and connect cloud-computing infrastructure.

HashiCorp provides a suite of open-source tools which support the development and deployment of large-scale service-based software installations. Every tool targets particular stages in the life cycle of a software product and seeks to automate them. HashiCorp tools have a plugin-oriented architecture to provide integration with third-party technologies and services. Extra proprietary features for a few of those tools are offered commercially and are targeted at enterprise customers.


Read more from the original source:
What Are The Biggest Open Source Software Companies In The World? - Analytics India Magazine

Remember that Sonos speaker you bought a few years back that works perfectly? It’s about to be screwed for… reasons – The Register

Updated Sonos is doubling down on its previously disclosed inclination to drop support for older products that aren't profitable to support.

The Internet-of-Things speaker biz said on Tuesday that it will stop providing software updates for some legacy gear in May – some of which is barely five years old. The cessation of service doesn't have any immediate consequences but it dooms older devices to stasis, insecurity, and potential incompatibility as software from Sonos or its partners changes.

There is one caveat: customers with a mix of legacy and modern Sonos gear won't be able to run both together once a future update moves modern kit to a new version of the Sonos software. So legacy gear will have to be quarantined on its own network, a capability Sonos intends to facilitate shortly.

Affected products include its original Zone Players (released in 2006), Connect, and Connect:Amp (sold between 2011 and 2015), its first-generation Play:5 (released in 2009), C200 (released 2009), and Bridge (released 2007).

"Today the Sonos experience relies on an interconnected ecosystem, giving you access to more than 100 streaming services, voice assistants, and control options like Apple AirPlay 2," the gizmo maker said in a blog post.

"Without new software updates, access to services and overall functionality of your sound system will eventually be disrupted, particularly as partners evolve their technology."

The phrase "will eventually be disrupted" offers no hint of who might be responsible for said disruption. But the company's recent financial filings explain that Sonos itself has planned for the obsolescence of its products and the discontent of customers.

"We expect that in the near term, this backward compatibility will no longer be practical or cost-effective, and we may decrease or discontinue service for our older products," the manufacturer's Q4 2019 10-K financial filing explains. "If we no longer provide extensive backward capability for our products, we may damage our relationship with our existing customers, as well as our reputation, brand loyalty and ability to attract new customers."

This is the same tech outfit that celebrates its environmental and social responsibilities by encouraging customers to flip a kill switch on older products so they cannot be resold, in order to trade in their bricked kit for a 30 per cent discount on new Sonos gear.

Planned obsolescence is common among software-centric companies like Apple, Google, and Microsoft, which only support products for a set period of time. But it hasn't been the norm for makers of home appliances and consumer electronics, where buyers expect products to last more than a few years, or even decades.

With more and more companies embracing software-oriented business models, product expiration dates have spread to other market segments. But consumer expectations, as Sonos anticipated, haven't followed. That's evident in the reactions of some Sonos customers on the company's discussion forum.

"What kind of company just phases out your equipment regardless of how much money you spent on it?" wrote one unidentified keyboard warrior.

"You guys seriously SUCK. All you have done since I invested in your products is destroy them and remove functionality. You offer a pathetic 30 per cent buyback on only some products, when you should be offering 100 per cent buyback on everything. YOU BREAK IT, YOU BUY IT. Im done with you crooks, I hope you get hit with a class action lawsuit and go bankrupt."

That said, it's hard to imagine a better advertisement for open source software.

On Thursday, Sonos CEO Patrick Spence published an open letter promising that legacy Sonos products will continue to get bug fixes and security patches for as long as possible, though not new features. Also, he confirmed that the company is working on a way to split your system so that modern products will work with each other and, separately, legacy products will work with each other.


Follow this link:
Remember that Sonos speaker you bought a few years back that works perfectly? It's about to be screwed for... reasons - The Register

Intel joins CHIPS Alliance to promote Advanced Interface Bus (AIB) as an open standard – Design and Reuse

Open development for SOCs gets major boost with new collaboration

SAN FRANCISCO, Jan. 22, 2020 – CHIPS Alliance, the leading consortium advancing common and open hardware for interfaces, processors and systems, today announced industry-leading chipmaker Intel as its newest member. Intel is contributing the Advanced Interface Bus (AIB) to CHIPS Alliance to foster broad adoption.

CHIPS Alliance is hosted by the Linux Foundation to foster a collaborative environment to accelerate the creation and deployment of open SoCs, peripherals and software tools for use in mobile, computing, consumer electronics and Internet of Things (IoT) applications. The CHIPS Alliance project develops high-quality open source Register Transfer Level (RTL) code and software development tools relevant to the design of open source CPUs, SoCs, and complex peripherals for Field Programmable Gate Arrays (FPGAs) and custom silicon.

Intel is joining CHIPS Alliance to share the Advanced Interface Bus (AIB) as an open-source, royalty-free PHY-level standard for connecting multiple semiconductor die within the same package. This effort is intended to encourage an industry environment in which silicon IP can be developed using any semiconductor process as a chiplet, and easily integrated with other chiplets into a single device to deliver new levels of functionality and optimization. Broader adoption and support for AIB-enabled chiplets will help device developers grow beyond the limits of traditional monolithic semiconductor manufacturing and reduce the cost of development. Working together, Intel and CHIPS Alliance will encourage the growth of an industry ecosystem which engenders more device innovation via heterogeneous integration.

The AIB specifications and collateral will be further developed in the Interconnects workgroup. The group will begin work imminently to make new contributions to foster increased innovation and adoption. All AIB technical details will be placed in the CHIPS Alliance GitHub. In addition, Intel will have a seat on the governing board of CHIPS Alliance. Go to http://www.chipsalliance.org to learn more about the organization or to join the workgroup mailing list.

"We couldn't be more happy to welcome Intel to CHIPS Alliance," said Dr. Zvonimir Bandi, chairman of CHIPS Alliance and senior director of next-generation platforms architecture at Western Digital. "Intel's selection of CHIPS Alliance for the AIB specifications affirms the leading role that the organization plays for open source hardware and software development tools. We look forward to faster adoption of AIB as an open source chiplet interface."

About the CHIPS Alliance

The CHIPS Alliance is an organization which develops and hosts high-quality, open source hardware code (IP cores), interconnect IP (physical and logical protocols), and open source software development tools for design, verification, and more. The main aim is to provide a barrier-free collaborative environment, to lower the cost of developing IP and tools for hardware development. The CHIPS Alliance is hosted by the Linux Foundation. For more information, visit chipsalliance.org.

About the Linux Foundation

The Linux Foundation was founded in 2000 and has since become the world's leading home for collaboration on open source software, open standards, open data, and open hardware. Today, the Foundation is supported by more than 1,000 members and its projects are critical to the world's infrastructure, including Linux, Kubernetes, Node.js and more. The Linux Foundation focuses on employing best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, visit linuxfoundation.org.

Excerpt from:
Intel joins CHIPS Alliance to promote Advanced Interface Bus (AIB) as an open standard - Design and Reuse

Here's what a $10 million lab dedicated to cracking iPhones looks like – 9to5Mac

Kicking off 2020, security and privacy are a hot topic, between the latest standoff between Apple and the FBI over the Pensacola incident and Apple reportedly abandoning its plan to bring end-to-end encryption to iCloud backups. With an in-depth report on what a robust iPhone cracking operation looks like from the inside, Fast Company shares some fascinating details and photos of NYC's $10 million cyber lab.

Fast Company calls New York City's High Technology Analysis Unit lab "ground zero" in the encryption battle between the US government and tech companies like Apple. And it goes way beyond some third-party devices made by companies like Cellebrite or Grayshift.

The lab has been built by Manhattan's cybercrime unit and district attorney Cyrus Vance Jr., and it includes an RF isolation chamber to give them the best chance of cracking iPhones and iPads before alleged criminals can erase them remotely.

The entrance to the radiofrequency isolation chamber, near the middle of the Lefkowitz Building in lower Manhattan, looks like an artifact from the Apollo program, shielded by two airtight, metallic doors that are specially designed to block electromagnetic waves. Inside the room, against one wall, are dozens of Apple iPhones and iPads in various states of disrepair. Some have cracked glass fronts or broken cases. Others look like they've been fished out of a smoldering campfire. Of course, the devices are not there to be fixed. They are evidence confiscated during the commission of alleged crimes.

The district attorney of Manhattan, Cyrus Vance Jr., and the city's cybercrime unit have built this electronic prison for a very specific purpose: to try, using brute force algorithms, to extract the data on the phones before their owners try to wipe the contents remotely.

The report highlights nearly 3,000 phones waiting to be cracked at the lab when Fast Company visited. The High Technology Analysis Unit's director, Steven Moran, says they have created a special, custom process with open source software to deal with the number of devices they get and to know which third-party vendors to work with for cracking iPhones.

On the day I visited the cyber lab, there were nearly 3,000 phones, most related to active criminal investigations, that Moran had not yet been able to access. The team has built a proprietary workflow management program, using open source software, to triage the incredible volume of incoming devices and to escalate the most important cases. "So if a third party were to say, hey, we have a solution that will work on iOS 12.1.2 and it costs X amount of dollars, I can see within five seconds that that's going to affect 16 different phones," Moran says.

After the San Bernardino case, Manhattan district attorney Cyrus Vance Jr. said they decided to build out the high tech lab.

"We had to figure out what we were going to do with this new situation over which we had no control," Vance says. So at a cost of some $10 million, Vance decided to build his own high-tech forensics lab, the first of its kind within a local prosecutor's office.

With that budget, the High Technology Analysis Unit's director, Steven Moran, got some seriously powerful hardware, custom software, and a team of security experts.

The lab's supercomputer is able to generate up to 26 million passcode guesses a second, and there's a robot that can remove a memory chip without using heat.

Moran stocked the cyberlab with mind-bending hardware and a crack team of technology experts, many of whom are ex-military. Proprietary software provides prosecutors with real-time information about each smartphone in their possession, which can be removed from the radiofrequency-shielded room using Ramsey boxes, miniaturized versions of the isolation chamber that allow technicians to manipulate the devices safely. In other corners of the lab are a supercomputer that can generate 26 million random passcodes per second, a robot that can remove a memory chip without using heat, and specialized tools that can repair even severely damaged devices.
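To put the 26 million guesses per second in perspective, here is a rough back-of-the-envelope sketch. It is an upper bound only: iOS rate limiting and hardware-bound key derivation slow real-world guessing by many orders of magnitude, which is exactly why longer alphanumeric passcodes and vendor countermeasures matter.

```python
# Upper-bound time to exhaust a passcode space at 26 million guesses/second,
# ignoring iOS rate limiting and secure-enclave key derivation delays.
GUESSES_PER_SECOND = 26_000_000

def seconds_to_exhaust(keyspace: int) -> float:
    return keyspace / GUESSES_PER_SECOND

for label, keyspace in [
    ("4-digit PIN", 10**4),                      # under a millisecond
    ("6-digit PIN", 10**6),                      # well under a second
    ("8-char lowercase password", 26**8),        # a couple of hours
    ("10-char alphanumeric password", 62**10),   # on the order of a thousand years
]:
    print(f"{label}: {seconds_to_exhaust(keyspace):,.2f} seconds")
```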

Another interesting statistic: 4 out of 5 smartphones that the DA's office in Manhattan gets are now locked, whereas five years ago, only 52% were.

Five years ago, only 52% of the smartphones that the District Attorney's office obtained were locked. Today, that figure is 82%. Vance says the cybercrime lab is able to successfully crack about half of the phones in his possession, but whenever Apple or Google update their software, they have to adapt.

The Manhattan DA is also aware that the lab he's been able to create isn't a possibility for most cities, and highlights his belief that it's not the answer.

Vance is careful to say that he's not whining about the problem. He knows he is better off than 99% of the other jurisdictions in the country. Thanks in part to the billions of dollars the city has collected from prosecuting financial crimes on Wall Street, Vance is able to continue operating his $10 million lab. "But it's not the answer," he says, "and it's not the answer for the country, because we are an office that is uniquely able to pay for expensive services."

In the end, Vance just wants prosecutors to have all the tools available to do their jobs. "You entrust us with this responsibility to protect the public," he says. "At the same time, they [Apple and Google] have taken away one of our best sources of information. Just because they say so. It's not that some third party has decided this is the right thing for Apple and Google to do. They just have done it."

But of course, Apple is unlikely to change its position or focus on iPhone security and privacy, so the cat and mouse game will continue on.

The full Fast Company piece on Manhattan's high-tech iPhone cracking lab is definitely worth a read.




Go here to see the original:
Here's what a $10 million lab dedicated to cracking iPhones looks like - 9to5Mac

Deltec Bank, Bahamas says the Impact of Quantum Computing in Banking will be huge – Press Release – Digital Journal

Deltec Bank: "Quantum Computing can help institutions speed up their transactional activities while making sense of assets that typically seem incongruent."

Technologies based on quantum theory are coming to the financial sector. It is not a matter of if, but when, banks will begin using this option to evolve current business practices.

Companies like JPMorgan Chase and Barclays have over two years of experience working with IBM's quantum computing technology. The goal of this work is to optimize portfolios for investors, but several additional benefits could come into the industry as banks learn more about it.

Benefits of Quantum Computing in Banking

Quantum computing stayed in the world of academia until recent years when technology developers opened trial opportunities. The banking sector was one of the first to start experimenting with what might be possible.

Their efforts have led to the development of four positive outcomes that can occur because of the faster processing power that quantum computing offers.

1. Big Data Analytics

The high-powered processing capabilities of this technology make it possible for banks to optimize their big data. According to Deltec Bank, Quantum Computing can help institutions speed up their transactional activities while making sense of assets that typically seem incongruent.

2. Portfolio Analysis

Quantum computing permits high-frequency trading activities because it can appraise assets and analyze portfolios to determine individual needs. The creation of algorithms built on the full capabilities of this technology can mine more information to find new pathways to analysis and implementation.

3. Customer Service Improvements

This technology gives banks more access to artificial intelligence and machine learning opportunities. The data collected by institutions can improve customer service by focusing on consumer engagement, risk analysis, and product development. There will be more information available to develop customized financial products that meet individual needs while staying connected to core utilities.

4. Improved Security

The results of quantum computing in banking will create the next generation of encryption and safeguarding efforts to protect data. Robust measures that include encrypted individual identification keys and instant detection of anomalies can work to remove fraudulent transactions.

Privately Funded Research is Changing the Banking Industry

Although some firms are working with IBM and other major tech developers to bring quantum computing to the banking sector, it is private money that funds most of the innovations.

An example of this effort comes from Rigetti Computing. This company offers a product called Forest, which is a downloadable SDK that is useful in the writing and testing of programs using quantum technologies.

1QB Information Technologies in Canada has an SDK that offers the necessary tools to develop and test applications on quantum computers.

How the world approaches banking and finance could be very different in the future because of quantum computing. This technology might not solve every problem the industry faces today, but it can certainly put a significant dent in those issues.

Disclaimer: The author of this text, Robin Trehan, has an undergraduate degree in economics, a Master's in international business and finance, and an MBA in electronic business. Trehan is Senior VP at Deltec International, http://www.deltecbank.com. The views, thoughts, and opinions expressed in this text are solely the views of the author, and do not necessarily reflect the views of Deltec International Group, its subsidiaries and/or employees.

About Deltec Bank

Headquartered in The Bahamas, Deltec is an independent financial services group that delivers bespoke solutions to meet clients' unique needs. The Deltec group of companies includes Deltec Bank & Trust Limited, Deltec Fund Services Limited, Deltec Investment Advisers Limited, Deltec Securities Ltd. and Long Cay Captive Management.

Media Contact
Company Name: Deltec International Group
Contact Person: Media Manager
Email: Send Email
Phone: 242 302 4100
Country: Bahamas
Website: https://www.deltecbank.com/

Read more from the original source:
Deltec Bank, Bahamas says the Impact of Quantum Computing in Banking will be huge - Press Release - Digital Journal

The Need For Computing Power In 2020 And Beyond – Forbes

Having led a Bitcoin mining firm for over two years, I've come to realize the importance of computing power. Computing power connects the real (chip energy) and virtual (algorithm) dimensions of our world. Under the condition that the ownership of the assets remains unchanged, computing power is an intangible asset that can be used and circulated. It is a commercialized technical service and a consumption investment. This is a remarkable innovation for mankind, and it is an upgrade for the digital economy.

2020 marks the birth year of the computing power infrastructure. Our world is at the beginning of a new economic and technological cycle. We have entered the digital economic civilization. This wave of technology is driven by the combination of AI, 5G, quantum computing, big data and blockchain. People have started realizing that in the age of the digital economy, computing power is the most important and innovative form of productivity.

Computing power is not just technical but also economic innovation. It's a small breakthrough at the fundamental level with impact that will be immeasurable. And people have finally seen the value of the bottom layer through the 10 years of crypto mining evolution.

However, there are two major problems faced by the entire technological landscape: First is insufficient computing power. Second is the dominance of centralized computing power, which creates a monopoly and gives rise to manipulation problems and poor data security.

How does more computing power help?

Artificial Intelligence

Mining Bitcoin has allowed my company to build the foundation of computing infrastructure, so we are planning to eventually expand into AI computing. This experience has further shown me the importance of working toward developing more computing power if tech leaders want to continue creating innovative technologies.

Consider this: For an AI system to recognize someone's voice or identify an animal or a human being, it first needs to process millions of audio, video or image samples. It then learns to differentiate between two different pitches of voices or to differentiate faces based on various facial features. To reach that level of precision, an AI model needs to be fed a tremendous amount of data.

It is only possible to do that if we have powerful computers that can process millions of data points every single second. The more the computing power, the faster we can feed the data to train the AI system, resulting in a shorter span for the AI to reach near-perfection, i.e., human-level intelligence.

The computing power required by AI has been doubling roughly every three and a half months since 2012. The need to build better AI has made it mandatory to keep up with this requirement for more computing power. Tech companies are leaving no stone unturned to rise to this demand.

It is almost as if computing power is now an asset into which investors and organizations are pouring millions of dollars. They are constantly testing and modifying their best chips to produce more productive versions of them. The results of this investment are regularly seen in the form of advanced, more compact chips capable of producing higher computing power while consuming less energy.

For new technological breakthroughs, computing power itself has become the new "production material" and "energy." Computing power is the fuel of our technologically advanced society. I've observed it is driving the development in various technological landscapes, such as AI, graphics computing, 5G and cryptocurrency.

Cryptocurrency Mining

Similar to AI, the decentralized digital economy sector also relies on high computing power. Transactions of cryptocurrencies, such as Bitcoin, are validated through a decentralized process called "mining." Mining requires miners across the world to deploy powerful computers to find the solution or the hash to a cryptographic puzzle that proves the legitimacy of each transaction requested on the blockchain.

The bad news, however, is that the reward to mine Bitcoin is halved almost every four years. This means that following May 20, 2020, the next halving date, miners who mine Bitcoin would receive half the reward per block compared to what they do now. Two primary factors that compensate for the halving of rewards are an increase in the price of Bitcoin and advanced chips with high computing power.

Miners run not one but multiple high-end graphics processing units to mine Bitcoin, which is an electricity-intensive process. The only way to keep mining profitably is to invest in better chips that produce more computing power with lower electricity consumption. This helps miners process more hashes per second (i.e., the hashrate) to get to the right hash and attain the mining reward.

So far, mining chip producers have delivered the promise of more efficient chips leading to an increase in the mining hashrate from 50 exahashes per second to 90 exahashes per second in the past six months. Per the reports, the efficiency of the latest chips combined with increased Bitcoin prices has helped keep the mining business highly profitable since the previous halving.
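As a rough sketch of why both the network hashrate and the halving matter to mining economics, a miner's expected daily reward is simply its share of the network hashrate multiplied by the block subsidy and the number of blocks per day. The farm size below is invented for illustration; real revenue also depends on transaction fees, Bitcoin's price, electricity cost, and difficulty adjustments.

```python
# Back-of-the-envelope expected mining revenue, in BTC per day.
# Illustrative assumptions: network hashrate ~90 EH/s (as cited above),
# a hypothetical farm running 10 PH/s, and 144 blocks per day on average.
NETWORK_HASHRATE_EHS = 90          # exahashes per second
FARM_HASHRATE_EHS = 0.01           # 10 PH/s expressed in EH/s
BLOCKS_PER_DAY = 144               # one block roughly every 10 minutes

def expected_btc_per_day(block_subsidy_btc: float) -> float:
    share = FARM_HASHRATE_EHS / NETWORK_HASHRATE_EHS
    return share * BLOCKS_PER_DAY * block_subsidy_btc

print("Before the 2020 halving (12.5 BTC/block):",
      round(expected_btc_per_day(12.5), 4), "BTC/day")
print("After the 2020 halving  (6.25 BTC/block):",
      round(expected_btc_per_day(6.25), 4), "BTC/day")
```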

High computing power has become an addiction we humans are not getting rid of in the foreseeable future. With our growing fondness for faster computer applications and more humanlike AI, it's likely that we demand faster and more perfect versions of the systems we use today. A viable way to fulfill this would be to produce more computing power.

The two biggest challenges that lie in our way are producing clean electricity at lower costs and developing chips that have a lower electricity-consumption-to-computing-power-production ratio. The core of industrial production competition today lies in the cost of producing electricity. Low energy prices enable us to provide stable services. For example, there is an abundance of hydro-electric power in southwest China, and cooperative data centers are located there so they can harness the hydropower.

If we could make low-cost, clean energy available everywhere, we'd cut the cost of producing computing power. When this energy is used by power-efficient computing chips, the total cost drops even more and high computing power becomes highly affordable.

See the rest here:
The Need For Computing Power In 2020 And Beyond - Forbes

LIVE FROM DAVOS: Henry Blodget leads panel on the next decade of tech – Business Insider Nordic

The past decade saw technological advancements that transformed how we work, live, and learn. The next one will bring even greater change as quantum computing, cloud computing, 5G, and artificial intelligence mature and proliferate. These changes will happen rapidly, and the work to manage their impact will need to keep pace.

This session at the World Economic Forum, in Davos, Switzerland, brought together industry experts to discuss how these technologies will shape the next decade, followed by a panel discussion about the challenges and benefits this era will bring and if the world can control the technology it creates.

Henry Blodget, CEO, cofounder, and editorial director, Insider Inc.

This interview is part of a partnership between Business Insider and Microsoft at the 2020 World Economic Forum. Business Insider editors independently decided on the topics broached and questions asked.

Below, find each of the panelists' most memorable contributions:

Julie Love, senior director of quantum business development, Microsoft

Julie Love believes global problems such as climate change can potentially be solved far more quickly and easily through developments in quantum computing.

She said: "We [Microsoft] think about problems that we're facing: problems that are caused by the destruction of the environment; by climate change, and [that require] optimization of our natural resources, [such as] global food production."

"It's quantum computing that really a lot of us scientists and technologists are looking for to solve these problems. We can have the promise of solving them exponentially faster, which is incredibly profound. And that the reason is this: [quantum] technology speaks the language of nature.

"By computing the way that nature computes, there's so much information contained in these atoms and molecules. Nature doesn't think about a chemical reaction; nature doesn't have to do some complex computation. It's inherent in the material itself.

Love claimed that, if harnessed in this way, quantum computing could allow scientists to design a compound that could remove carbon from the air. She added that researchers will need to be "really pragmatic and practical about how we take this from science fiction into the here-and-now."

Justine Cassell, a professor specializing in AI and linguistics

"I believe the future of AI is actually interdependence, collaboration, and cooperation between people and systems, both at the macro [and micro] levels," said Cassell, who is also a faculty member of the Human-Computer Interaction Institute at Carnegie Mellon University.

"At the macro-level, [look], for example, at robots on the factory floor," she said. "Today, there's been a lot of fear about how autonomous they actually are. First of all, they're often dangerous. They're so autonomous, you have to get out of their way. And it would be nice if they were more interdependent if we could be there at the same time as they are. But also, there is no factory floor where any person is autonomous.

In Cassell's view, AI systems could also end up being built collaboratively with experts from non-tech domains, such as psychologists.

"Today, tools [for building AI systems] are mostly machine learning tools," she noted. "And they are, as you've heard a million times, black boxes. You give [the AI system] lots of examples. You say: 'This is somebody being polite. That is somebody being impolite. Learn about that.' But when they build a system that's polite, you don't know why they did that.

"What I'd like to see is systems that allow us to have these bottom-up, black-box approaches from machine learning, but also have, for example, psychologists in there, saying 'that's not actually really polite,' or 'it's polite in the way that you don't ever want to hear.'"

Microsoft president Brad Smith

"One thing I constantly wish is that there was a more standardized measurement for everybody to report how much they're spending per employee on employee training because that really doesn't exist, when you think about it," said Smith, Microsoft's president and chief legal officer since 2015.

"I think, anecdotally, one can get a pretty strong sense that if you go back to the 1980s and 1990s employers invested a huge amount in employee training around technology. It was teaching you how to use MS-DOS, or Windows, or how to use Word or Excel interestingly, things that employers don't really feel obliged to teach employees today.

"Learning doesn't stop when you leave school. We're going to have to work a little bit harder. And that's true for everyone.

He added that this creates a further requirement: to make sure the skills people do pick up as they navigate life are easily recognizable by other employers.

"Ultimately, there's a wide variety of post-secondary credentials. The key is to have credentials that employers recognize as being valuable. It's why LinkedIn and others are so focused on new credentialing systems. Now, the good news is that should make things cheaper. It all should be more accessible.

"But I do think that to go back to where I started employers are going to have to invest more [in employee training]. And we're going to have to find some ways to do it in a manner that perhaps is a little more standardized."

Nokia president and CEO Rajeev Suri

Suri said 5G will be able to help develop industries that go far beyond entertainment and telecoms, and will impact physical or manual industries such as manufacturing.

"The thing about 5G is that it's built for machine-type communications. When we received the whole idea of 5G, it was 'how do we get not just human beings to interact with each other, but also large machines," he said.

"So we think that there is a large economic boost possible from 5G and 5G-enabled technologies because it would underpin many of these other technologies, especially in the physical industries."

Suri cited manufacturing, healthcare, and agriculture as just some of the industries 5G could help become far more productive within a decade.

He added: "Yes, we'll get movies and entertainment faster, but it is about a lot of physical industries that didn't quite digitize yet. Especially in the physical industries, we [Nokia] think that the [productivity] gains could be as much as 35% starting in the year 2028 starting with the US first, and then going out into other geographies, like India, China, the European Union, and so on.

View post:
LIVE FROM DAVOS: Henry Blodget leads panel on the next decade of tech - Business Insider Nordic

Federated machine learning is coming – here’s the questions we should be asking – Diginomica

A few years ago, I wondered how edge data would ever be useful given the enormous cost of transmitting all the data to either the centralized data center or some variant of cloud infrastructure. (It is said that 5G will solve that problem).

Consider, for example, applications of vast sensor networks that stream a great deal of data at small intervals. Vehicles on the move are a good example.

There is telemetry from cameras, radar, sonar, GPS and LIDAR, the latter about 70MB/sec. This could quickly amount to four terabytes per day (per vehicle). How much of this data needs to be retained? Answers I heard a few years ago were along two lines:

My counterarguments at the time were:

Introducing TensorFlow Federated, via The TensorFlow Blog:

This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned?

Since I looked at this a few years ago, the distinction between an edge device and a sensor has more or less disappeared. Sensors can transmit via wifi (though there is an issue of battery life, and if they're remote, that's a problem); the definition of the edge has widened quite a bit.

Decentralized data collection and processing have become more powerful and able to do an impressive amount of computing. A case in point is Intel's Neural Compute Stick 2, a computer vision and deep learning accelerator powered by the Intel Movidius Myriad X VPU, which can plug into a Pi for less than $70.00.

But for truly distributed processing, the Apple A13 chipset in the iPhone 11 has a few features that boggle the mind. From Inside Apple's A13 Bionic system-on-chip: the Neural Engine is a custom block of silicon separate from the CPU and GPU, focused on accelerating machine learning computations. The CPU has a set of "machine learning accelerators" that perform matrix multiplication operations up to six times faster than the CPU alone. It's not clear how exactly this hardware is accessed, but for tasks like machine learning (ML) that use lots of matrix operations, the CPU is a powerhouse. Note that this matrix multiplication hardware is part of the CPU cores and separate from the Neural Engine hardware.

This raises the question: "Why would a smartphone have neural net and machine learning capabilities, and does that have anything to do with the data transmission problem for the edge?" A few years ago, I thought the idea wasn't feasible, but the capability of distributed devices has accelerated. How far-fetched is this?

Let's roll the clock back thirty years. The finance department of a large diversified organization would prepare in the fall a package of spreadsheets for every part of the organization that had budget authority. The sheets would start with low-level detail, official assumptions, etc. until they all rolled up to a small number of summary sheets that were submitted to headquarters. This was a terrible, cumbersome way of doing things, but it does, in a way, presage the concept of federated learning.

Another idea that vanished is Push Technology, which shared the same network load as centralizing sensor data, just in the opposite direction. About twenty-five years ago, when everyone had a networked PC on their desk, the PointCast Network used push technology. Still, it did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many places. If Federated Learning works, those problems have to be addressed.

Though this estimate changes every day, there are 3 billion smartphones in the world and 7 billion connected devices. You can almost hear the buzz in the air of all of that data that is always flying around. The canonical image of ML is that all of that data needs to find a home somewhere so that algorithms can crunch through it to yield insights. There are a few problems with this, especially if the data is coming from personal devices, such as smartphones, Fitbits, even smart homes.

Moving highly personal data across the network raises privacy issues. It is also costly to centralize this data at scale. Storage in the cloud is asymptotically approaching zero in cost, but the transmission costs are not. That includes both local WiFi from the devices (or even cellular) and the long-distance transmission from the local collectors to the central repository. This is all very expensive at this scale.

Suppose large-scale AI training could be done on each device, bringing the algorithm to the data rather than vice versa. It would be possible for each device to contribute to a broader application while not having to send its data over the network. This idea has become respectable enough that it has a name: Federated Learning.

Jumping ahead, there is no controversy that degrading device performance and user experience in order to train a network, or compressing a model and resorting to lower accuracy, are not acceptable alternatives. In Federated Learning: The Future of Distributed Machine Learning:

To train a machine learning model, traditional machine learning adopts a centralized approach that requires the training data to be aggregated on a single machine or in a datacenter. This is practically what giant AI companies such as Google, Facebook, and Amazon have been doing over the years. This centralized training approach, however, is privacy-intrusive, especially for mobile phone users. To train or obtain a better machine learning model under such a centralized training approach, mobile phone users have to trade their privacy by sending their personal data stored inside phones to the clouds owned by the AI companies.

The federated learning approach decentralizes training across mobile phones dispersed across geography. The presumption is that they collaboratively develop machine learning models while keeping their personal data on their phones. An example would be building a general-purpose recommendation engine for music listeners. While the personal data and personal information are retained on the phone, I am not at all comfortable that data contained in the result sent to the collector cannot be reverse-engineered, and I haven't heard a convincing argument to the contrary.

Here is how it works. A computing group, for example, is a collection of mobile devices that have opted to be part of a large scale AI program. The device is "pushed" a model and executes it locally and learns as the model processes the data. There are some alternatives to this. Homogeneous models imply that every device is working with the same schema of data. Alternatively, there are heterogeneous models where harmonization of the data happens in the cloud.
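To make "push a model, learn locally, aggregate centrally" concrete, here is a minimal federated-averaging sketch in plain NumPy. It is a conceptual illustration rather than any production framework's API: each simulated device fits a linear model on its own private data, and only the weight vectors, never the raw data, are averaged by the server, weighted by how many samples each device holds.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground truth the devices are collectively learning

def local_update(global_w, X, y, lr=0.1, epochs=20):
    """Run a few epochs of gradient descent on one device's private data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate 5 devices, each holding a different amount of private data.
devices = []
for _ in range(5):
    n = int(rng.integers(20, 200))
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    devices.append((X, y))

global_w = np.zeros(2)
for round_id in range(10):                       # communication rounds
    updates, sizes = [], []
    for X, y in devices:                         # each device trains locally
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # The server aggregates only model weights, weighted by local sample count.
    global_w = np.average(updates, axis=0, weights=sizes)

print("learned:", np.round(global_w, 3), "target:", true_w)
```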

Here are some questions in my mind.

Here is the fuzzy part: federated learning sends the results of the learning as well as some operational detail such as model parameters and corresponding weights back to the cloud. How does it do that and preserve your privacy and not clog up your network? The answer is that the results are a fraction of the data, and since the data itself is not more than a few Gb, that seems plausible. The results sent to the cloud can be encrypted with, for example, homomorphic encryption (HE). An alternative is to send the data as a tensor, which is not encrypted because it is not understandable by anything but the algorithm. The update is then aggregated with other user updates to improve the shared model. Most importantly, all the training data remains on the user's devices.
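As an illustration of how homomorphic encryption could fit in, here is a minimal sketch using the open-source python-paillier library (phe), assumed to be installed; the "updates" are single numbers for brevity, and this shows only the additive property, not the full secure-aggregation protocols real systems layer on top.

```python
from phe import paillier  # pip install phe (python-paillier), assumed available

# The aggregator generates a keypair; devices receive only the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each device's "update" is a single model weight here, for brevity.
device_updates = [0.12, -0.05, 0.30, 0.02]

# Devices encrypt their updates before sending them; Paillier is additively
# homomorphic, so ciphertexts can be summed without decrypting anything.
encrypted = [public_key.encrypt(u) for u in device_updates]

aggregate = encrypted[0]
for ct in encrypted[1:]:
    aggregate = aggregate + ct          # addition happens on ciphertexts

# Only the final aggregate (here, the mean) is ever decrypted.
mean_update = private_key.decrypt(aggregate) / len(device_updates)
print(mean_update)   # 0.0975, without any individual update being decrypted
```

Note that in this toy setup the same party holds the private key and could decrypt individual updates; a real deployment separates those roles or uses dedicated secure-aggregation protocols, which is part of why the privacy claims deserve scrutiny.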

In CDO Review, The Future of AI May Be In Federated Learning:

Federated Learning allows for faster deployment and testing of smarter models, lower latency, and less power consumption, all while ensuring privacy. Also, in addition to providing an update to the shared model, the improved (local) model on your phone can be used immediately, powering experiences personalized by the way you use your phone.

There is a lot more to say about this. The privacy claims are a little hard to believe. When an algorithm is pushed to your phone, it is easy to imagine how this can backfire. Even the tensor representation can create a problem. Indirect reference to real data may be secure, but patterns across an extensive collection can surely emerge.

Here is the original post:
Federated machine learning is coming - here's the questions we should be asking - Diginomica

Looking for an impressive salary hike? Power up your career with upGrad's Machine Learning and Cloud prog – Times of India

In the last two decades, Artificial Intelligence has steadily made its way into versatile industry applications. This has helped businesses reap major rewards by reducing operational costs, triggering efficiency, boosting revenue and improving the overall customer experience. With a constantly evolving range of technologies, efforts are on to develop AI to a stage where it reduces human intervention to the minimum. This is where the relevance of Machine Learning and Cloud comes in. As businesses transform the way in which they communicate, work and grow, the importance of Cloud in deploying Machine Learning models becomes important. Because of the massive storage and processing of data involved, Machine Learning often requires more computational power to train models than local machines can provide. Thus, when Cloud is paired with ML models, it forms the Intelligent Cloud that becomes a suitable destination for any company's Machine Learning projects. The Cloud will enable ML data to make more accurate predictions and analyze data more efficiently, enhancing business value to a huge extent. With so many developments, any study of Machine Learning is incomplete without learning about its association with the Cloud.

To help working professionals become a part of any company's end-to-end packaged ML solution, IIT Madras, in collaboration with upGrad, has launched the ML and Cloud program. As one of India's largest online education platforms, upGrad recognizes the huge potential of taking Machine Learning to the Cloud, and that the first step to enable this is to train ML professionals in the right direction. Let's take a look at how upGrad's ML program in Cloud will help professionals skill up in the foreseeable future.

A revolutionary advanced certification course in Machine Learning and Cloud

In the current business set-up, data and insights can be termed the true currency of business operations. This is why every organization is rapidly scaling up its ML capabilities. upGrad's Advanced Certification in Machine Learning and Cloud is helping learners become Machine Learning experts by training them to deploy machine learning models using PySpark on the Cloud. This prestigious certification provides students with the opportunity to learn from experienced Machine Learning faculty and industry leaders. Another highlight of this 9-month program is that it has about 300+ hiring partners, ensuring that professionals who choose to upskill with this course end up in the industry of their choice.

The Advanced Certification in Machine Learning and Cloud by upGrad seeks to build the employability of professionals and boost their annual packages. The requirement for ML professionals has now percolated to multiple industry domains like e-commerce, retail, healthcare, banking, manufacturing, transport, NBFC, and finance, among others. The course offers an equal opportunity to every learner, enhancing their relevance in the company they will work for. In Data and ML related hiring, recruiters look for people who are proficient and knowledgeable and can prove to be assets to their employers. This certification program by upGrad is an excellent opportunity to make a credible career transition. Considering ML is one of the fastest-growing fields in Data today, Machine Learning engineers are getting hired at astounding pay packages. In fact, an Indeed survey revealed that there has been more than a 300% spike in ML hiring since 2015. Considering this shift, upGrad's Advanced Program in Machine Learning and Cloud is the best way to flag off one's ML journey.

Top skills that the program will offer

Programming: Learners will be working in core and necessary languages like Python and SQL, since the former is required for ML and the latter for the Cloud.

Machine learning concepts: The program is set to offer a holistic understanding of both basic and advanced subjects within ML. This includes the application of the appropriate ML algorithm to categorize unknown data or make predictions about it. Also included is the ability to modify and craft algorithms of your own as and when the need arises.

Foundations of Cloud and Hadoop: This includes knowledge of Hadoop, Hive, and HDFS, along with the implementation of ML algorithms in the cloud on Spark/PySpark (AWS/Azure/GCP). Overall, the curriculum is designed so that students learn the local Python implementation as well as the cloud PySpark implementation of classical machine learning algorithms.
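For a feel of what a "cloud PySpark implementation of classical machine learning algorithms" involves in practice, here is a minimal, hypothetical Spark MLlib sketch; the dataset, column names, and app name are invented, and this is not drawn from upGrad's actual course material.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lr-sketch").getOrCreate()

# Tiny invented dataset; in a cloud deployment this would typically be read
# from distributed storage instead (e.g. spark.read.parquet on S3/ADLS/GCS).
df = spark.createDataFrame(
    [(0.5, 1.2, 0.0), (1.5, 0.3, 1.0), (2.2, 1.1, 1.0), (0.1, 0.4, 0.0)],
    ["income", "tenure", "label"],
)

# MLlib expects the features assembled into a single vector column.
assembler = VectorAssembler(inputCols=["income", "tenure"], outputCol="features")
train = assembler.transform(df)

model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
model.transform(train).select("label", "prediction").show()

spark.stop()
```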

Who should apply for this program?

Keeping the overall market landscape in mind, this program by upGrad is ideal for the following categories:

The pedagogy and content of upGrad's Advanced Program in Machine Learning and Cloud is a perfect integration of online lectures, offline engagement, practical case studies, and interactive networking sessions. The platform provides full support to young professionals in their ML journey, while also catering to the needs of employers by training the future workforce in all data-related aspects. Whether it is resume feedback, mock interview sessions with industry experts, or placement drives with top-notch companies, upGrad has provided it all to its learners. Many of these learners have been placed at companies like KPMG, Uber, Big Basket, Bain & Co, PwC, Zivame, Fractal Analytics, Microsoft, etc., with impressive salary shifts.

See original here:
Looking for an impressive salary hike? Power up your career with upGrad's Machine Learning and Cloud prog - Times of India