8 Best Blockchain & Cryptocurrency Books To Read in 2019 …

Cryptocurrency books are a wonderful way to learn about the exciting Bitcoin, Altcoin, and Blockchain world.

Due to the relatively young age of the cryptocurrency space, there aren't that many cryptocurrency books available yet.

However, as with most things in life, quality is more important than quantity.

The cryptocurrency space is an industry that has attracted a lot of intellect, and this is clearly reflected in the quality of books that have been written on the topic.

In this article, we compare the best cryptocurrency books to help you find a cryptocurrency or blockchain book you will find interesting.

With over 130 reviews on Amazon.com, 88% of which give the book 5 stars, The Bitcoin Standard is by far one of the best cryptocurrency books out there. The book is authored by Saifedean Ammous, a very vocal Bitcoin maximalist and economist.

In The Bitcoin Standard, Saifedean covers the evolution of money, dives deep into what makes money "hard" and why that matters, and outlines a potential future with Bitcoin as the global reserve currency.

Written by Andreas Antonopoulos, a Bitcoin educator and well-known figure in the space, Mastering Bitcoin is a must-read for people who already grasp the basics of Bitcoin and want to dive deeper.

This cryptocurrency book teaches its readers exactly how Bitcoin's infrastructure functions and the role cryptography plays in Bitcoin, and also gets into some technical details of how programmers can develop a Bitcoin-like cryptocurrency.

Andreas has a very unique and comprehensive writing style that elegantly reflects his years of experience as an educator and public speaker.

The Internet of Money is Andreas Antonopoulos's second cryptocurrency book and aims to lay out a future with Bitcoin as money in a much less technical way. This cryptocurrency book takes a top-level approach to explaining what Bitcoin is and how it will change our lives. Again, as we already pointed out for the previous book, Andreas's years of experience as a Bitcoin educator make his writing style very enjoyable to read.

The Internet of Money has over 281 reviews on Amazon.com, which puts it right at the top next to The Bitcoin Standard as one of the best cryptocurrency books.

If your goal is to get a general overview of the cryptocurrency space, then the blockchain book Bitcoin and Cryptocurrency Technologies might be for you. Authored by Arvind Narayanan, an assistant professor at Princeton, the book dives into the origin of cryptocurrencies, key concepts like decentralization and privacy, and the value proposition and risks of altcoins.

In essence, the book aims to give a solid overview of what is often described today with the slightly overused term "blockchain technology." That being said, it's important to note that the book does not address Ethereum or programmable blockchains in any real way.

In conclusion, the book is written in a very comprehensive style and is an especially good fit for newcomers to the cryptocurrency space who want to fall down the crypto rabbit hole.

In spot 5 of our list of the best cryptocurrency books, we have The Business Blockchain, an in-depth analysis of how blockchain technology is poised to disrupt the enterprise and how firms operate. The author, William Mougayar, predicts a future with thousands of blockchains that will redefine power and governance by enabling frictionless value exchange and new flows of value.

The foreword by Vitalik Buterin adds a very interesting touch to the book and is the seal of approval for this written masterpiece. Although the book does briefly touch on the technical side of things, that does not stop it from being an excellent read for people who are not very familiar with blockchain just yet.

The Age of Cryptocurrency was written by Wall Street Journal reporters Paul Vigna and Michael Casey back in 2015; however, it is still highly relevant today. This cryptocurrency book thoroughly answers the question of why anyone should care about Bitcoin. It achieves this by presenting the idea of a financial system running on Bitcoin, and how such a financial system could have prevented the economic meltdown of 2008.

That being said, the book also weighs the potential downsides of a Bitcoin-based financial system, citing the facilitation of illicit money transfers as the main concern.

Although many cryptocurrency books do a great job of describing what a future with cryptocurrencies could look like, they often neglect the investment and entrepreneurial opportunity of this historic wealth transfer.

Authored by Chris Burniske and Jack Tatar, the book Cryptoassets approaches the topic of Bitcoin and cryptocurrencies from an investment perspective, showing investors what to be on the lookout for when investing in this wild asset class.

Furthermore, Cryptoassets also teaches investors how to navigate a market whose very nature is a series of repeating bubbles, and which is plagued by scams and highly volatile.

Cryptocurrency Investing Bible aims to debunk some of the most common misconceptions about Bitcoin and cryptocurrencies in general. The book answers questions like why cryptocurrencies are not a bubble, why not all digital assets are scams, why cryptocurrencies are not only used by criminals, and why it's not just money for nerds.

This book is an excellent starting point for newcomers to the cryptocurrency space, and the author, Alan T. Norman, does an excellent job of breaking down even the most complex concepts into easy-to-grasp terms.

Selected as a Financial Times book of the month, Life After Google is a written masterpiece that dives into the societal changes that come with the rise of blockchain technology, and what this means for the large corporations that have been abusing our data for the past decade. In his book, George Gilder claims that the age of Google is coming to an end and that the blockchain economy will bring power back to the individual in a variety of ways.

In the words of Peter Thiel, co-founder of PayPal: "Google's algorithms assume the world's future is nothing more than the next moment in a random process. George Gilder shows how deep this assumption goes."

In Blockchain for Dummies, Tiana Laurence, founder of Factom, explains blockchain technology and its potential impact in a very simple way. This blockchain book is an excellent starting point for people who know absolutely nothing about blockchain technology and who want to get their feet wet with a good read.

The book also covers how businesses can become more efficient by adopting Blockchain technology, so if you are a business owner, then that might be another reason why you could find this book intriguing.

Digital Gold covers Bitcoin's value proposition as the best store of value and medium of exchange that humanity has ever created. Some other topics discussed in depth in this bitcoin book are the origin of Bitcoin and its mysterious founder, the Silk Road dark web marketplace and why it was such a crucial step for Bitcoin, and Bitcoin's first black swan event, unleashed by the Mt. Gox hack. This book is an excellent choice for readers who want to learn about the past, present, and future of Bitcoin without diving too deep into technical details.

If you want to learn more about this topic, then you should definitely also check out our article on why Bitcoin is the new Gold.

American Kingpin is a cryptocurrency book that tells the fascinating story of the dark web marketplace Silk Road and how it connects to the development of Bitcoin. The book describes how Silk Road was the first large-scale application of Bitcoin as a medium of exchange, and the factor that truly let the genie out of the bottle.

In this book, Nick Bilton tells the story of Ross Ulbricht, a 26-year-old libertarian, from when he started the Silk Road until his arrest by the FBI in 2013. The book perfectly captures how challenging it was for law enforcement to shut down the Silk Road, since the marketplace leveraged a hard-to-seize form of money, Bitcoin.

With over 400 positive reviews on Amazon.com, this book absolutely deserves its spot as one of the best cryptocurrency books available at the moment.

Mastering Ethereum is another masterpiece by Andreas Antonopoulos in which he dives into the technical details of how to build your first smart contract, what decentralized autonomous organizations are, and how Ethereum might redefine the future of governance.

The book also dives into why multi-billion-dollar organizations like IBM and Nasdaq are starting to get interested in this groundbreaking technology, and what the future holds for Ethereum and its native currency, ETH.

This blockchain book can get quite technical at times, which is why we would only recommend it to people who are very familiar with the space or who have a technical background.

In The Scandal of Money, George Gilder describes how broken our current financial system is and calls out the blatant corruption that is going on. In particular, the book describes how our monetary system was actively designed to make the elite richer at the expense of the middle and lower classes. If your goal is to better understand the problem that Bitcoin and cryptocurrencies in general are trying to solve, then this cryptocurrency book is for you.

Through a compilation of emails, forum posts, and comments, The Book of Satoshi gives us a profound insight into the twisted yet genius mentality of Bitcoin's anonymous founder. Although the book was published back in 2014, it is still as up to date as it can get, since there have been no traces of or updates from Satoshi since he disappeared back in April 2011.

The book also dives deep into the economics of Bitcoin and its potential future outlook.

This book by Phil Champagne is an excellent resource for anyone who wants to learn more about the mysterious Satoshi Nakamoto or about the implications of Bitcoin for our society.

Is there a great blockchain or cryptocurrency book that we have missed in this article? Let us know in the comment section below!

CoinDiligent Staff Writer

Link:
8 Best Blockchain & Cryptocurrency Books To Read in 2019 ...

India are planning to release their own cryptocurrency – FXStreet

India has been refusing to make cryptocurrencies legal, and the court case against the Reserve Bank of India has been delayed once again.

The Reserve Bank of India has banned the public from using cryptocurrencies like Bitcoin, and maybe we have just found out why.

According to reports on Friday afternoon, the central bank is planning to release its own. Shaktikanta Das, the governor of the RBI, noted that the issuance of the cryptocurrency would come as a sovereign mandate.

Meanwhile, the ban on cryptocurrencies remains in place until the court releases another date. Two months ago, the Pune City police in India reportedly seized 85 million rupees (USD 1.2 million) realized from the sale of 244 confiscated bitcoins due to the RBI's regulation on cryptocurrencies.

Also, there have been many other cases of fraud and improper business activity related to digital currencies in the nation.

It was also said that any launch of an Indian cryptocurrency would not be handed over to a private company, as there is a huge challenge around money laundering.

View post:
India are planning to release their own cryptocurrency - FXStreet

A Guide on How to Identify Cryptocurrency and ICO Scams – CryptoNewsZ

It will not be wrong to say that ICOs, or Initial Coin Offerings, form the backbone of the cryptocurrency arena. Despite the slump of 2018, the digital currency space did not lose its charm and dominance in the market. More than a dozen new cryptocurrency projects are introduced to investors every month. These strategic launches of virtual tokens and coins bring with them a lucrative suite of Initial Coin Offerings. With a view to doubling their profits and recouping the losses suffered during the market crash, investors bank on these newly launched ICOs, which satiate their appetite for returns in the best possible manner. Apart from experienced investors, ICOs also attract first-time investors in large numbers.

Interestingly, these large-scale ICO offerings can also land investors in trouble, as they open the door for fraudulent projects and scammers. These malicious offerings lure customers and compel them to place their bets on illicit tokens and coins. It remains quite a tiring task to distinguish the genuine ICOs from the fake ones present in the market in disguise. The fast-moving crypto industry has made it hard for even the most experienced players to keep pace with new innovations, and this has made it cumbersome for investors to critically examine new crypto projects and threat-prone ICOs.

To give a better understanding of the subject, this informative article is here to the rescue and will help readers get a thorough grasp of how to identify cryptocurrency and ICO scams. Have a look:

The administrative team and the developers working behind a cryptocurrency project have an essential role in ensuring the success of its ICO. Scammers take advantage of this to deceive investors in a well-planned manner. They invent non-existent founders for the project in question and create legitimate-looking biographies for them on the internet. Some fraudsters list the names of renowned crypto developers on their plans to gain the trust of investors.

It is of utmost importance for any investor to first conduct thorough research on the team members of a project before investing. A quick check of their social media accounts and LinkedIn profiles can clear the picture. Investors should also cross-check the followers of the listed developers and make sure there is active participation from their side, with a plausible number of followers. One can also check the official profiles of the well-known crypto developers associated with the project to see whether they have confirmed such claims.

The investors should also keep track of the qualifications and experience of the developers involved with the project to avoid getting trapped in fraudulent ICOs.

The whitepaper of a crypto project gives crucial information such as background, project roadmap, strategies, and implementation. A deceptive project might lack the detail that needs to be laid out in a whitepaper. A project that fails to present a whitepaper to investors should be dismissed out of hand.

Some scammers produce well-crafted whitepapers to add an element of authenticity to their project. PlexCoin, a scam project, swindled more than $15 million before being suspended by the U.S. Securities and Exchange Commission in December.

An excellent whitepaper strives to attract investors' attention by convincing them of the project's prospects. If a whitepaper fails to instill a substantial amount of trust in the project, then you should back out instantly.

While choosing an ICO project, one should consider the concepts, goals, and implementation strategies the project plans to adopt in the future. New projects often have useful and innovative ideas but fail to prepare the blueprint to bring their goals to reality. They face a downfall within a few months, making investors lose their money. A good ICO not only has a profitable idea but also goal-oriented plans to achieve it.

Transparency is also a significant point to look at before making an investment. A good ICO project keeps its customers informed about the developments made in the project by publishing progress reports on its website or social media handles. On the other hand, fake ICOs will never give their investors the complete details of the plan.

It is also an excellent idea to track down the opinions shared by other investors and non-investors about a crypto project. These views turn out to be right more often than not. Projects that draw negative comments should be strictly avoided. Follow the community supporting the project, and if it has a negative perspective on the project, then one should certainly avoid investing. The right ICO project has an active community backing it firmly.

Projects that have unrealistic goals often end up failing or being declared fraudulent. Projects that make false claims should never be entertained by investors.

An increase in ICO projects has led to a rise in the number of ICO advisors, which has eventually made room for fake people to enter the field. For example, the ratings given by the ICObench site are unreliable, as they do not involve any scrutiny or research before the information goes live on the platform. The site has been engaged in payment-for-rating scams, endorsing a few fraudulent projects like Veio.

Hence, it is imperative for investors to be wary of the projects they are putting their funds into. With patience and a diligent approach, one can easily make suitable investments and earn good profits.

Read the original:
A Guide on How to Identify Cryptocurrency and ICO Scams - CryptoNewsZ

The open source decade, fueled by cloud and GitHub – TechRepublic

Commentary: The last decade has been open source's most productive by far. Find out why Matt Asay considers it a Cambrian explosion of choice and innovation.


If the 2000s were the years when open source battled for survival with old world hegemonies, the 2010s was the decade when open source "won" and began to drive most every modern technological innovation. From cloud to mobile to big data to data science, open source has been at the heart of these and other mega trends since 2010 and, as such, has encouraged contributions from even its most stalwart foes.

On that note, let's look at the most important open source stories of the last decade, starting with the place where much (though not all) open source lives: GitHub.


"GitHub changed everything...Nothing else [comes] close [in importance]," declared Red Hat's Andrew Shafer. Git, of course, has been with us since 2005, but GitHub, founded in 2008, made Git usable by the masses. Git wasn't the first version control system, and GitHub was not the first place open source code was kept (remember SourceForge, Google Code, etc.?), but GitHub steamrolled them all.

The secret of Git(Hub)? People.

As Cloud CMS founder Michael Uzquiano has stressed, "[T]he facility of pull requests via systems like GitHub...really delivered on the promise of code being open." Buried in Uzquiano's comment is the importance of the person on the other end of that pull request. Hazelcast's David Brimley takes this further, arguing that "fully integrated tooling like wikis, actions, CI/GitLab" enabled distributed open source teams to grow. In other words, version control, as important as it was, lacked the social aspect that GitHub offered. Open source became open collaboration, and that made all the difference.
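To make the pull-request mechanic Uzquiano describes more concrete, here is a minimal, hedged sketch of opening a pull request through GitHub's public REST API using Python's requests library. The repository, branch names, and token are hypothetical placeholders, not anything referenced in the article.

```python
# Sketch: opening a pull request through GitHub's REST API.
# The owner/repo names and token below are placeholders, not real values.
import requests

GITHUB_TOKEN = "ghp_your_token_here"              # placeholder personal access token
OWNER, REPO = "example-org", "example-project"    # hypothetical repository

def open_pull_request(title, head_branch, base_branch, body=""):
    """Create a pull request merging head_branch into base_branch."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls"
    response = requests.post(
        url,
        headers={
            "Authorization": f"token {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "head": head_branch, "base": base_branch, "body": body},
    )
    response.raise_for_status()
    return response.json()["html_url"]  # link to the newly opened pull request

if __name__ == "__main__":
    print(open_pull_request("Fix typo in README", "fix-readme-typo", "main"))
```

The point of the sketch is the social workflow it automates: anyone can propose a change, and the maintainer on the other end reviews and merges it.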

It's therefore not surprising that the developer world held its breath when Microsoft announced in mid-2018 that it had acquired GitHub for $7.5 billion. In 2008, such a deal would have been unthinkable. Microsoft, for example, still hadn't donned its hair shirt for years of calling Linux a "cancer" and open source "un-American." In late 2009 I wrote on sister site CNET, "[Steve] Ballmer needs to learn to speak to developers or risks ruining the house that [Bill] Gates built." Microsoft looked likely to spend the next 10 years much like its last: Fighting the open source risk.

Instead, it changed. Almost completely.

From open source zero to open source hero, Microsoft has become the world's largest open source contributor (measured in terms of employees actively contributing to open source projects on GitHub). Partly this came down to a change in CEO, with Satya Nadella more developer-friendly than his predecessor, but much of it was simple self-interest: Microsoft was a developer-oriented platform company. If it wanted to remain a "going concern," it needed to be concerned with what developers wanted.

And they wanted open source. Oh, and cloud.

Cloud undergirds pretty much every open source trend of the past 10 years. (Disclosure: I have worked for AWS since August 2019.) Without cloud, there would be no GitHub, no modern CI/CD toolchains that have done so much to foster open source development, no dramatic rise in containers, etc. Just as open source gave developers an easy path to exceptional software without detouring through Purchasing or Legal, so, too, did cloud enable developers to spin up the hardware necessary to run open source software for relatively little, without waiting for IT to provision servers.

Cloud, in short, completes open source in ways that Tim O'Reilly anticipated back in 2008. It has enabled the Cambrian explosion of innovation in open source over the decade.


Indeed, it was the cloud that really fueled the accelerated rise of open source, even as open source gave rise to cloud. Yet one of the biggest stories of the decade was the sometimes uneasy alliance between cloud and open source. As I wrote in 2018, commercial open source vendors sought to block cloud vendors from distributing their open source code, experimenting with a number of license changes, even as they told their investors (see here and here), "We haven't seen [cloud competition] really affect any of our metrics, when it comes to downloads, community adoption, or...our sales numbers." As we leave the decade, there are faint signs of a thaw.

Against this backdrop of cloud as the infrastructure enabler and GitHub as the locus for development, so many cool things have happened with open source since 2010.

As important as back-end infrastructure development was (e.g., Docker revolutionized application development through containers, yet ultimately the company failed to profit from it), front-end development for mobile and web exploded. Within the enterprise set, we may like to fixate on Kubernetes and containers, but open source front-end development technologies like Angular and React touch far more developers, as AWS' Ian Massingham has pointed out:

Kubernetes: 60.2K stars (43.6K repos on search term)

Vue: 152K stars (324K repos)

React: 140K stars (1M+ repos)

Node.js: 65.8K stars (746K repos)

Angular: 54.3K stars (672K repos)

Perhaps ironically, one of the key stories here is just how much of a "brutal, feral space" JavaScript frameworks remained throughout the decade, as Diffblue CEO Mathew Lodge suggested. Whenever it seemed that React or Angular or something else was going to claim top honors, a new JavaScript framework emerged to challenge it. At the same time, every new framework or programming language had to become open source or fail. Even Apple, which sometimes eschewed open source, eventually decided to release its Swift language as open source.


The same is true of the exploding data infrastructure world. Apache Hadoop was all the rage and then gave way to Apache Spark, which gave way to...the list goes on. Indeed, the pace of innovation within data science has been so pronounced that it has become almost pointless learning how to pronounce the names of new open source data infrastructure projects as they have their 15 minutes of fame. RedMonk analyst James Governor argued that we were entering the polyglot era of software development, and the decade confirmed that view at every turn.

Especially databases. While the world spent decades storing data in (mostly) relational databases (RDBMS), developed by a few enterprise IT vendors, in late 2009 the launch of MongoDB sparked significant changes in how developers viewed their database options. Instead of relying on the RDBMS to manage increasingly "big data," with its unprecedented variety, volume, and velocity, developers embraced an array of so-called (and almost entirely open source) NoSQL databases, including document databases, key-value stores, graph databases, time series databases, and more.
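As a quick illustration of why developers found the NoSQL model appealing, here is a minimal sketch using MongoDB's Python driver, pymongo. The connection string and collection names are hypothetical; the point is simply how a flexible document replaces a rigid relational schema.

```python
# Sketch: the document-database model MongoDB popularized, via pymongo.
# Connection string, database, and collection names are illustrative only.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
products = client["shop"]["products"]

# Unlike a fixed relational schema, each document can carry its own shape.
products.insert_one({
    "sku": "A-100",
    "name": "USB-C cable",
    "price": 9.99,
    "tags": ["accessories", "cables"],             # arrays are stored natively
    "specs": {"length_m": 1.0, "color": "black"},  # nested sub-documents, no JOINs needed
})

# Query by a nested field, something that would span several tables in an RDBMS.
for doc in products.find({"specs.color": "black"}, {"_id": 0, "name": 1, "price": 1}):
    print(doc)
```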


Even as developers exulted in this smorgasbord of choice, RDBMS PostgreSQL started its own resurgence. PostgreSQL never attained quite the status of its open source sibling, MySQL, yet over the decade PostgreSQL grew to become the fourth-most popular database, according to DB-Engines. PostgreSQL became hot in the past decade, yet remains the unsung hero of data.

Which is a good place to end. Most of the decade's hottest open source technologies, and the stories that accompanied them, were all about change. PostgreSQL, by contrast, demonstrates one of the other wonderful things about open source: How projects can evolve to meet new use cases. Linux has demonstrated this with operating systems, and PostgreSQL is doing the same in databases. From 2010 until 2020 the explosion of new open source choices is mind-boggling, yet the persistence of PostgreSQL is comforting, reminding us that open source can be whatever we need it to be.


View post:
The open source decade, fueled by cloud and GitHub - TechRepublic

Open source developers say GitHub must terminate its contract with ICE — or else. – Los Angeles Times

Since at least September, employees of GitHub have been pressuring the Microsoft-owned code repository to terminate its contract with Immigration and Customs Enforcement, without success. Now they're getting reinforcements from a constituency that could have more clout.

In an open letter published Wednesday on GitHub, software developers representing the open source community joined the call for GitHub to immediately cancel the $200,000 contract with ICE.

"Open source is about inverting power structures and creating access and opportunities for everyone," the letter, signed by 167 developers at the time of publication, reads. "We, the undersigned, cannot see how to reconcile our ethics with GitHub's continued support of ICE. Moreover, your lack of transparency around the ethical standards you use for conducting business is also concerning to a community that is focused around doing everything out in the open."

Open source software is made up of source code that is free to be used, distributed and modified by anyone; examples include parts of the Firefox browser and the Ethereum blockchain. Although much of the code stored on GitHub is open source, the rest of it is often stored privately or available only for a licensing fee.

Notably, the developers behind the letter stop short of threatening to boycott the platform, which plays an increasingly indispensable role in projects that require collaborating around code. Some say they now feel they're stuck with a company they are no longer morally aligned with.

In airing their demands openly, the developers borrow a tactic that has worked in the past. Four years ago, hundreds of unsatisfied open source contributors put their names to a letter, titled "Dear GitHub," criticizing the company for ignoring their requests for new features and fixes for broken ones for years. The company went above and beyond to remedy their issues, according to the newly published letter.

GitHub pays careful attention to its open source contributors, said Don Goodman-Wilson, who worked as a developer advocate at the company.

"The Dear GitHub letter has been quite influential on the way that we approach product design," said Goodman-Wilson, whose job entailed persuading people to use the company's open source services. "We have teams that work specifically on features for open source developers. They don't pay for our software. There's not money to be made in doing this, but we take it very seriously nonetheless."

On Monday, Goodman-Wilson tendered his resignation, saying he felt he could not ask developers to use the platform given GitHub's contract with ICE.

"I am deeply concerned about the damage to my own reputation from defending GitHub," he wrote in a letter to his co-workers. "Leadership has made clear to me personally that they will not change course."

His is the seventh resignation over the contract since October.

GitHub staffers have been agitating internally and publicly since the company renewed its contract with ICE in September. After employees published their demands at the beginning of October, the company said it would donate $500,000 to nonprofits that help communities affected by the Trump administration's immigration policies. Chief Executive Nat Friedman also said that though he disagreed with the immigration policies ICE is enforcing, canceling the contract would not persuade the Trump administration to change them.

Friedman's statements failed to quell the dissent. As employees continued to challenge the relationship in meetings and other venues, GitHub Chief Operating Officer Erica Brescia said that barring ICE from access to GitHub could actually "hurt the very people we all want to help," as The Times first reported.

In response to several requests for comment in the last two months, the company has directed The Times to its original memo published in October. The company did not respond to questions about the developers' letter or Goodman-Wilson's resignation.

Complicating matters is GitHub's ubiquity in the developer community and the difficulty of switching to another platform less popular with collaborators.

"I think that knowledge that GitHub has that their platform is somewhat of a monopoly within this system, at least in terms of influence, is critical to the fact that they can be somewhat arrogant in the way that they're responding to this," said Tatiana Mac, a product designer and developer who signed the open letter.

Still, boycotting the platform remains an option, according to Mac and other signers such as David Heinemeier Hansson, the creator of Ruby on Rails, the web framework GitHub was initially built on. "GitHub has almost innumerable benefits from the fact that they're seen as the de facto place for open source hosting," Heinemeier Hansson said. "But that can absolutely change. I think that it'd be very foolish for them and for their owners to jeopardize that position."

But for those who have contributed and maintained code on GitHub for several years, Heinemeier Hansson conceded that it would be difficult to migrate all the repositories to another platform.

"I hope it doesn't get to that," he said. "If we can get GitHub to change their mind and change their actions and so forth, that's a far preferable outcome of this rather than just say, 'Well, we're going to take our ball and go home.' But that threat needs to be there at all times."

In the meantime, some developers are finding subtler ways to subvert the company. Marcos Cáceres, a software engineer at Mozilla who also co-chairs the World Wide Web Consortium, which maintains and develops open standards for the internet, said he's been encouraging paying users to suspend their subscriptions and use free services until the company changes its course.

Mat Marquis, an independent consultant, is a part of an experimental sponsorship program that helps open source developers solicit donations for their work creating free software. GitHub matches donations of as much as $5,000 for members of the program.

In protest of the contract, Marquis said he'll be donating the same amount he receives in sponsorships to the Beyond Bond & Legal Defense Fund, a Boston group that helps pay the bonds of people held in ICE detention centers.

"I'm angry and I'm lashing out the way GitHub taught me to," Marquis said. GitHub, he said, feels inescapable as part of the code-writing process.

"For a pittance in tech-money terms, and to appease their parent company's contract pursuits, GitHub has successfully turned its most impassioned advocates into users that are only stuck here for as long as it takes to find something better," he said.

And although he's not ready to call for an all-out boycott, Heinemeier Hansson said he would discourage new users from joining the platform until GitHub responds to the open letter.

"Just see how this plays out," he said. "I think anyone who is starting a product can afford to hold off for a couple of weeks."

See the article here:
Open source developers say GitHub must terminate its contract with ICE -- or else. - Los Angeles Times

Logz.io moves toward application observability in the cloud, raises questions on open source – ZDNet

Logz.io is an interesting vendor. Founded in 2014, it has managed to raise about $100 million and build a team of 250 people to date, on the premise of offering an open-source based solution for log management.

Today at AWS re:Invent, Logz.io announced the latest addition to its portfolio: a cloud observability platform that correlates metrics and logs to speed up investigative work and time to resolution. The platform is based on Grafana and is currently available in beta.

ZDNet caught up with Logz.io CEO, Tomer Levy, to discuss the new platform, business strategy, and open source.

Logz.io calls its cloud observability platform the culmination of its three product offerings. The company's first product was Log Management. The idea was to provide the ELK Stack (Elasticsearch, Logstash, Kibana) as a fully managed service and enhance it with advanced data analytics features to increase developer productivity and decrease time to resolution.

From there, the company went on to offer Cloud SIEM (Security Information and Event Management). Cloud SIEM aims to provide a simple, DevOps-native threat detection and analytics solution, again built on the ELK Stack. In other words, Logz.io went from offering a platform to offering a domain-specific solution on that platform.

Today's offering goes one step further in the platform direction. So far, Logz.io was able to ingest and offer services on data contained in logs. Infrastructure Monitoring adds metrics to the mix.

Levy's contention is that observability is layered on 3 pillars: monitoring, troubleshooting, and security. Logz.io took a progressive approach to addressing these pillars. It started with building the infrastructure to run ELK as a managed service. Then it added integration capabilities to ingest data from various sources, such as databases and servers.

Subsequently, the analytics capabilities were developed, including machine learning. Then, a SIEM solution was built on this foundation - essentially, sophisticated pattern matching, integrating data from various sources. Now, going beyond logs, application metrics are added too.

Logz.io says this enables complete visibility into Kubernetes and distributed cloud workloads. One of the features Levy emphasized was root cause analysis. Although Logz.io does not offer this as an out of the box functionality, Levy said it provides meaningful alerts to users, enabling them to correlate data across the board and investigate.
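To give a rough, vendor-neutral sense of what correlating logs and metrics means in practice, here is a toy Python sketch (not Logz.io's implementation) that joins error logs and latency samples on a shared request ID so an investigator can see both signals side by side. All field names and values are invented for illustration.

```python
# Toy sketch of log/metric correlation: group error logs and latency samples
# by request_id so both signals can be inspected together during an investigation.
from collections import defaultdict

logs = [
    {"ts": 100.2, "request_id": "r1", "level": "ERROR", "msg": "timeout calling payments"},
    {"ts": 100.9, "request_id": "r2", "level": "INFO",  "msg": "ok"},
]
metrics = [
    {"ts": 100.1, "request_id": "r1", "name": "latency_ms", "value": 5200},
    {"ts": 100.8, "request_id": "r2", "name": "latency_ms", "value": 87},
]

correlated = defaultdict(lambda: {"logs": [], "metrics": []})
for entry in logs:
    correlated[entry["request_id"]]["logs"].append(entry)
for sample in metrics:
    correlated[sample["request_id"]]["metrics"].append(sample)

# Surface only requests that both logged an error and showed high latency.
for request_id, signals in correlated.items():
    has_error = any(l["level"] == "ERROR" for l in signals["logs"])
    slow = any(m["value"] > 1000 for m in signals["metrics"] if m["name"] == "latency_ms")
    if has_error and slow:
        print(f"investigate {request_id}: {signals['logs'][0]['msg']}")
```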

The new platform utilizes Grafana, which is interesting for several reasons. As engineering teams build and ship code faster, they employ technologies such as Kubernetes and serverless, resulting in application stacks that are distributed, abstracted, and difficult to monitor.

As a result, achieving observability in modern IT environments has become cumbersome, and time-consuming. To solve this issue, engineers prefer open-source tools, because they are accessible, easy to set up, community-driven, and purpose-built. Also, they are cloud-native and easy to integrate with modern infrastructure.

The above is a near-verbatim excerpt of Logz.io's press release, yet few people would disagree with that. Time and again we have referred to the fact that open source is becoming the new normal, also in enterprise software. Logz.io chose to build on ELK and Grafana not just because they were available, but also because it wanted to capitalize on them.

One of Logz.io's dashboards for application monitoring, built on Grafana

Levy said that people find Logz.io easy to use because it's built on platforms they are familiar with already. Solid platforms with vibrant communities. For us, this begged the question: Is Logz.io competing with Elastic and Grafana Labs, the vendors who build ELK and Grafana? Is Logz.io giving something back to those communities, or should it, and how?

As far as the competition part is concerned, Levy's answer was straightforward. He admitted there is some kind of competition, but his point of view is that both ELK and Grafana are horizontal, domain-agnostic solutions, and neither of those is enough in itself. By contrast, Levy said, Logz.io is a vertical, domain-specific, best-of-breed solution.

As far as giving back goes, Levy said Logz.io is (by its estimate) the number one contributor in ELK in terms of content. Levy said Logz.io people contribute in terms of things such as tutorials, documentation and the like.

Certainly, Logz.io is an interesting solution. The company shows healthy growth, and the platform makes sense. But the strategy also raises some questions. Many people, ourselves included, have elaborated on how cloud vendors taking open source software and offering it as a managed service are in effect competing with the vendors who build the software.

Is what Logz.io does much different? Elastic, for example, is also offering a SIEM solution, covered recently by fellow ZDNet contributor Tony Baer. Grafana Labs recently secured funding to develop a platform along the lines of the one Logz.io unveiled today. And it sounds like even the technical architecture partially overlaps, for example in using Jaeger, which Grafana includes in its "big tent" approach.

Grafana Labs is building a platform along the lines of what Logz.io unveiled today

Where does one draw the line between competing in the same market, or overlapping markets, and unfair competition? Is contributing content enough to make up for it all? Would Elastic and Grafana Labs, or others who may find themselves in their shoes, be justified to react by changing their licenses to prohibit what the Logz.io of the world do? What would happen then?

We don't have the answers to those questions. What we see emerging here, however, is a cause for concern. Some vendors focus on building open source infrastructure, while others take this infrastructure and provide value-add services on top of it. For them, the cost of building this infrastructure is an externality that enables them to compete efficiently.

There's nothing wrong with building value-add services. But what happens if the builders do not just skip giving back to the infrastructure, but also compete against it? Would Logz.io, for example, be happy to take over (part of) the engineering and R&D cost for ELK and Grafana, or share what is built on top of those? The answer seems to be "no".

The question is then, is this sustainable? If everyone does this, the infrastructure is either going to collapse or end up being appropriated. If this is not something we want to see, we need to talk about open source licensing and monetization.

ADDENDUM, December 4, 2019: Following the publication of the article, we received the following statement from Logz.io CEO Tomer Levy:

"We'd love to contribute more to any part of these communities and we already do so. We have dozens of open-source repositories we contribute back to the community with. We're investing a significant part of logz.io engineering time in open-source projects such as Apollo, Sawmill and many other significant parts of the ELK open-source stack were developed by us. These are 100% open source" .

Read more:
Logz.io moves toward application observability in the cloud, raises questions on open source - ZDNet

How Open-Source Product Information Management is Bringing SMBs On a Level Playing Field with Big Tech Firms? – MarTech Series

The customer journey is no longer a linear path. The rise of Omnichannel as the defining retail strategy of the early 21st century has replaced the typical funnel-based buyer journey with a multi-layered experience that moves across touchpoints. Today's enterprises are dealing with this shift by transforming their data management strategies and using smart insights to tailor a richer, more personalized customer experience (CX) across the various channels open to today's customers. And through it all, customer expectations continue to soar.

Within this new paradigm, it's easy to see how poor data management and the lack of a personalized content strategy can wreak havoc with your CX. This is especially true for small- and medium-sized businesses (SMBs), where the Omnichannel world can sometimes seem like a bewildering mix of costly integrations and security challenges. Ideally, the first step for any SMB looking to compete with larger enterprises is to invest in a Product Information Management (PIM) platform to support their CX goals. But many of them imagine an excellent PIM system to be bundled with extensive proprietary software costs, which makes it harder for them to keep pace with bigger, resource-rich organizations. Besides that, many proprietary PIM systems may not suit the needs of ingenious start-ups and small businesses that are working to disrupt their own business processes with a view to attaining greater efficiency and innovation.


And that's why open source PIM software is such a game-changer. For the struggling SMB, it offers a wealth of positives, from easing costs to supporting a richer selection of integration-compatible products. Here, we'll explore how open source PIM can help smaller enterprises improve their CX, build a foundation for Omnichannel, and successfully compete in global markets.

Even today, many enterprises have a tangled web of processes and technologies, some of which are integrated across the organization while others remain siloed. The extensive configurability of open-source PIMs means that businesses can customize the software to fit their existing and future needs, instead of having to realign their operations around the PIM platform. Additionally, many open-source PIMs have single-source-of-truth data functions, which enable them to import data from multiple applications and data feeds. This functionality allows SMBs to support the many complex processes involved in omnichannel with accurate information and democratizes data access across the enterprise.

One of the primary benefits of using open source software is access to the underlying code. In the context of PIM, this access allows SMBs to tailor the platform to their specific business needs. It also offers a chance to create custom data flows and data capture points, which are essential when working with disparate online and offline channels, IoT hardware, and emerging technologies. In comparison, while proprietary PIM systems may come with a host of mature features, they rarely offer much by way of flexibility when it comes to configuration.

The API-based integration methodology of open source PIMs is a boon for SMBs, given the great amount of latitude it allows in how you connect your existing systems together. Specifically, open-source APIs help you customize the data connectors between front-end systems, product databases, ERPs, PoS systems, customer data storage, content management platforms, catalog building software, and thousands of other internal and third-party applications. Furthermore, you can add any number of APIs to a virtually unlimited number of systems, as long as you have the bandwidth to support them. Very few proprietary systems, if any, offer the same level of interoperability that an open-source PIM platform delivers.
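As a sketch of the kind of API-based connector described above, the snippet below pulls product records from a generic PIM's REST endpoint and pushes them to a storefront. Every URL, field name, and credential here is a hypothetical placeholder rather than the API of any real PIM product.

```python
# Hypothetical sketch: syncing product data from an open-source PIM to a storefront
# over REST. Endpoints, field names, and tokens are illustrative placeholders.
import requests

PIM_API = "https://pim.example.com/api/v1"          # placeholder PIM endpoint
SHOP_API = "https://shop.example.com/api/products"  # placeholder storefront endpoint
PIM_HEADERS = {"Authorization": "Bearer <pim-api-token>"}
SHOP_HEADERS = {"Authorization": "Bearer <shop-api-token>"}

def fetch_products(updated_since):
    """Pull products changed since a given timestamp from the PIM."""
    resp = requests.get(f"{PIM_API}/products", headers=PIM_HEADERS,
                        params={"updated_since": updated_since})
    resp.raise_for_status()
    return resp.json()["items"]

def push_to_storefront(product):
    """Map PIM fields onto the storefront's schema and upsert the record."""
    payload = {
        "sku": product["sku"],
        "title": product["name"],
        "description": product.get("description", ""),
        "price": product["price"],
    }
    resp = requests.put(f"{SHOP_API}/{product['sku']}", json=payload, headers=SHOP_HEADERS)
    resp.raise_for_status()

for item in fetch_products("2019-12-01T00:00:00Z"):
    push_to_storefront(item)
```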

Besides helping you create richer customer experiences and manage data, a great PIM system can help you automate hundreds of tedious tasks, freeing up your team's time to focus on building innovative customer experiences. From merchandising and catalog building to data structuring and workflow collaboration, your PIM platform can be configured to do it all. But as any tech-savvy business leader will tell you, automation should be done in stages. By using an open-source solution, you'll be able to add automation by degrees and on your own terms, building approval-based workflows and notifications into every process, instead of having to rely on pre-configured protocols.

Since there's no licensing fee for open-source PIM software, there aren't any costs involved at that level. As far as implementation, customization, and maintenance are concerned, they can be carried out either in-house or with the help of your preferred IT partner. For an SMB with a relatively tight budget, the total cost of ownership is a crucial factor in any IT implementation, and open source PIM systems help you keep it down. Moreover, the ability to control your PIM-related operating expenditure (OpEx) also gives you more room to compete with larger technology firms and enterprises.


As detailed above, open-source PIM systems are an absolute blessing for SMBs looking to move toward an Omnichannel customer engagement strategy. This is especially true for organizations that have strong IT teams or partners, because successfully customizing and implementing open source software often requires specialized IT knowledge. Luckily, even if you don't have the right skill sets on hand, you can simply outsource the implementation to the PIM system provider or a third-party IT services firm. At the same time, you may also consider nurturing software development talent in-house to achieve the best results at optimal cost.


Read the rest here:
How Open-Source Product Information Management is Bringing SMBs On a Level Playing Field with Big Tech Firms? - MarTech Series

Latest Tech News This Week: December 2, 2019 – December 6, 2019 – Toolbox

The week couldn't have been more interesting: Google's legendary co-founders Larry Page and Sergey Brin officially withdrew from their operational roles and handed their bag of worries over to Google CEO Sundar Pichai. Their departure has a lot to do with the antitrust probes technology companies are facing. In the open-source world, a major Python scientific achievement went relatively unreported, an event that underscores how mainstream media's attention is heavily focused on big technology firms. Amazon inches closer to its quantum computing goals by positioning the revolutionary technology as cloud-first. And in other news, Amazon's Arm server chip has caused quite a stir in the market.

Here Are This Week's Top 4 News Stories:

Open Source - March of Open Source Software - Scikit-learn Core Developers Win Major Scientific Prize

Quantum Computing - Amazon's Big Quantum Leap - Launches Braket, a Fully Managed Service

Tech Policy - And the Vilification of Big Tech Continues

Data Center - Amazon Moves Up the Chip Value Chain - Debuts New Data Center Chip

Bonus Question: Open Source is at the heart of AI innovation but it requires funding for sustainability


The power of open source software as a building block for Artificial Intelligence (AI) cannot be denied. One of the most transformative technologies, AI has hugely benefited from open source projects. AI tools and platforms leaping out of R&D efforts are now widely applied across the enterprise landscape. The machine learning landscape, including frameworks, libraries, and components, has also benefited from open source innovation, with a cadre of developers shaping new ideas. Data from Deloitte's Open Source Companies indicates that 71.7% of contributions to machine learning came from individuals, compared to 28.3% from organizations. In addition, 14.6% of projects in the machine learning domain are written in Python. Scikit-learn, a core Python library that provides easy-to-use implementations of standard machine learning algorithms for data science, has 70.9% of its contributions from individuals. For long, open source tools and platforms have served as one of the best on-ramps to emerging technologies. For instance, TensorFlow, an end-to-end machine learning platform developed by the Google Brain team in 2011 and released in 2015, is now leveraged by Intel, AMD, GE Healthcare, Twitter, eBay, Bloomberg, and LinkedIn, among others, to power their core businesses. Last week, the co-creator and contributors of scikit-learn, a core data science tool, were recognized with the Académie des Sciences-Inria prize for transfer for their contributions to the project.
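For readers unfamiliar with why scikit-learn is credited with making standard machine learning algorithms easy to use, here is a minimal, self-contained example of its fit/predict workflow on one of its bundled datasets. It is only a generic illustration of the library's interface, not part of the prize-winning work itself.

```python
# Minimal scikit-learn example: train and evaluate a classifier in a few lines.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load a small bundled dataset and split it into train and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Every scikit-learn estimator exposes the same fit/predict interface.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f"test accuracy: {accuracy_score(y_test, predictions):.2f}")
```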

Backstory: On December 1, Gaël Varoquaux, co-creator of scikit-learn and Inria researcher, blogged about the major scientific win, which largely went unnoticed in mainstream tech media. "A few days ago, Loïc Estève, Alexandre Gramfort, Olivier Grisel, Bertrand Thirion, and myself received the Académie des Sciences-Inria prize for transfer, for our contributions to the scikit-learn project. To put things simply, it's quite a big deal to me, because I feel that it illustrates a change of culture in academia," he wrote.

Here's why it matters in the open source world: the researchers focused on building software rather than pushing their own publications within the project. They preferred Python over Matlab, the language then favored for academic machine learning. They reached out to the wider community, even non-experts, to align development priorities. Consequently, scikit-learn has now become an industry standard and is widely leveraged by a number of internet giants, who use it to power product recommendations, detect fraud and spam, and understand user buying behavior. For scikit-learn's community and its 1,500 contributors, the prize is a huge recognition of their impact on the wider tech ecosystem.

Big Picture: Open source projects serve as building blocks. Especially in the broader machine learning ecosystem, where tools and platforms are still maturing, open source communities can accelerate development and help enterprises build next-gen capabilities. In the realm of machine learning, Google and Microsoft have built considerable leadership by democratizing ML and putting tools in the hands of users. We have seen ML patterns deliver value in real-world situations. At a time when most companies are constrained by a lack of talent and datasets, open source projects help make ML easier and faster for companies. But open source software requires investment, and that's why tech companies should collaborate more actively to define development priorities and functionality in line with the industry's requirements.

Our Take: As Varoquaux describes it, open source software wields tremendous impact across disciplines. Take the case of IBM-owned Red Hat, which has become the face of innovation by packaging Linux for enterprise users. While scikit-learn's prize represents a great win for the ML community, moments like this also serve as a reminder to storied tech companies to interact closely with, and fund, the open source community to build next-gen capabilities.

Note to execs - lend your ears to Varoquaux's thoughts.

Bonus thought: The race for quantum supremacy just got more intense with Amazon joining the league


Amazon means business. The everything-as-a-service company is flexing its muscles in quantum computing, a technology domain led by IBM, Google, Microsoft, and D-Wave Systems, all striving to make quantum computing a reality. While the technology is a decade away and has yet to mature, tech enterprises and even financial institutions are dabbling in quantum computing to enable breakthroughs across sectors. Life sciences firms are exploring the potential of this game-changing technology for accelerating drug discovery and personalized medicine. There's a huge dollar inflow into R&D: in September, IBM announced the opening of the IBM Quantum Computation Center in New York State, while Google claimed quantum supremacy with its latest research paper, "Quantum supremacy using a programmable superconducting processor," published in Nature in October. Microsoft advanced quantum software architecture by introducing Q#, a quantum-focused language. Amid this fast-paced development, how could cloud behemoth Amazon lag behind? Now, Amazon is playing to its strengths by making quantum computing hardware accessible through the cloud.

Backstory: On December 2, Amazon announced the ambitious Amazon Braket, a fully managed AWS service that allows scientists, researchers, and developers to access quantum technologies from different providers like Rigetti, IonQ, and D-Wave through Braket. According to Charlie Bell, Senior Vice President, Utility Computing Services, AWS, quantum has the potential to be a cloud-first technology. "With our Amazon Braket service and Amazon Quantum Solutions Lab, we're making it easier for customers to gain experience using quantum computers and to work with experts from AWS and our partners to figure out how they can benefit from the technology," he said. In addition, the tech firm also announced the launch of the AWS Center for Quantum Computing, set to play a pivotal role in forging academic and industry partnerships to advance developments in quantum computing.
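For a flavor of what accessing quantum circuits through a developer SDK looks like, the sketch below builds a two-qubit Bell circuit with the Amazon Braket Python SDK and runs it on the SDK's bundled local simulator rather than any managed hardware backend. This reflects the publicly documented SDK as best we understand it, so treat the exact calls as an approximation that may vary across versions, not a definitive reference.

```python
# Sketch using the Amazon Braket SDK (amazon-braket-sdk): build a Bell-pair circuit
# and run it on the bundled local simulator. API details are as best understood from
# the SDK's public documentation and may differ across versions.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Hadamard on qubit 0, then CNOT 0 -> 1, producing an entangled Bell state.
bell = Circuit().h(0).cnot(0, 1)

device = LocalSimulator()
task = device.run(bell, shots=1000)
result = task.result()

# Expect roughly half "00" and half "11" measurement outcomes.
print(result.measurement_counts)
```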

Big Picture: Quantum computing holds tremendous promise. Besides technology firms, majors like Honeywell and Intel are running active quantum programs to speed up development across the quantum stack -- hardware and software architecture. Consulting majors Accenture and Boston Consulting Group (BCG) are ramping up their quantum consulting practices. Over the last two years, there has been an uptick in VC interest, with investors betting big on quantum hardware and software startups. In 2018, Deloitte predicted that it is one of the largest tech revenue opportunities, with interest in the quantum computing market comparable to that in supercomputers. The quantum computing market is expected to touch $780 million by 2025, a 2019 report by Quantum Computing Strategies indicates. The market is heavily dominated by about 40 firms, which include startups, semiconductor players, and tech heavyweights.

Our Take: With the need for computational power going up, quantum computing delivers the promise of exponential speed-ups and can meet the needs of future applications across Industry 4.0, logistics, energy, telecommunications, manufacturing, and more. While the promise seems real and industry giants are working to turn research into reality, the key question is whether the technology can be commercialized within the time frame promised.

Bonus question: Are antitrust laws and regulations the way forward for large tech companies?


The biggest management change in the history of Big Tech, the legendary Google founders stepping away from Alphabet Inc., has sparked a feverish debate on the oversized power technology firms hold and how they should be held accountable for falling short of their world-changing goals. The effects are being felt across Silicon Valley: on December 2, Twitter updated its global privacy policy to help users understand how their data is leveraged by advertisers. The social media giant is also doubling down on the wave of fake news and synthetic media infiltrating the internet. Earlier this year, Cupertino-based giant Apple updated its privacy policy, simplifying it for users to understand. To enable more data portability, Facebook recently introduced a photo-sharing tool that will be rolled out globally in 2020. Dismantling companies has become the biggest mandate for policymakers and ethicists on both sides of the Atlantic. If the underlying motive is to limit the power of tech companies, it looks like the efforts of regulators are paying off. A spate of public controversies in the last week shows signs of tech firms breaking under pressure.

Backstory: The European Commission announced on Monday that it is opening an investigation into Google's and Facebook's data collection practices. As part of the investigation, EU regulators sent a questionnaire to both companies to probe their data practices and how they profit from them. At the heart of the probe is the role played by technology companies in distorting privacy, spreading misinformation, and limiting competition. A day later, Google's founders stunned the world by announcing they are stepping down from their respective roles at Alphabet Inc. and handing management over to long-time Googler Sundar Pichai, who will now do the bulk of the heavy lifting at Alphabet. Some of Google's other public controversies include the rash firing of four of its employees and the recent data-sharing partnership with Ascension, which operates a chain of 2,600 hospitals. Facebook is facing increased pressure to restrict political ads hosted on its platform. Meanwhile, Amazon's e-commerce business is being scrutinized by the U.S. Federal Trade Commission.

Big Picture: The outsized power wielded by technology companies has had a swathe of unintended consequences. From disrupting democracy and breaching people's privacy to stifling competition, tech firms are no longer seen as do-gooders on a mission to change the world. Policymakers and regulators believe tech firms have become monopolies with too much power: they are defiant, they stifle innovation, and they need to be brought in line. But here is the key point: Big Tech's big ideas are a reality today. Drones, driverless cars and Loon (a network of balloons that provides internet access) are changing the way the world operates. What Big Tech requires is strong leadership to address the myriad issues that have drawn Capitol Hill's attention.

We've made progress in important areas such as hate speech, child exploitation imagery, and posts about illegal drugs and firearm sales but we still have more to do. We'll continue to invest in people and systems to combat harmful content on our platforms. Facebook (@facebook) November 13, 2019

Our Take: Big Tech companies have a massive impact on society and have helped shape global economies. Big Tech has fuelled the jobs economy and ushered in a wave of digital transformation, and technological innovation is closely tied to a nation's progress. While toppling Big Tech has become the biggest agenda item on both sides of the Atlantic, the question is whether antitrust laws are the best way to chasten the tech behemoths.

Bonus question: What will happen when hyperscale cloud providers enter the data center business?


Internet giants changed the rules of the game by opening a door into the cloud market and growing their revenues rapidly. Now the top hyperscale cloud providers - Amazon, Microsoft, Google and Alibaba - are making inroads into the data center market, long dominated by Intel and IBM. Cloud is now part of every organization's extended infrastructure, with enterprises tapping into AWS, Microsoft Azure and GCP for AI and machine learning capabilities to serve particular use cases. Cloud service providers (CSPs) have also ushered in a new paradigm in computing by pushing into custom silicon development. Google demonstrated technology leadership by building TPUs to speed up neural network computations behind the scenes. Amazon kicked off the hardware trend with the acquisition of Israeli startup Annapurna Labs in 2015. In 2018, the cloud leader announced the Graviton processor to power its data center workloads, and the company is now launching a second-generation chip, a Reuters report indicates. Check out the details below.

Backstory: AWS introduced Graviton in 2018 to reduce operating costs and improve performance. Reuters reported on a second-generation chip, based on Arm's Neoverse N1 technology, that can deliver significant performance gains. On December 3, AWS evangelist Jeff Barr shared details of the new Graviton2 processor, which is built using a 7 nm (nanometer) manufacturing process and can deliver up to 7x the performance of the A1 instances.

Arm's leadership in the mobile ecosystem is well known, with the company commanding more than 95% market share in smartphone processors. Its Neoverse infrastructure ecosystem is thriving thanks to strategic partnerships with AWS, Nvidia, Marvell and Fujitsu, among others. In fact, the Amazon partnership has helped Arm move up the computing value chain by competing with data center heavyweight Intel. Intel, which leads the data center CPU space with roughly 95% market share, already faces stiff competition from AMD. While industry analysts are deeply divided over whether Arm chips can match the performance of Intel's x86-based processors, Arm-based Amazon general-purpose processors could very well meet future needs and deliver better power efficiency.

Coming Soon Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances - https://t.co/qzivRrtmYL #awsreinvent Jeff Barr (@jeffbarr) December 3, 2019

Big Picture: When digital natives turned cloud providers, they leveraged their technology leadership to build accelerators: specialized chips such as ASICs, FPGAs and GPUs that power new deep learning and machine learning workloads, both at the edge and in cloud data centers. While Nvidia started the GPU wave, the hyperscalers Amazon, Google and Microsoft built a new business model around workload-specific accelerators that goes beyond GPUs. With the adoption of AI accelerators soaring, the market has tilted toward CSPs for cloud, compute and even the networking building blocks. The data center is the next battleground for hyperscale cloud providers to innovate in, with CSPs striving to give enterprise customers more choice.

Our Take: If the goal of the major cloud providers is to push into the data center market, the CSPs are on the right track. It looks like a natural progression for CSPs, which are now moving on from custom silicon development to building general-purpose processors for the data center market. We believe CSPs are better positioned than the incumbents to keep up with technology roadmaps.

We drill down into the newsworthy content doing the rounds on Twitter. Here's what we drummed up this week: the fervor around Big Tech's new role as surveillance giants has inspired a spate of companies to update their privacy policies and make privacy by design a key feature of their products and services. Enter Cliqz, an independent search engine that can perhaps show everything that's wrong with Google. Is DuckDuckGo listening?

With 93% of the search market in Google's bag, independent search engines provide ground for diversity. Here's a message from Cliqz's creators: let's choose privacy over convenience. Try the beta here!

What did you think of this weekly tech news roundup? Let us know on Twitter, Facebook, and LinkedIn. We'd love to hear from you!

Continued here:
Latest Tech News This Week: December 2, 2019 - December 6, 2019 - Toolbox

ACEINNA wins award for autonomous car navigation and guidance technologies – Robotics and Automation News

ACEINNA has received an AVT ACES Award from Autonomous Vehicle Technology Magazine for its industry-leading OpenIMU technology platform for autonomous vehicle guidance and navigation.

AVT Magazine awarded the ACES Award to ACEINNA's OpenIMU open-source software stack, spotlighting it as an affordable and easy-to-use way to develop navigation solutions for many types of autonomous vehicles.

AVT Magazine also recognized ACEINNA for developing innovative products and for demonstrating industry leadership.

Mike Horton, CTO of ACEINNA, says: "Our professional-grade inertial measurement solution features a combination of the affordable OpenIMU300 hardware and an open-source software stack that simplifies the integration of automotive sensors, depending on customers' specific application needs."

It has been designed to meet diverse end-user industry needs. ACEINNA's OpenIMU supports GPS/global navigation satellite system (GNSS) solutions, enabling precise navigation and self-localization at a lower cost than competing traditional solutions.

By using a robust, professional-grade, customizable open-source software stack and easy-to-integrate hardware, the OpenIMU platform simplifies and modernizes navigation system development.

OpenIMU includes thorough documentation and simulation tools, making the environment a truly unique solution for developing advanced localization and navigation algorithms.

OpenIMU allows for the online simulation of inertial sensors directly through a web interface. In addition, users can leverage the software stack to program the sensors to match their respective application needs and then target the hardware to run those algorithms directly on their vehicles, machines, and systems.

OpenIMU is a very robust solution. Unlike competing IMUs with limited algorithm frameworks, the flexible OpenIMU supports an open-source code base that simplifies sensor integration, simulation, algorithm verification, new algorithm creation, and direct deployment on the OpenIMU300 hardware.

With one complete solution, ACEINNA's customers can affordably leverage a wide range of sensor technologies to meet their advanced autonomous navigation and self-localization needs without compromising on accuracy.

Enabled by the inertial navigation software stack, this innovative, high-performance, and cost-effective IMU provides unmatched accuracy for a wide range of self-localization and navigation applications.

ACEINNA says that, unlike competing products, OpenIMU is universal in its applications and can be installed in all types of moving vehicles.


Read more:
ACEINNA wins award for autonomous car navigation and guidance technologies - Robotics and Automation News

Linux and the first-time developer’s journey – New Electronics

Linux is clearly a popular solution. In fact, it has been reported that Linux is used on every supercomputer in the Top500 project. Thanks to Android, which is built on the Linux kernel, it is also one of the most widely deployed general-purpose operating systems. Its open nature means that anyone can take Linux and configure it for use on a particular hardware platform, which has also made it popular for embedded applications.

While the level of adoption varies across industries, the underlying trend is that Linux is present across them all. In data centres, adoption is almost complete and the focus now is on increasing the efficiency of Linux-based systems. In the medical market, the pace of adoption is once again picking up after some false starts. In automotive and transportation, Linux is well established in the infotainment part of the market but is rapidly spreading to other areas. The consumer sector, of course, is adopting Linux very actively, increasingly in resource-constrained devices such as IoT products and wearables.

There are also opportunities to use open source during the development of safety-critical systems. These systems need to undergo certification for whatever software they implement, and they need to do so each time new software is added. While Linux won't be the right fit for every application, especially for these types of systems, it can be very helpful as a starting point for quickly prototyping and working out initial ideas.

The openness of Linux often creates forks in the road for developers, effectively forcing them to make an important decision at a very early stage in the project. They can choose to partner with a software company that offers an enterprise Linux distribution, often together with a hardware platform that the vendor can support. Or they can start with a flexible distribution builder, such as the Yocto Project, develop their own hardware platform and manage the entire process themselves. An engineer's instinct may well be to opt for the latter; it preserves choice, after all. Others may see the benefit in choosing a fully supported design flow, particularly if software development is outside the core strengths of the development team. Both have merits and, as we will discover, both have consequences.

You don't know what you don't know

One of those consequences is cost. Open source software is, of course, available without any upfront cost, and this can be a powerfully persuasive reason to use it. But teams will discover that most projects in general, and embedded projects in particular, come with the additional cost of maintenance and support.

It is very difficult, even for experienced teams, to accurately budget for the cost of maintaining and supporting a system based on technology that isn't developed in-house. That is one of the realities of using any open source technology, whether that is open source software or, as is also common today, open source hardware. More importantly, for inexperienced development teams, the implications of maintaining and supporting open source technology for their own product may not even feature in the technical requirement document, at least not the first time they go through the developer journey.

The reason for this hidden surcharge is simple: an open source project is always in development, an innovation engine that never stops. With many millions of lines of open source code, engineering teams need to decide when and if they are going to follow that development path, or freeze, branch off and maintain their own version of the distribution. In order to take advantage of the (typically security) patches that will be made available, they may need to follow the project.

Choosing a vendor-supported enterprise Linux can mitigate the impact of this; however, it comes with some level of lock-in to that vendor. While it may still be possible to customize the underlying operating system, doing so may invalidate the support that comes with choosing that vendor.

The Yocto Project and You

Remaining portable yet compatible is an important aspect of open source Linux. The Yocto Project is a Linux distribution builder intended for embedded systems that is inherently customizable and portable between the leading instruction set architectures (ISAs) used in embedded devices today.

It is provided with the kernel, essential middleware and a toolchain. The target architecture can be selected at build time, so the same distribution can be ported by the development team to any supported ISA; the decision doesn't need to be made until further down the development process. This allows other aspects of the design to influence the choice of the most appropriate processor, rather than the team being saddled with whatever device is provided as part of a supported package.

Just like Linux in general, the Yocto Project has become extremely popular, forming the basis of a software platform for embedded devices. Such a platform would comprise the kernel itself and the minimum amount of middleware necessary to support the application code. This so-called micro OS, or minimum OS, is where the Yocto Project excels today. Linux's ability to support containers that compartmentalize other middleware is also important, but it is one that can quickly lead to fragmentation of the underlying OS.

Getting started with the Yocto Project is incredibly accessible. The project is cloned from GitHub by downloading the latest branch. Then, using BitBake, the Yocto Project's build system, an image can be made; typically, the first time, this will be an image for an Intel target. The whole exercise takes perhaps half a day, and the experience is positive (a rough sketch of the flow follows below). This will be very encouraging for developers, and perhaps all the positive reinforcement they need in order to settle on using the Yocto Project.
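As a rough illustration of that first build, the following sketch drives the clone-and-build flow from a small Python script. It is only a sketch of the workflow described above: the repository URL, release branch and image recipe are illustrative assumptions (any current Poky release branch and image target would do), and the real work is still done by git and BitBake underneath.

import subprocess
from pathlib import Path

# Illustrative first Yocto Project build, driven from Python.
# Assumptions: the Poky reference distribution, a current release branch,
# and the default qemux86-64 (Intel) machine set in conf/local.conf.
POKY_REPO = "https://git.yoctoproject.org/poky"
BRANCH = "kirkstone"   # assumed release branch; substitute the one you need
WORKDIR = Path("poky")

if not WORKDIR.exists():
    # Fetch only the chosen release branch of the Poky build system.
    subprocess.run(
        ["git", "clone", "--branch", BRANCH, "--depth", "1", POKY_REPO, str(WORKDIR)],
        check=True,
    )

# oe-init-build-env must be sourced in a shell: it creates the build/ directory
# (including conf/local.conf, where MACHINE selects the target architecture),
# after which BitBake assembles a minimal image for that target.
subprocess.run(
    ["bash", "-c", "source oe-init-build-env && bitbake core-image-minimal"],
    cwd=WORKDIR,
    check=True,
)

On a typical workstation the first build populates its caches from scratch, which is where the half-day figure above comes from; subsequent builds reuse those caches and complete much faster.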

Based on this initial experience, tasks can be allocated and a team put together. The estimate is that it will require one person on the OS build, two developers on the user interface, one on connectivity middleware, one more on data analysis algorithms, and two on testing and usability.

From this project team it is clear that the majority of the effort is focused on what makes the product different, while the one thing that doesn't differentiate it, the OS, receives the fewest resources. For many manufacturers, this may be the most alluring reason to use open source software.

Reality Bytes

This scenario describes perfectly what the Yocto Project is intended to provide: a fast route to product development. What isn't clear from this initial experience is what the Yocto Project is not intended to provide, and that is mitigating the risk to your product's long-term stability. After all, like all open source projects, the Yocto Project is an innovation engine that never stops.

Looking purely at support costs, and assuming an average worldwide developer salary of around $100,000 a year, maintaining the product for five years with the equivalent of one full-time developer costs $500,000. The cost of development, based on seven people for eight months, is roughly another $500,000, making the total project cost about $1 million between development and maintenance (the arithmetic is sketched below). In this scenario maintenance is equivalent to 50% of the project cost, and even that is a very optimistic estimate which more often than not proves to be wrong. Experience suggests that for Linux projects in the embedded and IoT markets, the cost of maintenance settles at around five times the initial cost of development over a five-year period. In fact, with tens of millions of lines of code in the Yocto Project, and many developers working on it, it is inevitable that bugs will be introduced and fixed, making it a constantly changing platform. This means the cost of maintenance can rapidly overtake the cost of development; the budget allocated to differentiating the product is consumed by supporting the open source software, which may require some of the engineers working on the UI and connectivity to move over to OS support.
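To make that arithmetic explicit, here is a minimal sketch of the cost model using only the figures quoted above: a $100,000 average salary, seven developers for eight months, one maintenance full-timer for five years, and the five-times-development rule of thumb. The function names and rounding are purely illustrative.

# Back-of-the-envelope cost model for the embedded Linux scenario above.
ANNUAL_SALARY = 100_000  # average worldwide developer salary, USD per year

def development_cost(team_size: int, months: int) -> float:
    # Initial build: team_size developers working for the given number of months.
    return team_size * (months / 12) * ANNUAL_SALARY

def maintenance_cost(fte: float, years: float) -> float:
    # Keeping the product alive: fte full-time developers for the given number of years.
    return fte * years * ANNUAL_SALARY

dev = development_cost(team_size=7, months=8)    # ~$467k, quoted above as roughly $500k
optimistic = maintenance_cost(fte=1, years=5)    # $500k: the optimistic case
realistic = 5 * dev                              # the 5x-of-development rule of thumb

print(f"Development:            ${dev:,.0f}")
print(f"Optimistic maintenance: ${optimistic:,.0f}")
print(f"Realistic maintenance:  ${realistic:,.0f} over five years")

Even the optimistic case matches the development budget one-for-one; the five-times case dwarfs it, which is exactly the point about maintenance overtaking development.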

If the kernel has been modified, which is often the case with an embedded system, then the patches issued to fix generic issues may or may not fix the issues with a branched kernel. This compounds the support issue and contributes to the escalating cost of development and support. In addition, if the product needs to be certified for its end market, changing the kernel or any part of the software may invalidate that certification, meaning it becomes even more difficult to keep patching.

Another important point to consider is the legal requirement to supply licenses for any software that is shipped. This falls on the shoulders of the development team or team leader and covers any and all middleware. In a well-supported project like Linux there is a huge library of middleware solutions to choose from, but not all of it will be viable for international markets, and some of it may be subject to trade embargoes.

The cost of support quickly escalates, and that is even before the product has shipped. Ongoing support will not necessarily be simpler, so the cost of maintenance and support is really an unknown for any manufacturer following the in-house route. Enterprise commercial Linux deployments exist to mitigate this but come with vendor tie-in and lack of flexibility, so is there a better way?

The Yocto Project... with a difference

For embedded applications, choosing a supported enterprise Linux distribution can come with vendor tie-in. If the distribution is oriented towards an enterprise environment, it is probable that the changes normally desirable for embedded projects, such as stripping away unnecessary middleware, will void the support package. Similarly, choosing a distribution that has been ported to a reference design could very well shackle your project to the same processor, which removes choice at the hardware level.

For good reasons, the Yocto Project has become a popular choice for embedded development. But as mentioned earlier, while choosing open source offers many technical benefits, any product development comes with other obligations that go far beyond the platform. For manufacturers targeting the industrial and medical sectors, products will need to be supported for between five and ten years after shipping, as a minimum. That support must cover both hardware (so continuity of supply of the processor is important) and software, which makes being able to patch the kernel equally important.

Wind River is a founding contributor to the Yocto Project, and its Yocto Project-compatible Wind River Linux product is technically indistinguishable from the Yocto Project yet comes with full support that can be applied at any point in the design process. Changes to the kernel will not invalidate this support, and there is never a need to upgrade, because a Wind River Linux subscription maintains a version of your specific distribution; there is effectively no vendor tie-in. It is on these points that commercial Linux distributions are really differentiated. In addition, every Wind River Linux build comes with its own OpenChain compliance envelope, making it much simpler to regulate its distribution.

The developer's journey when first using open source software often starts with a positive experience. However, the realities of product development do not disappear when choosing an open source platform. As discussed here, the challenges can actually increase and, at least for the first-time developer, come as somewhat of a shock. Choosing a robust commercial Linux partner like Wind River can make the difference between a good developer experience and a bad one, and between a project that fails and one that thrives.

Author: Davide Ricci, Director, EMEA Open Source Business, Wind River

Here is the original post:
Linux and the first-time developer's journey - New Electronics