Top 5 things to know about open source and the cloud – TechRepublic

Cloud software is making it harder for open source software companies to turn a profit. Tom Merritt explains the five things you need to know about open source and the cloud.

Open source software has revolutionized how companies work, but cloud software like AWS has been making it harder for open source software companies to make money. When you can get cloud services based on open source software, there's no need to pay a company for the services around that software. Here are five things to know about open source and the cloud.


In the end, it's about how these companies and the open source community adapt to change. Redis Labs and MongoDB are both healthy, successful companies, but the motivation behind open source changed for them and they had to adapt. As cloud services continue to grow, other open source projects may be affected as well. If nothing else, it's interesting to see that open source software principles seem to be holding up during what is a significant test for them.


Image: iStockphoto/yuriz

View original post here:
Top 5 things to know about open source and the cloud - TechRepublic

Ockam raises $4.9 million in seed funding to make it easier for developers to secure and scale their IoT apps – TechCrunch

Ockam, a two-year-old, Bay Area-based company that's selling tools to developers so they can establish an architecture for trust within their connected device applications, has raised $4.9 million in seed funding, including from Core Ventures, Okta Ventures, SGH Capital, and Future Ventures.

The company, whose serverless platform targets IoT development, is led by CEO Matthew Gregory and CTO Mrinal Wadhwa, two cofounders with noteworthy backgrounds.

Before launching Ockam in the fall of 2017, Gregory was an intrapreneur at Microsoft, where he says he helped lead Azure's pivot into open source software and container services. He also spent a couple of years at Salesforce as a product manager and, interestingly, spent a few years as a systems engineer working for Stars & Stripes, a syndicate in the yacht-racing competition America's Cup, where he tells us he led an engineering effort to build the custom systems of sensors, analytics software, and wireless communications tools needed to help the racing team make better decisions.

Wadhwa, meanwhile, was the CTO of another privately held IoT company, Fybr, which promises real-time data analytics capable of decision-making at the edge (versus in the cloud).

Some of what the startup is promising is that, using its technology, IoT systems developers will be able to build more scalable connected systems as well as, crucially, more secure ones. How? Partly through cryptographic keys and partly by assigning credentials to different entities, from devices to people to assets to services.
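The article doesn't describe Ockam's actual interface, so take the following as a generic illustration only: the pattern described above, an issuer key signing a credential that binds an entity (a device, a person, a service) to a set of attributes, can be sketched in a few lines of Python using the "cryptography" package's Ed25519 primitives. Every name and field below is hypothetical.

    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Generic sketch, not Ockam's API: an issuer (say, a fleet owner) holds a
    # long-lived signing key and issues verifiable credentials to entities.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_public = issuer_key.public_key()

    def issue_credential(entity_id: str, attributes: dict) -> tuple[bytes, bytes]:
        # Serialize the credential deterministically, then sign it.
        payload = json.dumps(
            {"entity": entity_id, "attributes": attributes}, sort_keys=True
        ).encode()
        return payload, issuer_key.sign(payload)

    def verify_credential(payload: bytes, signature: bytes,
                          issuer: Ed25519PublicKey) -> bool:
        # Anyone holding the issuer's public key can check the credential.
        try:
            issuer.verify(signature, payload)
            return True
        except InvalidSignature:
            return False

    payload, sig = issue_credential("sensor-042", {"role": "telemetry"})
    assert verify_credential(payload, sig, issuer_public)

Scaled up, each device, person, and service holds its own keys and presents such credentials to the rest of the system, which is broadly the kind of "architecture for trust" being described.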

The company is one of a growing spate of companies hoping developers will increasingly turn to them instead of building out their own software infrastructure.

For example, Particle, a seven-year-old, San Francisco-based platform for Internet of Things devices that has ambitions similar to those of Ockam, recently closed on $40 million in funding in a round that brought its total funding to $81 million.

Ockam raised its seed funding over two tranches, including a $3.2 million round that closed in May and an additional $1.7 million injection from Future Ventures in more recent weeks.

Here is the original post:
Ockam raises $4.9 million in seed funding to make it easier for developers to secure and scale their IoT apps - TechCrunch

Linux phones need to succeed and it isn't just about privacy – SlashGear

Android and iOS may be the dominant mobile platforms today, but there have always been attempts to push other horses into the race. Most of them used the Linux kernel just like Android, but a few were more direct efforts to bring some of the Linux desktop stack to mobile in one form or another. Thanks to changes in the industry, particularly in electronic components and production, there has been a steady rise in such attempts to create true Linux and truly open source phones, with Purism's Librem 5 and PINE64's PinePhone leading the way. These are primarily targeted at a small hobbyist market and at users who value privacy and security above all else. But while those are valid and desirable goals, it's actually important that these Linux phones become more mainstream in order to cultivate a healthier and better mobile market in general.

What is a Linux phone anyway? If you simply take the kernel into account, then Android can be considered at the very least a Linux-based phone. That definition definitely doesn't satisfy Linux users or even Google itself, mostly because Android doesn't fit the image of what the Linux operating system, not just the kernel, represents.

The Linux operating system, or some would say GNU/Linux, isn't simply defined by the Linux kernel or even a singular software feature. It is, instead, defined by an ecosystem of software and of the people who make that software, one that revolves around openness. In other words, open source software, open hardware, and open development. While it's technically possible to run proprietary software on Linux, that's the exception, not the rule. A Linux phone, then, is one that runs primarily on open source software and promotes a culture of openness and collaboration, at least more than Android does, despite being open source.

That open source culture isn't just important for things like privacy, security, or even ethics. It is also crucial to pushing forward mobile technology, which seems to have stagnated because of the market's most important element: profits. Android and iOS are developed by companies that, in the final analysis, are driven by the need for revenue. Any changes to these platforms are made primarily with the goal of selling phones and services in mind. Somewhere along the way, they also become beholden to what paying customers want, or what they believe consumers will want to pay for. In other words, they try to settle on what's safe and popular.

That's not to say companies making Linux phones are running charities. Manufacturing phones requires money, and these organizations need to perform a delicate balancing act between earning enough profit to keep the lights on and not succumbing to corporate greed. But because they are not too tied to the prospect of making huge margins, they are able to play around with features and ideas you will never find on mainstream commercial phones.

Hardware privacy switches, the ability to install any Linux operating system of choice, or even modular, repairable phones are the kinds of things that would put companies like Samsung out of business, at least as far as making phones is concerned. Linux phones, in contrast, have the freedom and the capacity to play around with ideas and test them faster than any Android OEM would dare. They are rich seedbeds of innovation and experimentation that could push mobile technology forward.

The US ban on Huawei should be a wake-up call to both Android and iOS users and developers. It shows how these platforms are practically at the beck and call of one country. It also shows how the open source Android platform has become intricately tied to Google's proprietary software, though with good reason: Google Play Services offers features and capabilities that few are able to match. The challenge for the mobile market is to do exactly that.

Huawei's Harmony OS will try to do that, but it will most likely fail. It will try to match Google's Android piece by piece, and it will be subject to the same limitations and problems that an even more proprietary commercial platform will have to face. Linux users sometimes face the same compatibility problems with proprietary apps and services, but they are not chained to those by nature. There are alternatives available, and the community has the ability to make more when really needed.

Linux phones aren't just about privacy and security. They're also about openness and experimentation. Just as Linux did on the desktop, these Linux mobile devices could challenge the status quo. They can introduce changes not only to the way mobile software is developed but also to how manufacturers and assemblers do business. They can change the mobile landscape for the better, that is, if fans and believers in a truly open ecosystem take the risk of investing in such a future.

Read more:
Linux phones need to succeed and it isn't just about privacy - SlashGear

Cloud Native Computing Foundation Reaches Over 100 Certified Kubernetes Vendors – Container Journal

More than 100 vendors now provide certified, conformant Kubernetes products

SAN DIEGO, November 19, 2019 (KubeCon + CloudNativeCon NA) – The Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, today announced that it has surpassed 100 vendors with Certified Kubernetes products as part of the Certified Kubernetes Conformance Program. A certified vendor is an organization that provides a Kubernetes distribution, hosted platform, or installer.

CNCF runs the Certified Kubernetes Conformance Program to ensure that every vendor's version of Kubernetes, or open source community version, remains conformant and supports the required APIs so users can rely on a seamless, stable experience. The program was created two years ago with 32 founding vendors. To become certified, vendors use an automated test suite to demonstrate conformance, which CNCF reviews and certifies via a public process.

"Certification is essential because it provides consistency across different commercial and open source implementations of Kubernetes," said Dan Kohn, executive director of the Cloud Native Computing Foundation. "Reaching this milestone of 100 vendors is indicative of the ubiquitous adoption of Kubernetes across cloud and enterprise software companies ranging from startups to the biggest technology vendors in the world."

To remain certified, vendors provide the latest version of Kubernetes at least yearly, ensuring that users have access to the latest features the community has worked hard to deliver. Any vendor is invited and encouraged to run the conformance test suite and submit for review and certification by CNCF. End users should make sure their vendor partners certify their Kubernetes offering and can confirm that certification using the same open source test suite.
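For readers curious what running that suite looks like in practice, CNCF conformance runs are typically driven by Sonobuoy, the program's open source test harness. The Python wrapper below is an illustrative sketch only; the "run --mode=certified-conformance" and "retrieve" subcommands are real Sonobuoy commands, but the surrounding script is not part of the program.

    import subprocess

    def run_conformance() -> str:
        # Launch the certified-conformance suite against whatever cluster the
        # current kubeconfig points at; this can take an hour or more.
        subprocess.run(
            ["sonobuoy", "run", "--mode=certified-conformance", "--wait"],
            check=True,
        )
        # Download the results tarball, which is what a vendor submits to
        # CNCF for review as part of the public certification process.
        result = subprocess.run(
            ["sonobuoy", "retrieve"],
            check=True, capture_output=True, text=True,
        )
        return result.stdout.strip()  # local path to the results archive

    if __name__ == "__main__":
        print("Results archive:", run_conformance())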

To learn more about the program, see FAQs here.


About Cloud Native Computing Foundation

Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry's top developers, end users, and vendors, and runs the largest open source developer conferences in the world. Supported by more than 500 members, including the world's largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit http://www.cncf.io.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page. Linux is a registered trademark of Linus Torvalds.


Read the original post:
Cloud Native Computing Foundation Reaches Over 100 Certified Kubernetes Vendors - Container Journal

GitHub Universe – the elephant in the room for open source is called 'going commercial' – Diginomica

The cloud is a key component of any company's digital transition path. Most businesses now understand that coming to grips with the cloud model and how to exploit it is an inevitable, unavoidable step. It is perhaps less understood that a consequence of this is that they must also come face to face with using open source software, even if only indirectly.

Such an observation might at first prompt a 'so what?' response, the 'what?' in question being just how much the world of applications development has changed as part of that cloud transition process. But in practice the change is significant, for a couple of important reasons.

For example, even as recently as 10 years ago the majority of applications development work was built around single-vendor, proprietary code of one sort or another. Open source code was very much on the periphery, providing a way of building some of the then-bleeding-edge applications that were starting to appear.

Now, of course, it is reckoned that there is hardly an application produced that does not contain at least some open source code in it, with many being complex amalgams of existing open source components and new code. Couple this with its particular prominence throughout the world of cloud-specific applications and services and it is easy to see the reliance that enterprise users are now putting on it.

The other main issue is that open source code is developed by a global community of developers rather than companies. And even if, as now, the majority of those developers do work for companies, they are still part of that community and its spirit and sense of direction. What happens in that community can be more important to enterprise users than might ever have been the case in days gone by.

This has put new pressures high up the enterprise CIO's checklist. Two particularly important areas concern which licences are being used by every component of an open source application, not least because that can be a cause of legal problems and contentions, and whether some of the older open source components are still being properly maintained. A third issue now emerging is the possibility that the open source community may take a dislike to some aspects of the work it is asked to support and decide not to do it.

That last point is, at least in part, a reference to GitHub itself, and the subject did raise its head at last week's GitHub Universe conference in San Francisco. The organisation not only acts as a repository for the world's open source contributors, ranging from major software companies down to individual developers, but also acts as their distributor and sales agent. This is where the issue comes to a head.

This involves GitHub's contract with ICE, the US government agency Immigration and Customs Enforcement. This is the agency which many in the USA, including some in the open source community and some staff at GitHub itself, feel is responsible for the separation of Mexican children from their parents at the US-Mexico border. Employees have asked GitHub CEO Nat Friedman to cancel the contract, and one staff member has publicly resigned over the matter.

But rather than digress into the details of that issue, which has already been widely reported, it is worth considering the sea-change that is shaping up in the world of applications development. Whereas an employee of a software house had the option of leaving if they did not like what their employer was producing, or the customers that bought it, they had no measure of control over matters beyond departing the fold. The all-important software licence was held by the employer.

With open source, however, that does not hold true. These days most software businesses are at least starting to build open source applications (it makes sense in a cloud environment), so they hold the licence for that application. But only up to a point.

The key advantage of open source is that developers can use code from a wide range of repositories held and maintained by GitHub, many of which are public. This makes application development much easier and quicker because lots of routine processes do not have to be rewritten; they are already there in one of the repositories to be used. Indeed, GitHub makes its money from selling their use to commercial software developers.

But many of those code components have been developed by individuals or small teams, and this is where problems can, and do, arise. It can be that the licence for a component does not allow commercial use, and this is becoming far more likely as components once developed for, say, gaming purposes amongst groups of individuals, for fun and entertainment, find their way into commercial projects. The increasing use of both gaming and mobile app coding models in new cloud-native business applications means that reusing existing code designed for those purposes makes a great deal of sense, but it potentially opens up a large number of problems.

Similarly, it is these old open source code components that run the risk of being left unmaintained, yet unknowingly used in new applications. That could, for example, leave business users at risk of finding themselves non-compliant with regulations covering their industry or, worse still, facing software failures that are very difficult to trace.

In a brief conversation with me, CEO Friedman did acknowledge that there is a growing risk that enterprises might feel increasingly threatened by the change in the balance of power, and said it is something that GitHub is starting to address. In particular, he sees two of its most important announcements at the Universe conference as directly addressing these problems.

There was, as might be expected, a goodly clutch of new products and services announced at the event, including a completely new environment for mobile applications that covers both Android and iOS, both of which should be available early next year, and a re-engineering of the code repositories to make them more readily deliverable to users. The two key ones, particularly when it comes to helping enterprises manage their open source portfolios, are a new Sponsorship Scheme and the new Code Vault.

The Sponsorship Scheme is the work of Project Manager Devon Zeugal and is aimed at two audiences. One of them is the individual coder, where a person is felt by others (individuals or companies) to be making a contribution that is, for whatever reason, worthy of financial support so that the work can continue. GitHub has now engineered a service whereby the sponsorship monies can be managed and directed to that individual.

The same approach is being targeted at project management, and it is this one that Friedman sees as the tool through which user businesses can target those code components that are regularly used but no longer supported. It is hoped that attaching financial support to the code itself, as a project, will attract members of the community to provide ongoing support into the future. This should prove significantly cheaper than a software house having to re-engineer the code itself to ensure ongoing compliance.

The Archive idea has been developed by Director of Product Management Kyle Daigle and Thomas Dohmke, Vice President of Special Projects. Its goal is to capture every bit of detail possible about every bit of open source code that has been written. This will include not only the source code but information about the developer(s), the modifications and updates, and, of course, the licence information.

This is complementary to the work being done by the Software Heritage Foundation of the French Institute for Research in Computer Science and Automation (INRIA), with which GitHub is collaborating on the project. One of its novel side issues is that it marks a new use for QR coding: all the data about a code component is stored in that form on photographic film using a specially prepared, long-lasting silver-oxide coating which, according to Dohmke, provides extremely high-density, long-lasting storage.

It also provides commercial users with something of an audit trail for all the open source software components they are ever likely to use, together with information on the type of licence that applies. This could prove invaluable as open source code components become the backbone of just about every application being written. Access to that record provides users with a high level of protection against a wide range of legal 'gotchas'.

All this comes back to the one issue that GitHub is currently not looking at, yet may have to at some point, even though it will certainly raise some complex issues, especially when dealing with the hidden use of old, but perfectly workable, code components in new applications. This is the question of whether GitHub needs to address commercial use by developing some licensing structure of its own.

A discussion with Erica Brescia, GitHub's Chief Operating Officer, suggested that this is not something the organisation is currently considering. In her view it is not a role GitHub should play in the ecosystem, and she doesn't see it being well received by developers if it were to prescribe the ways that they should think about code licensing:

"Now there are some things that we can do, like tell developers that if they don't have any licence assigned to their code, they might want to think about doing that. But I don't think we should be very prescriptive in how people think about that. We are at the centre in a way, but I think our role in the ecosystem is to educate, not to direct around licensing or anything else."

In her view, the problem does not occur that often, because with most licensing within bigger projects, when an individual or organisation contributes code to a project, there is usually a contributor licence agreement that gives the project the rights to that code moving forward. And so the project controls it and the contributor signs over their rights:

"Now, there are cases where, if a company violates the licence terms of a particular piece of open source software, they can be, and have been, successfully sued."

But the contentiousness of the legality surrounding this area is only likely to get worse, especially where individual businesses try to insert their own licences into the legal mix, and especially when a competitor, say, addresses the same market requirement with a solution broadly based on the same open source code.

"Companies with projects that they develop have been looking at changing the licences to try to combat what they feel is kind of IP theft. But the perspective on that, for me, is: if you put code out there under licence, you need to understand what people have the right to do with it, and they're within their rights to build services on top of it."

Put simply, the terms of the several open source licences are geared towards protecting the interests of the contributors in ways that suit them. But they do not fit well with the needs of commercial software houses, especially when they have their own world of licensing to preserve and protect. There have already been legal incidents in the area, and it will quite likely get worse. It most certainly will not be easy, but there does seem to be a time coming when a new licensing structure for open source will be necessary, one that GitHub, together with its contemporary/rival GitLab, would be well placed to develop, front up, and manage.

Here is the original post:
GitHub Universe – the elephant in the room for open source is called 'going commercial' - Diginomica

People have noticed WeWork’s ‘sad’ empty booth at a big software developer conference – Business Insider

WeWork is in the process of massive layoffs this week as the company restructures after its failed attempt at an IPO.

When WeWork released the paperwork for the intended IPO, the company tried to position itself as a tech company rather than a real estate company. Since WeWork makes its money renting shared office space, not selling software, investors and pundits universally rejected that idea.

But WeWork did pursue all kinds of tech projects under the auspices of now-ousted CEO Adam Neumann. The company had big hopes of offering its tenants various software apps, and at one point said it employed 1,000 engineers, product designers and machine learning specialists. It even wrote about its technology choices from time to time on Medium.

That might explain why WeWork had a tiny booth at KubeCon, a big conference held in San Diego this week around Kubernetes, a Google-created open source software project that's become something of a standard in the cloud computing industry. The conference has an estimated attendance of 12,000.

WeWork was known to be a user of Kubernetes or, at least, it was interested in hiring people who knew how to use it. A recent WeWork engineering job posting listed Kubernetes as a "nice to have" skill, among a boatload of other trendy technologies.

Whether the purpose of the booth was to recruit developers or to try to nab startups to rent WeWork office space, we may never know, because the booth was, apparently, unstaffed for much of the event.

The sight of the empty booth was viewed as a symbol of the struggling company by more than one KubeCon attendee, several of whom posted pictures of it on Twitter over the course of the day. One described the sight as "sad."

Go here to see the original:
People have noticed WeWork's 'sad' empty booth at a big software developer conference - Business Insider

Red Hat CodeReady Workspaces 2 Brings New Tooling to Cloud-Native Development – Business Wire

RALEIGH, N.C., KUBECON NA 2019--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, today announced the release of Red Hat CodeReady Workspaces 2, a cloud-native development workflow for developers. The new release of CodeReady Workspaces enables developers to create and build applications and services in an environment that mirrors that of production, all running on Red Hat OpenShift, the industry's most comprehensive enterprise Kubernetes platform.

Today's organizations can use Kubernetes to create and deploy their applications and services, but for developers, Kubernetes adds new challenges to an already complex development workflow. With CodeReady Workspaces, development teams can collaborate more efficiently by centralizing development environment configuration and working in replicable OpenShift containers for development work.

CodeReady Workspaces 2 builds on the features developers loved in the first release (the powerful in-browser integrated development environment (IDE), centralized one-click developer workspaces, and Lightweight Directory Access Protocol (LDAP), Active Directory (AD) and OpenAuth support, among others) along with several new tools and services.

CodeReady Workspaces enables development teams to set up and work in Kubernetes by hosting configurations that define source code, build environment runtimes, and development tools. With the in-browser IDE, source code remains centrally hosted, improving security without sacrificing the speed developers need to stay productive. An administrative dashboard means administrators supporting developer teams have centralized management tools and dashboards to monitor CodeReady Workspaces and developer workspace performance.

As part of the Red Hat portfolio, CodeReady Workspaces is supported by Red Hat's award-winning enterprise support for developer workspace tooling.

CodeReady Workspaces is included in Red Hat OpenShift and will be available in the OpenShift OperatorHub in the coming weeks.

Supporting Quotes
Brad Micklea, vice president of Developer Tools, Developer Programs, and Advocacy, Red Hat
"As more organizations are adopting Kubernetes, Red Hat is working to make developing in cloud native environments easier, offering the features developers need without requiring deep container knowledge. Red Hat CodeReady Workspaces 2 is well-suited for security-sensitive environments and those organizations that work with consultants and offshore development teams."

Ivan Krnić, head of software development, CROZ
"As a software development company, we provide custom services for our clients and it is important for us to be able to transition between multiple technologies any time we need. With Red Hat CodeReady Workspaces, we're able to manage a large number of projects and contain all the tools we need within the workspace, and not worry about installing tools whenever a developer uses a different workstation or hardware, keeping our code and machines secure."

Arnal Dayaratna, research director, IDC
"By providing a centralized environment that contains all of the developer tools needed to develop, build, test and debug Kubernetes-based applications, CodeReady Workspaces streamlines and simplifies adoption of Kubernetes. CodeReady Workspaces 2 facilitates the development of container-native, Kubernetes-based applications by empowering developers to leverage an updated user interface and VSCode extensions, in addition to enriched functionality for sharing development workspaces. As cloud-native development accelerates in the enterprise, developer tools such as CodeReady Workspaces are likely to experience increased importance because of their ability to simplify Kubernetes development."



About Red Hat, Inc.
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Forward-Looking Statements
Certain statements contained in this press release may constitute "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to the ability of the Company to compete effectively; the ability to deliver and stimulate demand for new products and technological innovations on a timely basis; delays or reductions in information technology spending; the integration of acquisitions and the ability to market successfully acquired technologies and products; risks related to errors or defects in our offerings and third-party products upon which our offerings depend; risks related to the security of our offerings and other data security vulnerabilities; fluctuations in exchange rates; changes in and a dependence on key personnel; the effects of industry consolidation; uncertainty and adverse results in litigation and related settlements; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; the ability to meet financial and operational challenges encountered in our international operations; and ineffective management of, and control over, the Company's growth and international operations, as well as other factors. In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic and political conditions, governmental and public policy changes and the impact of natural disasters such as earthquakes and floods. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of this press release.

Red Hat, the Red Hat logo, and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

Go here to see the original:
Red Hat CodeReady Workspaces 2 Brings New Tooling to Cloud-Native Development - Business Wire

FanX’s Bryan Brandenburg Unveils ‘Zenerchi’ That Aims to Change the Way In Which We Visualize Human Physiology – Grit Daily

Known for his impressive work alongside FanX Salt Lake's Dan Farr, co-founder Bryan Brandenburg is turning his superhero-like mind towards Zenerchi, a project that has been in the works for decades, going back to his years in the biotech industry, long before the 2013 birth of FanX.

"The human body, along with everything else in the universe, is the most beautiful, elegant creation by far. We think we're smart by designing skyscrapers and Teslas and solar panels, but compared to the elegance of the universe and the human body, we're light years away from that kind of design. What we're doing is creating access that makes it possible to simulate and visualize the elegance and beauty of the human body in a way that has never been done before."

Zenerchi, formulated from the words zen, energy, and chi, can be applied to a wide variety of use cases, far beyond just the biomedical and pharmaceutical sectors. It gives superpowers to every entrepreneur, medical professional, legal practitioner, and student, regardless of industry, lending x-ray vision that unmasks the atoms and data that comprise the human anatomy.

The company plans to unveil its first public-facing tech implementation next year with a simulation visualization AI lab, according to its October announcement.

In an exclusive interview with Grit Daily, Brandenburg unveiled and dove into Zenerchi. In the company's latest news, Zenerchi's platform is built upon open source physiology simulation software that was developed by teams at Stanford University, MIT, Indiana University, Harvard Medical School, and, oh yeah, the U.S. Department of Defense.

Grit Daily, which has covered FanX Salt Lake for the past two years, is very familiar with Brandenburg's background. We found this project to be quite the game-changer once unveiled to the public, and it can bring a positive, if not paradigm-shifting, impact on wellness, biomedical, pharmaceutical, and legal research.

Grit Daily: Let's talk about your transition away from FanX Salt Lake into the world of Zenerchi and physiology. Can you walk us through your background?

Bryan Brandenburg: I went to college in math and physics after repairing computers and electronics on fighter jets for the Air Force. I dropped out about one semester short of a double degree in math and physics to start writing video games on the Commodore 64 because I thought video games were the future, and I wanted to be part of it on the ground floor. Turned out the timing was good and I had some really good success. I started a company called Sculptured Software, which was later acquired by Acclaim, and worked on IBM PC, Atari, Amiga and Apple, too. I had a great gaming career.

GD: You have extensive experience in the gaming sector. Can you talk about that?

BB: From my involvement with Sculptured Software/Acclaim, I then started another game company that was acquired by another publicly traded company. There I executive produced products for Disney and Hasbro. Fast forward five years later, I was Chief Profit Officer at DAZ3D creating 3D software and 3D models with Dan Farr, my partner at FanX. The company built a library of about 10,000 models and 3D software and sold these products online to about a million artists and animators worldwide. After that, I took over as CEO of Zygote Media Group, who does world-class anatomy and visualization for physiology.

Brandenburg describes most of his career as that of a scientist with real-time, high-performance graphics and scientific visualization expertise:

I really got to thinking, when I was at Zygote, about the whole concept of fractal physiology. It was something that they didn't have an appetite for at the time. It was probably too early. I really thought there was a great opportunity to visualize the human body from gross anatomy all the way down to atoms and quarks.

But it wasn't until 2012 that Farr brought Brandenburg into his vision of creating a comic con that set itself apart from the almost infinite number of comic cons out there today.

Back in 2012, Dan and I were actually working on a 3D software company when he said, "Let's start a comic con." I really was familiar with them but had never been to one. I went to one in Portland with him and thought it was pretty fun. I thought I would help him get it started and get back to software.

Yet, successful as FanX was and still is, the Utah-based comic-con still stirred pots outside of the state.

"This group down in San Diego decided that even though there were 150 comic cons out there, we were somehow causing problems," Brandenburg explained, adding that he "spent a lot of time helping Dan get through all of that."

But Brandenburg anticipates FanX's final chapter is likely to be a happy one for him and Farr in the Ninth Circuit Appeals Court.

I felt like we got through that very successfully. The company is in a good place and is very strong now. I was able to devote myself full-time starting early this year.

For the average millennial who may be in school, whether it's medical school or nursing school, our question was how, if at all, this technology could be implemented for and utilized by the millennial demographic.

"That's a great question," Brandenburg responded, adding his even better answer:

I have millennial children. One of them is a nurse whose mother is a nurse and grandmother is a nurse. I have another daughter who has a Master's in Speech Therapy. My son-in-law has a master's degree in social work and works in mental health. I think the great opportunity for young people, and why this is the future for you and for everybody, is that I believe augmented reality is a computing platform of the future. We are building a platform based on augmented reality, virtual reality, and high-end visualization, the kind you would see in Fortnite and video games of today's era. Young people today want to reach their highest potential physically and mentally; we're creating tools to do that like never before.

GD: So, let's take the wearables industry for example, and apply some Zenerchi!

BB: If you think about how best to understand your body, there is the whole world of wearables for wellness and fitness. Just last week, Google announced it bought Fitbit for a little over $2 billion. I think that is very much an indicator of where all of this is going. People using the Internet of Things (IoT) and advanced technologies like augmented reality (AR) and VR are going to be able to understand their body more than ever. The person who is going to live to 150 years old? He's alive today. He is probably a millennial. We think we're going to be creating world-class visualization, simulation, and AI tools that will bring in a whole new era of understanding of the human body.

GD: In our full conversation, you mentioned the FDA and being able to work alongside it, rather than against it. Can you expand?

BB: The FDA requires testing right now on real people. We think we can complement FDA clinical trials with tests on simulated people. Instead of doing 1,000 to 10,000 at a time, we can do one million to 10 million at a time using genome data and an understanding of human physiology that has never been had before. We can use the same technology to test products that can't afford FDA approval, like nutritional supplements.

From a lawyer's perspective, I have my own criminal defense practice. I used to intern and work at a personal injury and medical malpractice firm, which, as many know, is all about understanding the mechanics of the human body and anatomy. In today's digital age, utilizing visual aids in the courtroom is a major advantage and almost essential in communicating effectively with the jury.

GD: Have you thought about how this could eventually be used in a courtroom, where it could be a licensed platform for the legal system?

BB: Absolutely. To go back to the late '90s, I sold my game company to a publicly traded company called Engineering Animation. They have since been acquired, but they were one of the premier companies in the country doing visualization for the legal industry in court. They visualized things like the Oklahoma City bombing, many accidents, and medical malpractice situations. Even back then, I was thinking, it is amazing that we can recreate the impact on the human body in a video animation.

GD: So, let's apply real-time to that equation.

BB: Now, what we're doing today is doing it in real time and being able to simulate multiple outcomes in real time. This is right on track and an area where I do have experience. For us back then, it was almost a $100 million-a-year business to do visualizations for CNN and the courtroom and pharmaceutical companies to demonstrate how their drugs worked. This is right on track for being able to utilize a real-time simulation platform to do the same thing.

Transitioning from the courtroom, and adding in a potential use case with respect to medical education, Brandenburg presented us with such a scenario:

For example, we have cardiovascular simulations that actually analyze and visualize the blood flow through the arteries. As part of that simulation, we can add medical devices like stents into the artery and examine how a device will affect blood flow and prevent strokes from blood clots in the arteries. Being able to visualize that with a particular medical device company's stent is super valuable: you could take a variety of patients, simulate across all of them how that device would perform, and validate or invalidate the case in question.

In terms of medical education, we have developed an interface into our fractal physiology where, across all of the major anatomy systems, from cardiovascular, circulatory, nervous system, digestive, endocrine, and so on, we can drill down from major to minor systems, going from the circulatory system to the heart and arteries, to heart tissue, to heart cells, to the proteins and molecules that make up the tissue, all the way down to the atomic and subatomic level.

That kind of visualization is now available on modern devices, but we can drill down in a "Fantastic Voyage" paradigm and provide a level of understanding that is unprecedented. If you look at where all the breakthroughs are being made in the medical community, it's not about gross anatomy medical products anymore; it's about pharmaceutical products that are re-engineering and creating new proteins that create a paradigm shift in the biochemistry and physiology. Being able to visualize at a protein and molecular level is the right thing at the right time.

With support from universities such as Stanford, MIT, Indiana University, and Harvard Medical School, as well as the U.S. Department of Defense, Brandenburg explained the collective vision behind Zenerchi.

"What we discovered in the initial planning of this company was that there were over 1,000 physiology simulation and visualization software products being developed by world-class organizations, including the ones you mentioned," he explained.

"We systematically evaluated which ones were being utilized and would have long-term importance, which ones had the most activity and the most potential. We said, 'You know what? We have the opportunity to create a holistic platform.' You have a brain simulator, a cell simulator, a molecular dynamics simulator, a cardiovascular simulator, all of very different designs, and none of them talk to each other. We designed a platform where we could systematically bring in the open source software that is freely available, as long as we credit the organizations, and then create an interface between the modules so that not only are they much more accessible than they were before and produce much more meaningful visual results, but now they are going to start to talk to each other."

GD: What was the general feedback you've had from these universities?

BB: We have been reaching out to universities, including Stanford, MIT, and Harvard; we have relationships there and they're excited about our ability to take their babies and paradigm-shift them into the 21st century with modern technologies like cloud technology and real-time gaming engines like Unreal and Unity.

GD: The software itself is open-source. Can you explain to readers how that works?

BB: As a lawyer, you understand the nature of open source: most of the licenses are the MIT open source license, which basically means anybody can use the code, as long as they credit the creator, and if you modify the core source code, you contribute it back to the open source project, which we're adhering to. The intellectual property we're building, which will make it extremely valuable (or already has), is the visualization, simulation, and AI in a cohesive platform. If you think of these as self-contained simulation packages within a broader platform solution, with a visualization and artificial intelligence layer on top of that, that is where our IP gets very valuable.

GD: How could you make this more of a license for those who wish to have more specialized access?

BB: Can we make the individual units, for example, WholeCell from Stanford or the SimVascular project from Stanford or the protein prediction software out of MIT, more valuable? Absolutely. But each is an open-source product. The real opportunity for us, and the community as well, is the broad platform connecting the simulation products and providing a world-class visualization solution, taking all that data into the cloud, and then being able, using artificial intelligence and machine learning, to learn from the simulations and predict results that will create opportunities in medical treatment and disease prevention.

While exciting, the company's software also presented a very realistic concern that anyone in today's digital age would be curious about: data protection and privacy. With companies such as Facebook, Capital One, and other tech giants finding themselves victimized by data breaches, it's a question that doesn't go unspoken.

Brandenburg touched on his four-year history at Symantec, the company known for its Norton security suite of utility products.

GD: Touching on the cloud, how do cybersecurity and data come into play here, from a software development standpoint and, of course, when rolling out to the general public?

BB: While working at Symantec, I worked in the Peter Norton group as the external development manager for the Norton family of products. I wrote the first security business plan that then became the core business for the company. I had a small contribution to that, but I spent a lot of time visiting with people like New Scotland Yard and the Dutch computer crime unit as part of my work. I have a good background in cybersecurity. Two of the members of my team are Symantec veterans as well. We understand that medical data and personal data are going to be ultra-important. Our solution that is in the works is going to use blockchain to protect that data.

GD: Moving forward, what would you consider to be one or some of the bigger challenges that you feel the company currently faces before its public unveiling? In other words, what's the next obstacle to overcome in the company's journey moving forward?

BB: The biggest obstacle to come? I think the more we get into this, the more we realize what an amazing opportunity it is. The total addressable market is $10 trillion. If everything goes right, we might exceed 0.01% of that. It's a ginormous market. We think we have the right product at the right time.

The biggest challenge is that it's a $10 trillion market, so there are lots of well-funded companies in the space. We have to be very strategic with the funding we raise and the product we develop. We have to be wise and choose the right direction in the sea of very large indirect competitors. We have to carefully position ourselves to find our place within that $10 trillion market and be able to get traction and get to market in a way that is meaningful.

GD: On the flip side, what do you think is the biggest strength of this project and the team you've seen thus far?

BB: Our biggest strength is... I'm not going to disclose some of the key things because they're proprietary, but I do think that our approach to fractal physiology is one of a kind. We haven't identified anyone who has taken this approach to medical education, to simulation, to visualization. People are focused on gross anatomy or cells or molecules and proteins, but nobody is connecting the dots like we are planning to. It's very much a fractal world in the human body, where you have to pay attention to what is going on at an atomic level, a protein level, a cellular level. All the systems are connected in a meaningful way. Our strategy of bringing in Electronic Arts veterans to visualize in a high-performance way, along with medical doctors and medical illustrators and animators, and our vision for the product: nobody has our unique vision, and we have lots of meaningful experience to execute on it. I'm certain of that. That is our strength.

GD: In terms of education, do you see in the future having some internal education programs? Let's say you're hiring a new candidate or an intern or whoever may be potentially joining this product. What if somebody isn't as familiar with blockchain, or the experience of blockchain and AR and VR? Do you think having programs in place, if you don't already, could be valuable? Is that something you have thought about at this point with regards to the project itself?

BB: No. We take a little different approach. My strategy for building successful companies has been to go down the path of hiring really smart people who love to learn, to hire Renaissance people that already are polymaths in their own right, in their own area. People that have no problem saying, "Oh my gosh, I get to learn about blockchain at work," or, "Awesome! I get to learn about human physiology," and being excited about that. There isn't any formal training other than hiring people who love to learn and are really smart.

GD: Whether you're speaking to the millennial demographic (including your children) or the average entrepreneur, what do you want individuals to take away from Zenerchi?

BB: If I could choose one takeaway for the vision of our company in the context of our conversation, it would be that the human body, along with everything else in the universe, is the most beautiful, elegant creation by far. We think we're smart by designing skyscrapers and Teslas and solar panels, but compared to the elegance of the universe and the human body, we're light years away from that kind of design. We're creating access that makes it possible to simulate and visualize the elegance and beauty of the human body in a way that has never been done before.

Go here to see the original:
FanX's Bryan Brandenburg Unveils 'Zenerchi' That Aims to Change the Way In Which We Visualize Human Physiology - Grit Daily

GitHub on mission to secure the world’s open source software – Technology Decisions

Securing the world's open source software is a formidable mission, and one that GitHub has chosen to accept.

On 14 November, the hosting giant launched GitHub Security Lab, a platform designed to empower people to secure open source code.

Through the platform, participants can access GitHub's analysis engine, CodeQL, which helps users find and eradicate vulnerability-causing code, as well as thousands of hours of security research, according to a blog post by GitHub's Vice President of Product Management, Security, Jamie Cool.

Users can also earn bounties of up to US$3000 for writing new CodeQL queries that find multiple, or a class of, vulnerabilities in open source code with high precision.
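CodeQL queries themselves are written in CodeQL's own query language, but the workflow around them is scriptable. As an illustrative sketch only, the Python snippet below shells out to the CodeQL CLI to build a database from a JavaScript project and run the standard query pack against it; "database create" and "database analyze" are real CLI subcommands, while the project path and output names here are placeholders.

    import subprocess

    # Build a CodeQL database from the project's source tree (placeholder paths).
    subprocess.run(
        ["codeql", "database", "create", "my-db",
         "--language=javascript", "--source-root=./my-project"],
        check=True,
    )

    # Run the standard JavaScript query pack and emit SARIF results, the
    # format most code-scanning tools consume.
    subprocess.run(
        ["codeql", "database", "analyze", "my-db",
         "codeql/javascript-queries",
         "--format=sarif-latest", "--output=results.sarif"],
        check=True,
    )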

Cool said these tools would help the Lab's security researchers, maintainers and partner companies, such as Google, Intel, Microsoft and VMware, fight challenges of scale, expertise and coordination.

"The JavaScript ecosystem alone has over one million open source packages. Then there's the shortage of security expertise: security professionals are outnumbered 500 to one by developers. Finally, there's coordination: the world's security experts are spread across thousands of companies," he said.

Lab researchers have already found and published 105 common vulnerabilities and exposures (CVEs), according to the site.

As more vulnerabilities are discovered, participants and end users will need better tools to handle them, Cool said.

Currently, "forty percent of new vulnerabilities in open source don't have a CVE identifier when they're announced, meaning they're not included in any public database. Seventy percent of critical vulnerabilities remain unpatched 30 days after developers have been notified," he said.

GitHub expects the Lab to help improve responses to newly discovered vulnerabilities by ensuring they are only announced when maintainers have fixed affected code and developers can quickly update affected software.

The Lab also intends to boost project participation through events and the sharing of best practices.

Image credit: stock.adobe.com/au/maciek905

Go here to see the original:
GitHub on mission to secure the world's open source software - Technology Decisions

GitHub Wants to Preserve Open-Source Code in the Arctic World Archive – Beebom

GitHub has announced the Arctic Code Vault, the Microsoft-owned company's new project that aims to archive all open-source software. To make this possible, GitHub has teamed up with the Long Now Foundation, the Internet Archive, the Software Heritage Foundation, the Arctic World Archive, Microsoft Research, the Bodleian Library, and Stanford Libraries.

"It is a hidden cornerstone of modern civilization, and the shared heritage of all humanity. The mission of the GitHub Archive Program is to preserve open source software for future generations," GitHub notes on its website.

The first snapshot of every active public GitHub repository will be captured on February 2, 2020. The collected data will be preserved in the Arctic World Archive (AWA), a decommissioned coal mine situated in the Svalbard archipelago in Norway.

In addition to this, GitHub has formed an advisory panel that includes experts in fields like anthropology, archaeology, history, linguistics, archival science, and futurism to help decide the contents to be included in the archive.

"The snapshot will consist of the HEAD of the default branch of each repository, minus any binaries larger than 100kB in size. Each repository will be packaged as a single TAR file," states GitHub.
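As a rough sketch of that packaging rule: the Python snippet below walks a local checkout (standing in for the HEAD of the default branch), skips anything over 100 kB, and writes the rest into a single TAR file. It deliberately simplifies GitHub's stated rule, which excludes only binaries above that size, and the paths are hypothetical.

    import os
    import tarfile

    SIZE_LIMIT = 100 * 1024  # 100 kB, per the snapshot description above

    def package_repository(checkout_dir: str, out_tar: str) -> None:
        with tarfile.open(out_tar, "w") as tar:
            for root, dirs, files in os.walk(checkout_dir):
                dirs[:] = [d for d in dirs if d != ".git"]  # skip git metadata
                for name in files:
                    path = os.path.join(root, name)
                    if os.path.getsize(path) > SIZE_LIMIT:
                        continue  # simplified: the real rule skips only binaries
                    # Store paths relative to the repository root.
                    tar.add(path, arcname=os.path.relpath(path, checkout_dir))

    package_repository("./my-repo", "my-repo-snapshot.tar")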

In case you're wondering, the captured data will be stored on 3,500-foot film reels encoded by Piql, a company that has expertise in this field. The films are rated to last for 500 years, but aging tests hint that they could last up to 1,000 years.

In other news, we saw the Microsoft research team collaborate with Warner Bros. to store the iconic 1978 Superman movie on a piece of quartz glass as part of Project Silica, the tech giant's research project.

More here:
GitHub Wants to Preserve Open-Source Code in the Arctic World Archive - Beebom