Facebook to use artificial intelligence in bid to improve renewable energy storage – CNBC

Facebook and Carnegie Mellon University have announced they are trying to use artificial intelligence (AI) to find new "electrocatalysts" that can help to store electricity generated by renewable energy sources.

Electrocatalysts can be used to convert excess solar and wind power into other fuels, such as hydrogen and ethanol, that are easier to store. However, today's electrocatalysts are rare and expensive, with platinum being a good example, and finding new ones hasn't been easy as there are billions of ways that elements can be combined to make them.

Researchers in the catalysis community can currently test tens of thousands of potential catalysts a year, but Facebook and Carnegie Mellon believe they can increase the number to millions, or even billions, of catalysts with the help of AI.

The social media giant and the university on Wednesday released some of their own AI software "models" that can help to find new catalysts but they want other scientists to have a go as well.

To support these scientists, Facebook and Carnegie Mellon have released a data set with information on potential catalysts that scientists can use to create new pieces of software.

Facebook said the "Open Catalyst 2020" data set required 70 million hours of compute time to produce. The data set includes "relaxation" calculations for a million possible catalysts as well as supplemental calculations.

Relaxations, a widely used measurement in catalysis, are calculated to see if a particular combination of elements will make a good catalyst.

Each relaxation calculation, which simulates how atoms from different elements will interact, takes scientists around eight hours on average to work out, but Facebook says AI software can potentially do the same calculations in under a second.
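To illustrate the general idea of replacing a slow physics simulation with a learned surrogate, the sketch below trains a regression model on precomputed (descriptor, relaxed-energy) pairs and then scores new candidates in milliseconds. This is a minimal, illustrative Python example: the random descriptors and the scikit-learn model are hypothetical stand-ins, not the Open Catalyst 2020 data format or the models Facebook and Carnegie Mellon actually released.

```python
# Minimal sketch of an ML surrogate standing in for expensive relaxation calculations.
# The descriptors and target values below are synthetic placeholders, not OC20 data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each candidate catalyst is summarized by a 16-dimensional descriptor
# (e.g. composition fractions, surface features) with a simulated "relaxed energy".
X = rng.random((5000, 16))
y = X @ rng.random(16) + rng.normal(scale=0.05, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training happens once, on data that was expensive to simulate...
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# ...after which screening a new candidate takes milliseconds instead of hours.
predicted_energies = model.predict(X_test)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```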

If you study catalysis, "that's going to dramatically change how you do your work and how you do your research," said Larry Zitnick, a research scientist at Facebook AI Research, on a call ahead of the announcement.

In recent years, tech giants like Facebook and Google have attempted to use AI to speed up scientific calculations and observations across multiple fields.

For example, DeepMind, an AI lab owned by Google parent Alphabet, developed AI software capable of spotting tumors in mammograms faster and more accurately than human researchers.


SparkCognition Advances the Science of Artificial Intelligence with 85 Patents – PRNewswire

AUSTIN, Texas, Oct. 12, 2020 /PRNewswire/ -- SparkCognition, the world's leading industrial artificial intelligence (AI) company, is pleased to announce significant progress in its efforts to develop state-of-the-art AI algorithms and systems through the award of a substantial number of new patents. Since January 1, 2020, SparkCognition has filed 29 new patents, expanding the company's intellectual property portfolio to 27 awarded patents and 58 pending applications.

"Since SparkCognition's inception, we have placed a major emphasis on advancing the science of AI through research making advancement through innovation a core company value," said Amir Husain, founder and CEO of SparkCognition, and a prolific inventor with over 30 patents. "At SparkCognition, we've built one of the leading Industrial AI research teams in the world. The discoveries made and the new paths blazed by our incredibly talented researchers and scientists will be essential to the future."

SparkCognition's patents have come from inventors in different teams across the organization, and display commercial significance and scientific achievements in autonomy, automated model building, anomaly detection, natural language processing, industrial applications, and foundations of artificial intelligence. A select few include surrogate-assisted neuroevolution, unsupervised model building for clustering and anomaly detection, unmanned systems hubs for dispatch of unmanned vehicles, and feature importance estimation for unsupervised learning. These accomplishments have been incorporated into SparkCognition's products and solutions, and many have been published in peer-reviewed academic venues in order to contribute to the scientific community's shared body of knowledge.

In June 2019, AI research stalwart and two-time Chair of the University of Texas Computer Science Department, Professor Bruce Porter, joined SparkCognition full time as Chief Science Officer, at which time he launched the company's internal AI research organization. This team includes internal researchers, additional talent from a rotation of SparkCognition employees, and faculty from Southwestern University, the University of Texas at Austin, and the University of Colorado at Colorado Springs. The organization works to produce scientific accomplishments such as the patents and publications listed above, to advance the science of AI, and to support SparkCognition's position as an industry leader.

"Over the past two years, we've averaged an AI patent submission nearly every two weeks. This is no small feat for a young company," said Prof. Bruce Porter. "The sheer number of intelligent, science-minded people at SparkCognition keeps the spirit of innovation alive throughout the research organization and the entire company. I'm excited about what this team will continue to achieve going forward, and eagerly awaiting the great discoveries we will make."

To learn more about SparkCognition, visit http://www.sparkcognition.com.

About SparkCognition

With award-winning machine learning technology, a multinational footprint, and expert teams, SparkCognition builds artificial intelligence systems to advance the most important interests of society. Our customers are trusted with protecting and advancing lives, infrastructure, and financial systems across the globe. They turn to SparkCognition to help them analyze complex data, empower decision-making, and transform human and industrial productivity. SparkCognition offers four main products: Darwin™, DeepArmor, SparkPredict, and DeepNLP™. With our leading-edge artificial intelligence platforms, our clients can adapt to a rapidly changing digital landscape and accelerate their business strategies. Learn more about SparkCognition's AI applications and why we've been featured in CNBC's 2017 Disruptor 50, and recognized four years in a row on CB Insights AI 100, by visiting http://www.sparkcognition.com.

For Media Inquiries:

Michelle Saab, VP, Marketing Communications, SparkCognition, [email protected], 512-956-5491

SOURCE SparkCognition

http://www.sparkcognition.com


St. Louis Is Grappling With Artificial Intelligence’s Promise And Potential Peril – St. Louis Public Radio

Tinus Le Roux's company, FanCam, takes high-resolution photos of crowds having fun. That might be at Busch Stadium, where FanCam is installed, or on Market Street, where FanCam set up its technology to capture Blues fans celebrating after the Stanley Cup victory.

As photos, they're a fun souvenir. But paired with artificial intelligence, they're something more: a tool that gives professional sports teams a much more detailed look at who's in the audience, including their estimated age and gender. The idea, he explained Thursday on St. Louis on the Air, is to help teams understand their fans a bit better: When are they leaving their seats? What merchandise are they wearing?

Now that the pandemic has made crowd size a matter of public health, Le Roux noted that FanCam can help teams tell whether the audience has swelled past 25% capacity or how many patrons are wearing masks.

But for all the technology's power, Le Roux believes in limits. He explained that he is not interested in technology that would allow him to identify individuals in the crowd.

"We don't touch facial recognition. Ethically, it's dubious," he said. "In fact, I'm passionately against the use of facial recognition in public spaces. What we do is use computer vision to analyze these images for more generalized data."
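To make concrete what "generalized data" from a crowd photo can mean, the short Python sketch below counts people in an image with OpenCV's stock pedestrian detector; it reports only an aggregate number and makes no attempt to identify anyone. This is a generic illustration under stated assumptions (the image path is a placeholder), not FanCam's actual pipeline.

```python
# Generic illustration of aggregate crowd analysis: count people, keep no identities.
# Not FanCam's system; "crowd.jpg" is a placeholder image path.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("crowd.jpg")
if image is None:
    raise FileNotFoundError("crowd.jpg not found")

# detectMultiScale returns bounding boxes for detected people plus confidence weights.
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8))

# Only an aggregate statistic leaves this script -- no faces, no identities stored.
print(f"approximate number of people detected: {len(boxes)}")
```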

Not all tech companies share those concerns. Detroit now uses facial recognition as an investigatory tool. Earlier this year, that practice led to the wrongful arrest of a Black man. The ACLU has now filed a lawsuit seeking to stop the practice there.

Locally, Sara Baker, policy director for the ACLU of Missouri, said the concerns go far beyond facial recognition.

"The way in which many technologies are being used, on the surface, the purpose is benign," she said. "The other implication of that is, what rights are we willing to sacrifice in order to engage with those technologies? And that centers, really, on your right to privacy, and if you are consenting to being surveilled or not, and how that data is being used on the back end as well."

Baker cited the license plate readers now in place around the city, as well as Persistent Surveillance Systems' attempts to bring aerial surveillance to the city, as a potential concern. The Board of Aldermen has encouraged Mayor Lyda Krewson to enter negotiations with the company as a way to stop crime, although Baltimore's experience with the technology has yet to yield the promised results.

"That could involve surveillance of the entire city," Baker said. "In Baltimore, that means 90% of outdoor activities are surveilled. I think we're getting to a point where we need to have robust conversations like this when we're putting our privacy rights on the line, because I think we have a shared value of wanting to keep some aspects of our lives private to ourselves."

To that end, Baker said she'd like to see the St. Louis Board of Aldermen pass Board Bill 95, which would regulate surveillance in the city. She said it offers common sense guardrails for how surveillance is used in the city.

Other than California and Illinois, Le Roux said, few states have even grappled with the technology's capabilities.

"I think the legal framework is still behind, and we need to catch up," Le Roux said.

Le Roux will be speaking more about the ethical issues around facial recognition at Prepare.ai's Prepare 2020 conference. The St. Louis-based nonprofit hosts the annual conference to explore issues around artificial intelligence. (Thanks to the ongoing pandemic, Prepare 2020 is now entirely virtual and entirely free.)

Prepare.ai's mission is to increase collaboration around fourth industrial revolution technologies in order to advance the human experience.

Le Roux said he hopes more tech leaders and those who understand the building blocks of technology have a seat at the table as regulations are being written. And Baker said her hope is that local governments proceed with caution in turning to new technologies being touted as a way to solve crime.

"We have over 600 cameras in the city of St. Louis," she said. "We've spent up to $100,000 a pop on different surveillance technologies, and we've spent over $4 million in the past three years on these types of surveillance technologies, and we've done it without any real audit or understanding of how the data is being used, and whether it's being used ethically. And that is what needs to change."

Related Event

What: Prepare 2020

When: Now through Oct. 28

St. Louis on the Air brings you the stories of St. Louis and the people who live, work and create in our region. The show is hosted by Sarah Fenske and produced by Alex Heuer, Emily Woodbury, Evie Hemphill and Lara Hamdan. The audio engineer is Aaron Doerr.


The grim fate that could be ‘worse than extinction’ – BBC News

Toby Ord, a senior research fellow at the Future of Humanity Institute (FHI) at Oxford University, believes that the odds of an existential catastrophe happening this century from natural causes are less than one in 2,000, because humans have survived for 2,000 centuries without one. However, when he adds the probability of human-made disasters, Ord believes the chances increase to a startling one in six. He refers to this century as "the precipice" because the risk of losing our future has never been so high.

Researchers at the Center on Long-Term Risk, a non-profit research institute in London, have expanded upon x-risks with the even-more-chilling prospect of "suffering risks." These "s-risks" are defined as "suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far." In these scenarios, life continues for billions of people, but the quality is so low and the outlook so bleak that dying out would be preferable. In short: a future with negative value is worse than one with no value at all.

This is where the "world in chains" scenario comes in. If a malevolent group or government suddenly gained world-dominating power through technology, and there was nothing to stand in its way, it could lead to an extended period of abject suffering and subjugation. A 2017 report on existential risks from the Global Priorities Project, in conjunction with FHI and the Ministry for Foreign Affairs of Finland, warned that "a long future under a particularly brutal global totalitarian state could arguably be worse than complete extinction."

Singleton hypothesis

Though global totalitarianism is still a niche topic of study, researchers in the field of existential risk are increasingly turning their attention to its most likely cause: artificial intelligence.

In his "singleton hypothesis," Nick Bostrom, director at Oxford's FHI, has explained how a global government could form with AI or other powerful technologies, and why it might be impossible to overthrow. He writes that a world with "a single decision-making agency at the highest level" could occur if that agency "obtains a decisive lead through a technological breakthrough in artificial intelligence or molecular nanotechnology." Once in charge, it would control advances in technology that prevent internal challenges, like surveillance or autonomous weapons, and, with this monopoly, remain perpetually stable.


The Link Between Artificial Intelligence Jobs and Well-Being – Stanford University News

Artificial intelligence carries the promise of making industry more efficient and our lives easier. With that promise, however, also comes the fear of job replacement, hollowing out of the middle class, increased income inequality, and overall dissatisfaction. According to the quarterly CNBC/SurveyMonkey Workplace Happiness survey from October last year, 37% of workers between the ages of 18 and 24 are worried about AI eliminating their jobs.

But a recent study from two researchers affiliated with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) challenged this public perception about AI's impact on social welfare. The study found a relationship between AI-related jobs and increases in economic growth, which in turn improved the well-being of society.

Demand for AI-related jobs has been growing constantly in recent years, but this growth has varied widely across cities and industries. Arizona State University assistant professor Christos Makridis and Saurabh Mishra, HAI AI Index manager and researcher, wanted to understand the effects of AI on society independent of these variables.

For this, they examined the number of AI-related job listings by city in the U.S. using Stanford HAI's AI Index, an open source project that tracks and visualizes data on AI. They found that, between 2014 and 2018, cities with greater increases in AI-related job postings exhibited greater economic growth. This relationship was dependent on a city's ability to leverage its inherent capabilities in industry and education to create AI-based employment opportunities. This meant that only cities with certain infrastructure, such as high-tech services and more educated workers, benefited from this growth.

Next, the researchers studied how this growth translated to well-being at a macro level using data from Gallup's U.S. Daily Poll, which surveys 1,000 different people each day on five components of well-being: physical, social, career, community, and financial. The researchers studied the correlation between the number of AI jobs and the poll results, controlling for many factors, such as the demographic characteristics of a population and the presence of universities in a given city. They found that AI-related job growth, mediated by economic growth, was positively associated with improved well-being, especially for the physical, social, and financial components.
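The core of such a study is, in effect, a regression of a well-being measure on AI-related job growth with demographic and local controls. The sketch below shows the general shape of that analysis in Python with pandas and statsmodels; the variable names and synthetic data are hypothetical stand-ins, not the Gallup or AI Index datasets and not the authors' actual code.

```python
# Hypothetical sketch of a "correlation with controls" analysis.
# Column names and values are invented stand-ins for the Gallup / AI Index variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_cities = 500
df = pd.DataFrame({
    "wellbeing_index": rng.normal(50, 10, n_cities),          # composite of the five components
    "ai_job_growth": rng.normal(0.02, 0.01, n_cities),        # growth in AI-related postings
    "pct_college_educated": rng.uniform(0.1, 0.6, n_cities),  # demographic control
    "median_age": rng.uniform(28, 45, n_cities),              # demographic control
    "has_research_university": rng.integers(0, 2, n_cities),  # local-institution control
})

# OLS with controls: the coefficient on ai_job_growth is the association of interest,
# and, as in the study, it says nothing about causation on its own.
model = smf.ols(
    "wellbeing_index ~ ai_job_growth + pct_college_educated"
    " + median_age + has_research_university",
    data=df,
).fit()
print(model.summary())
```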

This was a surprising finding given the public's concern over AI's potentially adverse effects on quality of life and overall happiness.

The researchers believe that their study is the first quantitative investigation of the relationship between AI and social well-being. While their findings are intriguing, they are also correlative. The study can't conclude whether AI is the cause of the observed improvement in well-being.

Nevertheless, the study makes an important and unique contribution to understanding the impact of AI on society. "The fact that we found this robust, positive association, even after we control for things like education, age, and other measures of industrial composition, I think is all very positive," Makridis says.

Their findings also offer a course of action to policymakers. The researchers suggest that city leaders introduce smart industrial policies, such as the Endless Frontier Act, to support scientific and technological innovation through increased funding and investments targeted for AI-based research and discovery. These policies, along with ones that promote higher education, can help balance the economic inequality between cities by providing them with opportunities to grow.

"Given that [cities] have an educated population set, a good internet connection, and residents with programming skills, they can drive economic growth," Mishra says. "Supporting the AI-based industry can improve the economic growth of any city, and thus the well-being of its residents."

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.


The threats of open source software in cloud native – ITProPortal

The use of open source has become a crucial part of the modern development process. It helps everyone build software faster than ever, but it also puts developers at risk of unknowingly introducing security vulnerabilities into the applications they develop, and placing their own organization or their customers in jeopardy. A single bug in one of the code dependencies can affect an entire application, making it susceptible to compromise. In a cloud native world, these risks are amplified due to the continuous stream of potentially vulnerable code.

Over the past few years, open source has seen massive growth. Today, over 2.5 million developers contribute to open source on platforms like GitHub, which is the largest open source community in the world. It hosts millions of open source projects, some of them not only having thousands of contributors but also acting as dependencies for numerous other repositories. Following the popularity of open source, its use in commercial software has also surged. According to the Synopsys 2020 Open Source Security and Risk Analysis Report, open source components and libraries are the foundation of literally every application in every industry.

The reality is that building an app completely from scratch these days is extremely rare. The proliferation of DevOps and cloud computing has created a new, dynamic and fast-paced environment. More and more organizations are adopting containers and microservices to build cloud native applications, taking advantage of the speed and agility these new technologies have to offer. Amazon engineers reportedly deploy code every 11.7 seconds, whereas Netflix engineers deploy code thousands of times per day.

To keep up with this velocity, developers are increasingly using open source packages and libraries throughout the software lifecycle. It is estimated that 99% of today's codebases contain open source components, and up to 70% of enterprise code is open source. Building software is becoming more like assembling Lego bricks from ready-made open source components, with the remaining code largely serving as glue.

The explosive growth of both open source and cloud native is not accidental. Open source projects dominate the technology scene across an entire cloud native compute stack: Docker, containerd, Kubernetes, GitLab, Jenkins, MySQL, Redis, Harbor, Rancher, and many others. The cloud native world is essentially powered by open source which, in turn, allows for velocity and rapid development lifecycles.

While open source brings many benefits, it also introduces new and potentially serious security threats for organizations.

Open source software is free for anyone to use, explore, and enhance. Developed and maintained by large numbers of contributors, it provides unlimited access to the source code, so that users can modify it as they wish. Like any software, open source is written by humans and, hence, has bugs. According to the Sonatype 2020 State of the Software Supply Chain Report, one in ten open source software downloads is vulnerable, and on average there are 38 known open source vulnerabilities per application.

The main reason behind the vulnerability of open source lies exactly in its public nature. Numerous developers can get involved with little vetting, including bad actors. Open source is, well, open, and built by sometimes unaccountable contributors. While security awareness within the open source community is slowly improving and there have been initiatives to address security issues (e.g. GitHub Security Lab), the lack of central control gives plenty of opportunities for attackers to find holes and vulnerabilities. The governance of open source software varies quite a lot, and with the exception of the very popular, commercially driven projects, one cannot simply rely on quality and security standards being applied equally rigorously across the board. Unlike commercial software, there's no standardized process for dealing with new updates and fixes, and often open source projects lack the resources to find and patch bugs. Who knows how many unpatched vulnerabilities there are?

Driven by the fast pace of DevOps, developers who pull and use code from public repositories can unknowingly incorporate vulnerable, risky, unlicensed, or out-of-date components into their projects. 75% of commercial codebases contain open source security vulnerabilities, and nearly half contain high-risk vulnerabilities. Because of the distributed nature of open source, a vulnerability can remain undetected for a long time, and an attacker who exploits it can do so for a significant period.

In a cloud native environment, containers add another layer of risk on top of open source concerns. For example, 86% of images from Docker Hub were found to be configured to run with root privileges as their default. This broadens the attack surface and opens an easy path to privilege escalation. If attackers manage to exploit a vulnerability in one of the open source components inside a container running as root, they get full access to the host and the application running on it. Combine open source with a poorly configured container, and you are on a path to a security nightmare.

As more and more organizations leverage third-party components, attacks on open source have become very attractive to bad actors. They are investing more time and resources in targeting open source components with malware to infect organizations earlier in the software supply chain. Popular package repositories, such as npm, PyPI, and RubyGems, have become a reliable and scalable malware distribution channel for threat actors. Over the past year, the sheer number of supply chain attacks has surged by 430%.

The attacks are getting more sophisticated and have the potential to reach an extremely large scale. In 2018, an attacker compromised a popular npm package, event-stream, which at that time had over 1.5 million weekly downloads and was depended on by nearly 1,600 other packages on GitHub. The malicious user gained ownership of event-stream by making several significant contributions to the package and then offering to take over its maintenance from the original author. Any developer who used this particular dependency in their application was immediately put at risk.

We are also seeing an increase in attacks targeting the software supply chain of cloud native environments, placing hidden malware in publicly available container images. Just recently, Aqua's cybersecurity research team, Team Nautilus, uncovered that even the SaaS services used by container developers, such as GitHub, Docker Hub, Circle CI and Travis CI, can be abused by attackers for cryptocurrency mining.

Despite serious risks, the use of open source is generally poorly understood and inadequately controlled by organizations. Most breaches are caused by failure to update software components that are known to be vulnerable for months or even years (remember the Equifax hack?). Even though over 85% of open source security vulnerabilities are disclosed with a fix already available, many organizations don't have a process in place to implement those patches. Enterprises are also struggling to keep up with the increased rate of reported open source vulnerabilities. In 2019, their number skyrocketed to over 6,000. That makes keeping track of newly disclosed vulnerabilities and their patches almost impossible to do manually, especially at scale.

Open source vulnerabilities are not going away: humans write code and will continue to make mistakes. Because of the nature of today's software development, there are always going to be third-party components used in the process. Even if an application is deployed without a vulnerability today, at some point in the future a new vulnerability may come out for a particular component version within it. The question is not whether to use open source; it's how you can safely take advantage of open source while addressing the risks associated with it.

To minimize the attack surface and overall risk exposure, it's vital for organizations to gain comprehensive visibility and control over open source. As the use of open source continues to grow, there's a clear need for SCA (Software Composition Analysis): an automated way to evaluate and manage the risks caused by open source vulnerabilities within third-party components.
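At its core, an SCA tool inventories the third-party components in an environment and compares their versions against a feed of known-vulnerable releases. The minimal Python sketch below does exactly that for locally installed packages; the advisory table is a hypothetical, hard-coded stand-in for a real vulnerability database, and production SCA tools cover far more ecosystems, transitive dependencies, and metadata.

```python
# Minimal illustration of the core SCA idea: inventory dependencies, flag known-bad versions.
# The "advisories" table is an invented stand-in for a real vulnerability feed.
from importlib.metadata import distributions
from packaging.version import Version

advisories = {
    # package name -> list of (first fixed version, advisory id); example data only
    "examplelib": [("2.4.1", "EXAMPLE-2020-0001")],
}

def scan_installed():
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        for fixed_in, advisory in advisories.get(name, []):
            if Version(dist.version) < Version(fixed_in):
                findings.append((name, dist.version, advisory, fixed_in))
    return findings

for name, installed, advisory, fixed_in in scan_installed():
    print(f"{name} {installed}: affected by {advisory}, upgrade to >= {fixed_in}")
```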

In the context of cloud native applications, SCA is the basis for more contextual and comprehensive risk-based controls. SCA tools look at open source components in general, but don't necessarily provide visibility into the image structure, dependencies, layers, or configuration, all of which may present additional issues or amplify the risk from known vulnerabilities. And securing open source can itself be done using open source tools, such as the Trivy image scanner.

Finally, it's not enough to identify vulnerable open source components; you need to understand their actual impact on your specific environment and, based on that, act to effectively remediate or mitigate them. As a customer once told me: "It's great that I can see all the vulnerabilities in my images, but now I can't sleep at night." A holistic cloud native security strategy goes beyond identifying vulnerabilities and security issues to include a complete and automated workflow to address them.

While the widespread adoption of open source components poses serious risks, with the right strategy and tools you can efficiently address them in your cloud native CI/CD pipeline and significantly reduce the attack surface. Cloud native is a great opportunity to do security right from the outset and empower developers to be aware of security risks and deliver more secure software faster. If vulnerable images do slip through into your production environment, you will be aware of the risks and have compensating security controls in place.

Benjy Portnoy, Senior Director of Solution Architecture, Aqua Security


Interview: How iText adapts to the evolution of open source & PDF – IT Brief New Zealand

It is easy to take the liberties and benefits of the open source software market for granted. But pause a moment to think about the companies and people who build and shape open source into almost infinite new innovations.

As the 2000s gave rise to a boom in software development, a project called iText was born out of a need to bring open source to PDF, a document format that was originally tightly controlled by Adobe.

iText's VP of marketing and products, Tony Van den Zegel, says iText was a pioneer in focusing on the back-end generation of PDF documents on the fly.

"In the early years we had to fight the fear, uncertainty and doubt that was related to the use of open source software at that time: that open source software is not documented, that the IP of open source software is unclear, and that open source software is not backed by a company," he says.

The decision early on to have the iText project open sourced created a viral effect and built a strong brand, recognising iText worldwide as the leading PDF library. Millions of users have downloaded the iText library and implemented it, giving rise to massive feedback on feature requests and software bugs, resulting in a very robust and mature software product.

In 2008, the PDF format became an open standard, which meant there was more room for PDF innovation. It was the break that iText was waiting for.

iText is now renowned for its performance and dedication to enterprises that require a high volume of PDFs generated or digitally signed in a short time. This is why iTexts technology is found in everything from customer invoices to boarding passes.

iText's global lead of product and services, André Lemos, says it's an exciting place to be. He likens it to being the provider of a cooking ingredient: there are plenty of common ways to use it, but sometimes a chef will come along and create a new dish out of thin air.

"It is certainly true our customers' imagination and needs are much broader than anything that we can think of by ourselves!" he says.

"Taking the 'opensourceness' out of our DNA would be a nigh-on impossible task, and a futile exercise as it is so ingrained in our culture. It would simply not be iText anymore," adds Lemos.

A key element that defines iText is the dual license model that caters for commercial licensing and open source licensing, based on copyleft AGPL. The copyleft license model opens up the source code to the open source community.

"We are unique in the way that we have not built in any limitations to the capabilities of our open source technology when compared to our commercial offering. Both versions are identical from a performance viewpoint," says Van den Zegel.

"Open sourcing our technology brings lots of benefits to us and the community. The community supports the development of iText technology through pull requests. We do provide premium support to our customer base but at the same time recognise that, whatever happens, our customers and users can always self-support through the availability of the source code."

As much of a cliché as it seems, iText believes in the "good developers are lazy" mantra. If something can be automated, why would they waste their time on manual tasks? They would prefer to spend their time more efficiently.

"In that context, we recently developed iText DITO, a low-code offering in the sense that the developer has to write significantly fewer lines of code to generate template-based PDFs compared to using the iText PDF software development kit. Outside of iText DITO, we still want to make the lives of software developers as easy as possible," says Van den Zegel.

Lemos adds that the company places great value in its relationships with users, whether they are open source users, software firms, or enterprises.

"We try to provide our open source users with the best experience possible, whether they have technical questions about managing PDF documents (we actively monitor Stack Overflow, for instance), or, even better, pull requests," says Lemos.

To foster community involvement, iText acts as quickly as possible when open source users submit pull requests on the company's GitHub page. This essentially makes users part of the team.

Lemos explains further: "When a pull request comes up, we have mechanisms in place so that they aren't lost in the void of the Internet, and we also try to educate our contributors by making suggestions on how their contribution could be improved, whether it be from a documentation perspective, or the need to have tests to validate what they are submitting."

"It is a lengthier process than just accepting a pull request at face value, but we know that further down the line, both our open source and commercial users alike will benefit from our quality standards."

Document management is undergoing a digital revolution, with more and more organisations needing to adopt digital document workflows to cope in the modern business world.

Van den Zegel says that a typical process includes the capture, imaging, and management of documents. This process is being widely integrated into cloud-based solutions, driven by cloud computing. Security is a major concern, which means that organisations need to secure content across networks and the cloud.

In the near future, technologies are being developed to automatically recognise document structure in unstructured PDF files, enabling integration with IT systems.

"The rapid growth of data within an enterprise, and the various forms the data is presented in, requires AI-enabled tools such as search functions that can contextually understand queries," he says.

Coming back to PDF, why does iText believe this document format is ideal for digital document workflows?

"The PDF has its place in a modern document management system. In our vision, PDF becomes more and more a data container, and with its fixed document format it visualises consistently, independently of OS, device or software application."

A key benefit is the ability to use secure electronic signatures in a digital document workflow, a concept which has been widely adopted and is well integrated into the PDF specification. Such digital signatures can capture the intention of the individual to enter into a contract, and in addition, be used to encrypt the information and confirm the validity of the signed document.

To demonstrate the versatility of the PDF format, Van den Zegel highlights an interesting feature, the creation of portable collections, more commonly known as PDF portfolios. Introduced by the ISO committee as part of the PDF 1.7 specification, it enables multiple file types to be contained in a single PDF.

A PDF portfolio offers similar functionality to combining multiple files into a single PDF but differs in one major aspect. Simply combining files means that all the files will be converted to PDF, whereas creating a PDF portfolio preserves the files in their original file format and you can edit or modify them in their native application without removing them from the portfolio.

A crucial benefit is that if a PDF portfolio is signed with a digital signature, edits to documents will break the signature, since it covers the whole PDF, including the PDF portfolio and its files. This means many different documents can be protected by a single digital signature.

Lemos adds that the IT world never stops, so iText makes sure its research team is at the cutting edge of innovation.

Based on continuous monitoring of user feedback, the company has released two new products for the iText 7 SDK this year alone, pdfRender and pdfOCR, and will shortly release pdfOptimizer.

"The challenge here is to continue to make quarterly releases of iTexts modular developer line, as well as its business line with iText pdf2Data, ourPDF data extractor, and iText DITO, our low-code document generator."

iText is also a Board member of the PDF Association, and the company has technical staff on the ISO committee for PDF.

"This way, we can ensure that we are never blindsided by any upcoming technology or feature in the PDF space, while also actively contributing so that the ecosystem remains alive and fresh."



Finding Trillions In The Clouds: How Open Source Companies Can Win In A World Dominated By Amazon, Microsoft And Google – Forbes


Open source is no longer just for hackers and hardcore coders. Several open source software companies, including MongoDB, Elastic, Databricks, Confluent, and HashiCorp, have become multibillion-dollar businesses. Their products are used by millions of in-house developers at large corporations around the world as these engineers race to build the sophisticated applications and infrastructure their companies need to stay competitive in a tech-driven world.

In my last post, I discussed how these successful companies have created viable business models based on the open core and cloud services models. But there's a catch: the big cloud service providers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, are also eager to generate revenue from open source code. They often offer low-cost hosted versions of popular open source software, including MongoDB and Elastic, to directly compete with commercial open source companies. For example, Amazon offers a product called DocumentDB (with MongoDB compatibility) priced sharply relative to MongoDB's own cloud-hosted version of its unstructured database, called Atlas. MongoDB and other large open source companies argue their cloud offerings are more robust, have deeper connectivity to third-party applications, and come with expert service and support. All of that may be true, but that hasn't stopped AWS from finding ways to muscle in on the open source gold rush. The cloud vendors also offer the promise of scale, and the performance and availability that they've become famous for, making them very formidable foes.

For open source founders just now establishing business model roadmaps, it's increasingly important to approach co-existence with the big-three cloud providers with finesse. That's especially true because most commercial open source companies depend on at least one cloud provider, and often all three, for hosting and distribution, so they are both partners and competitors. Getting these relationships right is critical to every open source company's long-term success; after all, billions in revenue are at stake.

There are four main strategies for dealing with increased competition from cloud providers, and I'll detail each below. These aren't mutually exclusive; some companies are pursuing more than one of these strategies simultaneously.

Change your License

Some commercial open source companies have taken a dramatic step: MongoDB, Elastic, Confluent, and others have changed the terms of their license agreements to fight back against the cloud providers. With this bold move, open source companies can ensure the big cloud providers effectively cannot host the open source versions of their software, at least not without the cloud vendors impairing their own business models. This has been a controversial decision; open source companies who change their license to make it more restrictive risk alienating current and future community members, who might see such an about-face as aggressive and protectionary, much like the tactics of closed-source software companies. MongoDB's trajectory since its license shift shows, however, that such a strategy can work if the process is managed carefully.

MongoDB introduced its Atlas database-as-a-service managed cloud offering in 2016, and it has grown tremendously fast. MongoDB reported in September that Atlas now accounts for 44% of overall revenue, with over 18,800 of its 20,200 customers using Atlas as of July 31, 2020. Clearly, Atlas is a revenue stream worth protecting, so MongoDB needed to do something to ensure AWS didn't start taking significant market share away from Atlas with its lower-cost DocumentDB product. In response, MongoDB created a new Server Side Public License (SSPL) in late 2018 that specifically limited what cloud providers could offer based on its open source software. Specifically, MongoDB's new SSPL required cloud providers to make any product based on MongoDB's open core freely available to the open source community, thereby limiting a cloud's ability to generate revenue from an SSPL-licensed version of MongoDB, and further threatening that any integrated technologies offered by a cloud would also be impaired commercially. MongoDB's shift was seen by many as contrary to the very nature of open source, yet the change doesn't seem to have materially damaged MongoDB's reputation, considering how quickly its Atlas revenues have continued to grow.

I believe the reason MongoDB was able to introduce a more restrictive license without alienating its customer base is that it spent many years winning the hearts and minds of developers. MongoDB's fervent, dedicated user base understood why it had to change its license: to protect itself from the all-powerful trifecta of Amazon, Microsoft, and Google. Such cloud-protection licenses have been emulated by other open source companies recently, and it will be interesting to see which companies can fend off the clouds without damaging community momentum.

Go Multi-Cloud

The clouds represent the latest in a long line of powerful technology vendors that have sought to lock in their customers, driving favorable long-term economics once such lock-in has taken hold. Companies have become highly sensitive to this ploy and most large enterprises today employ a multi-cloud strategy for just this reason. Native open source cloud services that run, or can run, on multiple clouds represent a very appealing attribute for enterprises. Many of the strongest open source companies have pursued a multi-cloud strategy, affording a huge advantage versus single-platform cloud offerings of their open source versions. The cloud vendors want to lock in enterprises, so selling against that chokehold is very compelling.

Cooperate

Several commercial open source companies have tried a more cooperative approach: if you can't beat 'em, join 'em. Instead of trying to fight against the cloud providers, they've found ways to create mutually beneficial business models. A great example of this strategy is how Databricks works with Microsoft Azure. Databricks users can procure its products directly through their Azure accounts. Essentially, Azure has become a distributor for Databricks products, taking a cut of the revenue. Azure also gets the benefit of selling more computing infrastructure and storage with each new Databricks-on-Azure cluster. Of course, the challenge to this model is having enough leverage to negotiate a fair revenue-sharing agreement. Databricks is a highly successful open source company with a massive, committed community of developers. The company is now valued at over $6B, so it has the market share and clout to negotiate with Microsoft. On the other hand, up-and-coming cloud providers run the risk of being steamrolled, or ignored completely, by the big cloud players.

While cooperating can be risky, it's important to remember that the cloud vendors aren't monolithic entities; they're actually loose federations of sometimes diverging, competing units. For example, Databricks drives significant compute and storage revenue on Azure, helping these most important businesses thrive. Finding ways to drive value for a cloud partner is critical to developing a successful partnership, but taking care not to become overly dependent on one cloud is of paramount importance as well.

Build an Ecosystem

One of the best ways to ensure the big cloud players don't poach your customers is to make your product far more useful and valuable than theirs ever could be. The ecosystem approach entails fostering an interconnected network of software integrations with your product. When many third-party software companies build extensions to your product, it becomes more valuable, since it can then work seamlessly with many different enterprise applications. A cloud provider may be able to offer a lower-cost version of your product, but if your offering has many connectors to other applications, it will be worth the extra cost for enterprise users. A good example of an open source company that has succeeded with this model is HashiCorp. By working closely with its dedicated community for many years, HashiCorp has supported developers in creating over 400 extensions to its Terraform platform. To date, the cloud vendors haven't been able to offer successful competing Terraform-based products because it's nearly impossible to compete with the real Terraform, which has hundreds of integrations to other products.

What all four of these strategies have in common is that they take planning and foresight. You can't just change your license on the fly without having spent years building up goodwill among a huge, committed user base, nor can you build out a multi-cloud solution or create the market share needed to negotiate favorable revenue-sharing deals with 300-pound gorillas overnight. And it takes years of developer support and encouragement to see integrations with your product appear on a wide scale. That's why it's critical for every open source entrepreneur building a company today to think ahead about how to handle the complex relationships with Amazon, Microsoft, and Google.

Note: My firm, GGV Capital, is invested in and I am a board member of HashiCorp.

Thanks to Aghi Marietti, Armon Dadgar, Dave Kellogg, Dave McJannet, Erica Schultz, Jay Kreps, Joseph Jacks, Marco Palladino and Reza Shafii for their kind and patient assistance on this series of posts.


Rocket Insights Joins PathCheck Foundation’s Global Partner Program to Develop Open Source Software to Help Contain the Pandemic – PRNewswire

BOSTON, Oct. 13, 2020 /PRNewswire/ -- Rocket Insights, part of Dept, today announced it has joined PathCheck Foundation's global partner program, PathCheck Alliance, as a founding member. Rocket Insights is working with PathCheck Foundation to develop and implement exposure notification and digital contact tracing solutions to help contain COVID-19.

PathCheck Foundation was spun out of MIT in March 2020 to build digital solutions to contain COVID-19 and revitalize the economy, while protecting individual privacy and liberty. PathCheck supports the Google Apple Exposure Notification system and a range of other technologies to help slow the spread of COVID-19. Teams in seven U.S. states and countries are implementing PathCheck technology to create exposure notification mobile apps for their communities, including Hawaii, Guam, Puerto Rico, and Cyprus.

"Early in the pandemic, our team was eager to find ways we could leverage our skills to help contain the spread of the virus," said Ashley Streb, partner at Rocket Insights. "We were introduced to the PathCheck Foundation and jumped at the opportunity to provide product strategy, design and engineering to support the foundation's vision of creating digital solutions to fight the pandemic."

PathCheck partnered with Rocket Insights to design and build an app for digital contact tracing using privacy-preserving GPS data. This was followed by a second, separate app for exposure notification using the Google Apple Exposure Notification framework. This app uses Bluetooth to anonymously and securely identify when phones have been near each other and potential COVID-19 exposures have occurred (note: these capabilities are entirely optional and privacy preserving). The apps enable communities to work together to stop the spread of the virus, and complement other public health strategies, including manual contact tracing efforts.

Rocket Insights is working with PathCheck to build more features into the system to support the full spectrum of digital pandemic response needs, including leveraging the company's experience in developing the PathCheck open source exposure notification app to help other states, governments and countries implement it within their communities.

"We have an incredible network of partners helping develop open source software that is being used by states and countries to help contain the pandemic," said Adam Berrey, chief executive officer at PathCheck Foundation. "Rocket Insights was one of our first partners to come on board, and without them, we would not have shipped our first apps."

Rocket Insights joins other founding PathCheck Alliance members, including Intel, Red Hat, Akamai, NTT DATA, Maximus, Extreme Solutions, AIO Digital, NextGenSys, Noveltech, RISE, KIOS, Nuland and Thoughtbot.

About Rocket Insights

Part of Dept, Rocket Insights is the fastest growing product agency in the United States, focused on creating beautiful apps for Mobile, Voice and the Web. Learn more at http://www.rocketinsights.com.

About PathCheck Foundation

Spun out of MIT, PathCheck Foundation is a nonprofit organization dedicated to containing the pandemic and revitalizing economies while preserving individual privacy and liberty. PathCheck builds open source software, advises health authorities and private sector organizations, and provides research and insights related to digital contact tracing, exposure notification, and digital public health solutions. With significant charitable funding, 1,800 volunteers, and a core team of technology and public health professionals, PathCheck Foundation is the leading nonprofit dedicated to creating technology solutions to stop the pandemic and build safe, healthy, resilient communities.

Press Contact: Kristin Cronin, Head of Marketing, Email: [email protected]

SOURCE Rocket Insights

https://www.rocketinsights.com


The popularity of EspoCRM is increasing due to the rise of digital transformation trends – PRNewswire

Apart from establishing new standards of living, the crisis has undoubtedly put digital technologies at the forefront of the modern economy. As a result, in addition to implementing quarantine measures and mobilizing their resources, countries worldwide have encouraged businesses to reorganize their working processes and to set a course for digitization. A prime example is Germany's strategy, where the local government has established programs to support businesses in these difficult times, promptly putting innovative digital solutions into practice.

In response to the situation, many companies have started making their first moves towards digitization by adopting open-source solutions to keep their businesses afloat. As practice shows, more often than not, businesses resort to open-source CRM implementation. The reason for this lies in its functionality and affordability. Open-source CRM software does not require a huge investment, and can be customized to fit the needs of a particular industry. Along with providing a broad set of features for sales and marketing automation, customer support, and accurate reporting, the platforms also allow companies to organize distance work in the short term, mitigate any hardships they encounter, and keep up with their projects while staying safe and healthy.

The modern IT market abounds with open-source solutions, including more than 10 open-source CRM projects. The source code, new releases, update packages, and bug fixes are published on such popular platforms as GitHub and SourceForge, where they can be downloaded for free by anyone. Many open-source CRMs have already become household names, widely recognizable all over the globe; some are young and dynamically evolving. One example of an actively developing open-source project with an enthusiastic community behind it is EspoCRM. It is a comprehensive customer relationship management solution that ensures a collaborative environment across your company's various departments, providing a complete, shared view of customers, leads, projects, and more. The software is regularly updated, and its new version, 6.0.0, has been announced for release in October 2020.

There is no doubt that this crisis has accelerated the digital transformation of business. Companies were forced to react quickly and start playing the game according to new rules. Implementing open-source software provides businesses an opportunity not only to survive the storm but also to gain the extra points needed to remain on top in the post-COVID-19 world.

About EspoCRM

EspoCRM is an open source customer relationship management (CRM) tool developed by Yuri Kuznetsov, Oleksii Avramenko, and Taras Machyshyn. The initial stable release of EspoCRM was on May 23rd, 2014. As open source software, it is available under GPLv3 (GNU General Public License version 3). EspoCRM is written in PHP and JavaScript; it is designed to be used on multiple operating systems, as it is cross-platform software.

Press Contact:

Oleksii Avramenko 408-320-0380 https://www.espocrm.com/

SOURCE EspoCRM
